According to the MIT Technology Review, these 10 breakthrough technologies all have staying power. They will affect the economy and our politics, improve medicine, or influence our culture. Some are unfolding now; others will take a decade or more to develop. But you should know about all of them right now.
1. Reversing Paralysis
Scientists are making remarkable progress at using brain implants to restore the freedom of movement that spinal cord injuries take away.
Availability: 10 to 15 years
Scientists at the École Polytechnique Fédérale de Lausanne in Switzerland created implants that established a wireless connection between the brain and the legs of two partly paralyzed monkeys. The brain implant senses brain activity and passes those signals wirelessly to a second implant in the body below the point of spinal cord injury, which sends impulses that stimulate the muscles, Joinfo.com reports with reference to a publication in the journal Nature.
In this way, the implants restore the link in the chain that the trauma severed. The two rhesus macaques regained movement in their legs within two weeks of the injury; one of them was able to walk within six days. The scientists' main goal is to use this technology for humans with spinal injuries, though doing so will bring many more challenges than using it in monkeys.
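To make that chain concrete, here is a minimal sketch of the sense-decode-stimulate loop such a brain-spine interface implies. Everything in it (the function names, the threshold, the two gait phases) is a hypothetical illustration, not the EPFL team's actual decoder:

```python
import random

# Hypothetical sketch of a brain-spine interface control loop.
# Real decoders are trained on recorded neural data; here the signal
# is faked and decoded with a bare threshold, purely for illustration.

def read_motor_cortex():
    """Stand-in for sampling spike rates from the brain implant."""
    return random.uniform(0.0, 1.0)  # normalized firing rate

def decode_gait_intent(firing_rate, threshold=0.6):
    """Map neural activity to an intended gait phase."""
    return "swing" if firing_rate > threshold else "stance"

def stimulate_spinal_implant(phase):
    """Stand-in for wirelessly commanding the implant below the injury."""
    pulse = {"swing": "flexor burst", "stance": "extensor burst"}[phase]
    print(f"stimulating: {pulse}")

for _ in range(5):  # one iteration per control tick
    stimulate_spinal_implant(decode_gait_intent(read_motor_cortex()))
```

The point is the loop itself: read, decode, stimulate, repeat, fast enough to track the animal's intent.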
Results from Case Western Reserve University, pending publication in a medical journal, are part of a broader effort to use implanted electronics to restore various senses and abilities. Besides treating paralysis, scientists hope to use so-called neural prosthetics to reverse blindness with chips placed in the eye, and maybe restore memories lost to Alzheimer’s disease.
And they know it could work. Consider cochlear implants, which use a microphone to relay signals directly to the auditory nerve, routing around non-working parts of the inner ear. Videos of wide-eyed deaf children hearing their mothers for the first time go viral on the Internet every month. More than 250,000 cases of deafness have been treated.
2. Self-Driving Trucks
Tractor-trailers without a human at the wheel will soon barrel onto highways near you. What will this mean for the 1.7 million truck drivers in the U.S.?
Availability: 5 to 10 years
Could a computer do better at the wheel than a human trucker? Or would it do worse?
We will probably find out in the next few years, because multiple companies are now testing self-driving trucks. Although many technical problems are still unresolved, proponents claim that self-driving trucks will be safer and less costly. “This system often drives better than I do,” says Greg Murphy, who’s been a professional truck driver for 40 years. He now serves as a safety backup driver during tests of self-driving trucks by Otto, a San Francisco company that outfits trucks with the equipment needed to drive themselves.
At first glance, the opportunities and challenges posed by self-driving trucks might seem to merely echo those associated with self-driving cars. But trucks aren’t just long cars. For one thing, the economic rationale for self-driving trucks might be even stronger than the one for driverless cars. Autonomous trucks can coordinate their movements to platoon closely together over long stretches of highway, cutting down on wind drag and saving on fuel. And letting the truck drive itself part of the time figures to help truckers complete their routes sooner.
But the technological obstacles facing autonomous trucks are higher than the ones for self-driving cars. Companies will need to demonstrate that sensors and code can match the situational awareness of a professional trucker—skills honed by years of experience and training in piloting an easily destabilized juggernaut, with the momentum of 25 Honda Accords, in the face of confusing road hazards, poor surface conditions, and unpredictable car drivers.
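To unpack that comparison with round numbers (our arithmetic, not the article's): a fully loaded U.S. tractor-trailer is limited to about 36,000 kg, while a Honda Accord weighs roughly 1,450 kg, so at equal speed

$$\frac{p_{\text{truck}}}{p_{\text{Accord}}} = \frac{m_{\text{truck}}\,v}{m_{\text{Accord}}\,v} \approx \frac{36{,}000\ \text{kg}}{1{,}450\ \text{kg}} \approx 25.$$

Momentum scales linearly with mass, so the truck needs on the order of 25 times the braking impulse to shed the same speed.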
And perhaps most important, if self-driving trucks do take hold, they figure to be more controversial than self-driving cars. At a time when our politics and economy are already being upended by the threats that automation poses to jobs, self-driving trucks will affect an enormous number of blue-collar workers. There are 1.7 million trucking jobs in the U.S., according to the Bureau of Labor Statistics. Technology is unlikely to replace truckers entirely anytime soon. But it will almost certainly alter the nature of the job, and not necessarily in ways that all would welcome.
3. Paying with Your Face
Face-detecting systems in China now authorize payments, provide access to facilities, and track down criminals. Will other countries follow?
Availability: Now
“Shortly after walking through the door at Face++, a Chinese startup valued at roughly a billion dollars, I see my face, unshaven and looking a bit jet-lagged, flash up on a large screen near the entrance,” Will Knight reports.
“Having been added to a database, my face now provides automatic access to the building. It can also be used to monitor my movements through each room inside. As I tour the offices of Face++ (pronounced “face plus plus”), located in a suburb of Beijing, I see it appear on several more screens, automatically captured from countless angles by the company’s software. On one screen a video shows the software tracking 83 different points on my face simultaneously. It’s a little creepy, but undeniably impressive.
“Over the past few years, computers have become incredibly good at recognizing faces, and the technology is expanding quickly in China in the interest of both surveillance and convenience. Face recognition might transform everything from policing to the way people interact every day with banks, stores, and transportation services.
“Technology from Face++ is already being used in several popular apps. It is possible to transfer money through Alipay, a mobile payment app used by more than 120 million people in China, using only your face as credentials. Meanwhile, Didi, China’s dominant ride-hailing company, uses the Face++ software to let passengers confirm that the person behind the wheel is a legitimate driver. (A “liveness” test, designed to prevent anyone from duping the system with a photo, requires people being scanned to move their head or speak while the app scans them.)
“The technology figures to take off in China first because of the country’s attitudes toward surveillance and privacy. Unlike, say, the United States, China has a large centralized database of ID card photos. During my time at Face++, I saw how local governments are using its software to identify suspected criminals in video from surveillance cameras, which are omnipresent in the country. This is especially impressive—albeit somewhat dystopian—because the footage analyzed is far from perfect, and because mug shots or other images on file may be several years old.”
Facial recognition has existed for decades, but only now is it accurate enough to be used in secure financial transactions. The new versions use deep learning, an artificial-intelligence technique that is especially effective for image recognition because it makes a computer zero in on the facial features that will most reliably identify a person.
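In practice, that zeroing-in takes the form of an embedding: a deep network maps each face image to a vector, and two images match when their vectors point in nearly the same direction. The sketch below is a generic illustration of that idea, not Face++'s actual pipeline; the embed function is a stand-in for a trained convolutional network.

```python
import numpy as np

# Generic sketch of deep-learning face verification (not Face++'s system).
# embed() would normally be a trained convolutional network; here it is a
# placeholder that deterministically maps an image to a feature vector.

def embed(face_image):
    """Stand-in for a CNN mapping a face image to a 128-d embedding."""
    seed = abs(hash(face_image.tobytes())) % (2**32)
    return np.random.default_rng(seed).standard_normal(128)

def same_person(img_a, img_b, threshold=0.7):
    """Verify identity via cosine similarity of the two embeddings."""
    a, b = embed(img_a), embed(img_b)
    cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return cosine > threshold

enrolled = np.zeros((112, 112), dtype=np.uint8)  # ID photo on file
probe = np.zeros((112, 112), dtype=np.uint8)     # camera capture
print(same_person(enrolled, probe))              # identical images -> True
```

Verification then reduces to a single vector comparison, which is cheap enough to run at payment time.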
4. Practical Quantum Computers
Advances at Google, Intel, and several research groups indicate that computers with previously unimaginable power are finally within reach.
Availability: 4 to 5 years
One of the labs at QuTech, a Dutch research institute, is responsible for some of the world’s most advanced work on quantum computing, but it looks like an HVAC testing facility.
Every year quantum computing comes up as a candidate for this Breakthrough Technologies list, and every year we reach the same conclusion: not yet. Indeed, for years qubits and quantum computers existed mainly on paper, or in fragile experiments to determine their feasibility. (The Canadian company D-Wave Systems has been selling machines it calls quantum computers for a while, using a specialized technology called quantum annealing. The approach, skeptics say, is at best applicable to a very constrained set of computations and might offer no speed advantage over classical systems.) This year, however, a raft of previously theoretical designs are actually being built. Also new this year is the increased availability of corporate funding—from Google, IBM, Intel, and Microsoft, among others—for both research and the development of assorted technologies needed to actually build a working machine: microelectronics, complex circuits, and control software.
The project at QuTech in Delft, led by Leo Kouwenhoven, a professor who was recently hired by Microsoft, aims to overcome one of the longest-standing obstacles to building quantum computers: the fact that qubits, the basic units of quantum information, are extremely susceptible to noise and therefore error. For qubits to be useful, they must achieve both quantum superposition (a property something like being in two physical states simultaneously) and entanglement (a phenomenon where pairs of qubits are linked so that what happens to one can instantly affect the other, even when they’re physically separated). These delicate conditions are easily upset by the slightest disturbance, like vibrations or fluctuating electric fields.
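In standard textbook notation (nothing here is specific to the Delft project), a single qubit in superposition and an entangled pair look like this:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

$$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)$$

The complex weights α and β encode far more than a classical bit's two values, and measuring either qubit of the second state instantly fixes the other's outcome. Any stray vibration or field that nudges these amplitudes is exactly the noise-induced error described above.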
Quantum computers will be particularly suited to factoring large numbers (making it easy to crack many of today’s encryption techniques and probably providing uncrackable replacements), solving complex optimization problems, and executing machine-learning algorithms. And there will be applications nobody has yet envisioned.
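The factoring threat comes from Shor's algorithm, and its only quantum ingredient is period finding: computing the smallest r with a^r ≡ 1 (mod N). Everything else is classical, as the following sketch shows; the brute-force loop below is the step a quantum computer would replace with an exponentially faster subroutine.

```python
from math import gcd

# Classical skeleton of Shor's algorithm. find_period() is brute force
# here; a quantum computer accelerates exactly that step, which is the
# entire source of the speedup.

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (the quantum-accelerated step)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n, a=2):
    g = gcd(a, n)
    if g != 1:                      # lucky guess: a shares a factor with n
        return g, n // g
    r = find_period(a, n)
    if r % 2 == 1:
        raise ValueError("odd period; retry with a different a")
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    if p in (1, n):
        raise ValueError("trivial factor; retry with a different a")
    return p, q

print(factor(15))  # (3, 5)
```

Because the classical period-finding loop takes time exponential in the number of digits while the quantum version does not, RSA-style keys that are safe today would fall to a large enough quantum machine.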
5. The 360-Degree Selfie
Inexpensive cameras that make spherical images are opening a new era in photography and changing the way people share stories.
Availability: Now
Seasonal changes to vegetation fascinate Koen Hufkens. So last fall Hufkens, an ecological researcher at Harvard, devised a system to continuously broadcast images from a Massachusetts forest to a website called VirtualForest.io. And because he used a camera that creates 360° pictures, visitors can do more than just watch the feed; they can use their mouse cursor (on a computer) or finger (on a smartphone or tablet) to pan around the image in a circle or scroll up to view the forest canopy and down to see the ground. If they look at the image through a virtual-reality headset they can rotate the photo by moving their head, intensifying the illusion that they are in the woods.
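The panning works because most 360° cameras store the sphere as an equirectangular image: yaw (compass direction) maps linearly to the x axis and pitch (up and down) to the y axis. A minimal sketch, with illustrative image dimensions:

```python
# Spherical photos are typically stored as 2:1 equirectangular images.
# Panning the view is just re-sampling the image around a moving center.

WIDTH, HEIGHT = 4096, 2048  # illustrative equirectangular frame size

def view_center_to_pixel(yaw_deg, pitch_deg):
    """Map a view direction to the pixel at the center of the view.
    yaw in [-180, 180), pitch in [-90, 90] with up positive."""
    x = (yaw_deg + 180.0) / 360.0 * WIDTH
    y = (90.0 - pitch_deg) / 180.0 * HEIGHT
    return int(x) % WIDTH, min(int(y), HEIGHT - 1)

print(view_center_to_pixel(0, 0))    # straight ahead -> image center
print(view_center_to_pixel(90, 45))  # looking right and upward
```

A viewer or VR headset continuously feeds the current yaw and pitch into a mapping like this and redraws the visible window, which is why dragging a cursor or turning your head feels like looking around.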
Hufkens says the project will allow him to document how climate change is affecting leaf development in New England. The total cost? About $550, including $350 for the Ricoh Theta S camera that takes the photos.
Today, anyone can buy a decent 360° camera for less than $500, record a video within minutes, and upload it to Facebook or YouTube. Much of this amateur 360° content is blurry; some of it captures 360 degrees horizontally but not vertically; and most of it is mundane. But the best user-generated 360° photos and videos—such as the Virtual Forest—deepen the viewer’s appreciation of a place or an event.
Journalists from the New York Times and Reuters are using $350 Samsung Gear 360 cameras to produce spherical photos and videos that document everything from hurricane damage in Haiti to a refugee camp in Gaza. One New York Times video that depicts people in Niger fleeing the militant group Boko Haram puts you in the center of a crowd receiving food from aid groups. You start by watching a man heaving sacks off a pickup truck and hearing them thud onto the ground. When you turn your head, you see the throngs that have gathered to claim the food and the makeshift carts they will use to transport it. The 360° format is so compelling that it could become a new standard for raw footage of news events—something that Twitter is trying to encourage by enabling live spherical videos in its Periscope app.
6. Hot Solar Cells
By converting heat to focused beams of light, a new solar device could create cheap and continuous power.
Availability: 10 to 15 years
Solar panels cover a growing number of rooftops, but even decades after they were first developed, the slabs of silicon remain bulky, expensive, and inefficient. Fundamental limitations prevent these conventional photovoltaics from absorbing more than a fraction of the energy in sunlight.
But a team of MIT scientists has built a different sort of solar energy device that uses inventive engineering and advances in materials science to capture far more of the sun’s energy. The trick is to first turn sunlight into heat and then convert it back into light, but now focused within the spectrum that solar cells can use. While various researchers have been working for years on so-called solar thermophotovoltaics, the MIT device is the first one to absorb more energy than its photovoltaic cell alone, demonstrating that the approach could dramatically increase efficiency.
Standard silicon solar cells mainly capture visible light, from violet to red. That and other factors mean they can never turn more than around 32 percent of the energy in sunlight into electricity. The MIT device is still a crude prototype, operating at just 6.8 percent efficiency—but with various enhancements it could be roughly twice as efficient as conventional photovoltaics.
The key step in creating the device was the development of something called an absorber-emitter. It essentially acts as a light funnel above the solar cells. The absorbing layer is built from solid black carbon nanotubes that capture all the energy in sunlight and convert most of it into heat. As temperatures reach around 1,000 °C, the adjacent emitting layer radiates that energy back out as light, now mostly narrowed to bands that the photovoltaic cells can absorb. The emitter is made from a photonic crystal, a structure that can be designed at the nanoscale to control which wavelengths of light flow through it. Another critical advance was the addition of a highly specialized optical filter that transmits the tailored light while reflecting nearly all the unusable photons back. This “photon recycling” produces more heat, which generates more of the light that the solar cell can absorb, improving the efficiency of the system.
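A back-of-the-envelope calculation shows why that narrowing matters. Integrating the Planck blackbody spectrum at about 1,000 °C and asking what fraction of the emitted power is carried by photons energetic enough for the cell to convert gives a number like the one below; the 0.55 eV bandgap is an illustrative assumption, not the MIT team's published figure.

```python
import numpy as np

# Fraction of blackbody radiant power carried by photons above a PV
# cell's bandgap, i.e. photons the cell can convert. The bandgap value
# is an illustrative assumption.

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck(nu, t):
    """Blackbody spectral radiance at frequency nu and temperature t."""
    return (2 * H * nu**3 / C**2) / np.expm1(H * nu / (KB * t))

T = 1273.0                             # about 1,000 degrees C, in kelvin
E_GAP = 0.55 * 1.602e-19               # assumed bandgap, in joules
nu = np.linspace(1e12, 1e15, 200_000)  # frequency grid, Hz

power = planck(nu, T)
fraction = power[H * nu >= E_GAP].sum() / power.sum()
print(f"fraction of emitted power above the bandgap: {fraction:.2f}")
```

For a bare blackbody at this temperature, only about a quarter of the power comes out above such a bandgap; the photonic-crystal emitter and the reflective filter exist precisely to push that usable fraction up by suppressing or recycling everything below the gap.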
There are some downsides to the MIT team’s approach, including the relatively high cost of certain components. It also currently works only in a vacuum. But the economics should improve as efficiency levels climb, and the researchers now have a clear path to achieving that. “We can further tailor the components now that we’ve improved our understanding of what we need to get to higher efficiencies,” says Evelyn Wang, an associate professor who helped lead the effort.
7. Gene Therapy 2.0
Scientists have solved fundamental problems that were holding back cures for rare hereditary disorders. Next we’ll see if the same approach can take on cancer, heart disease, and other common illnesses.
Availability: Now
When Kala Looks gave birth to fraternal twin boys in January 2015, she and her husband, Philip, had no idea that one of them was harboring a deadly mutation in his genes.
At three months old, their son Levi was diagnosed with severe combined immune deficiency, or SCID, which renders the body defenseless against infections. Levi’s blood had only a few immune cells essential to fighting disease. Soon he would lose them and have no immune system at all.
Kala and Philip frantically began sanitizing their home to keep Levi alive. They got rid of the family cat, sprayed every surface with Lysol, and boiled the twins’ toys in hot water. Philip would strap on a surgical mask when he came home from work.
At first, Kala and Philip thought their only option was to get Levi a bone marrow transplant, but they couldn’t find a match for him. Then they learned about an experimental gene therapy at Boston Children’s Hospital. It was attempting to treat children like Levi by replacing the gene responsible for destroying his immune system.
Levi got an infusion of the therapy into his veins. He has been a normal boy ever since—and he has even grown larger than his twin brother. Babies born with SCID typically did not survive past the age of two. Now, a one-time treatment offers a cure for patients like Levi Looks.
Researchers have been chasing the dream of gene therapy for decades. The idea is elegant: use an engineered virus to deliver healthy copies of a gene into patients with defective versions. But until recently it had produced more disappointments than successes. The entire field was slowed in 1999 when an 18-year-old patient with a liver disease, Jesse Gelsinger, died in a gene-therapy experiment.
But now, crucial puzzles have been solved and gene therapies are on the verge of curing devastating genetic disorders. Two gene therapies for inherited diseases—Strimvelis for a form of SCID and Glybera for a disorder that makes fat build up in the bloodstream—have won regulatory approval in Europe. In the United States, Spark Therapeutics could be the first to market; it has a treatment for a progressive form of blindness. Other gene therapies in development point to a cure for hemophilia and relief from an incapacitating skin disorder called epidermolysis bullosa.
Fixing rare diseases, impressive in its own right, could be just the start. Researchers are studying gene therapy in clinical trials for about 40 to 50 different diseases, says Maria-Grazia Roncarolo, a pediatrician and scientist at Stanford University who led early gene-therapy experiments in Italy that laid the foundation for Strimvelis. That’s up from just a few conditions 10 years ago. And in addition to treating disorders caused by malfunctions in single genes, researchers are looking to engineer these therapies for more common diseases, like Alzheimer’s, diabetes, heart failure, and cancer. Harvard geneticist George Church has said that someday, everyone may be able to take gene therapy to combat the effects of aging.
8. The Cell Atlas
Biology’s next mega-project will find out what we’re really made of.
Availability: 5 years
In 1665, Robert Hooke peered down his microscope at a piece of cork and discovered little boxes that reminded him of rooms in a monastery. As the first scientist to describe cells, Hooke would be amazed by biology’s next mega-project: a scheme to individually capture and scrutinize millions of cells using the most powerful tools in modern genomics and cell biology.
The objective is to construct the first comprehensive “cell atlas,” or map of human cells, a technological marvel that should comprehensively reveal, for the first time, what human bodies are actually made of and provide scientists a sophisticated new model of biology that could speed the search for drugs.
To perform the task of cataloguing the 37.2 trillion cells of the human body, an international consortium of scientists from the U.S., U.K., Sweden, Israel, the Netherlands, and Japan is being assembled to assign each a molecular signature and also give each type a zip code in the three-dimensional space of our bodies.
“We will see some things that we expect, things we know to exist, but I’m sure there will be completely novel things,” says Mike Stubbington, head of the cell atlas team at the Sanger Institute in the U.K. “I think there will be surprises.”
Previous attempts at describing cells, from the hairy neurons that populate the brain and spinal cord to the glutinous fat cells of the skin, suggest there are about 300 variations in total. But the true figure is undoubtedly larger. Analyzing molecular differences between cells has already revealed, for example, two new types of retinal cells that escaped decades of investigation of the eye; a cell that forms the first line of defense against pathogens and makes up four in every 10,000 blood cells; and a newly spotted immune cell that uniquely produces a steroid that appears to suppress the immune response.
Three technologies are coming together to make this new type of mapping possible. The first is known as “cellular microfluidics.” Individual cells are separated, tagged with tiny beads, and manipulated in droplets of oil that are shunted like cars down the narrow, one-way streets of artificial capillaries etched into a tiny chip, so they can be corralled, cracked open, and studied one by one.
The second is the ability to identify the genes active in single cells by decoding them in superfast and efficient sequencing machines at a cost of just a few cents per cell. One scientist can now process 10,000 cells in a single day.
The third technology uses novel labeling and staining techniques that can locate each type of cell—on the basis of its gene activity—at a specific zip code in a human organ or tissue.
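Computationally, the atlas's core step is grouping cells by their gene-activity profiles so that each group is a candidate cell type. Here is a toy sketch of that step, with synthetic counts standing in for real single-cell sequencing data and scikit-learn assumed to be available:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy version of the atlas's core computation: cluster cells by gene
# expression so each cluster is a candidate cell type. Synthetic counts
# stand in for real single-cell RNA-sequencing data.

rng = np.random.default_rng(0)
n_cells, n_genes, n_types = 300, 2000, 3

# Simulate three cell types, each with its own expression signature.
signatures = rng.poisson(5.0, size=(n_types, n_genes))
true_types = rng.integers(0, n_types, size=n_cells)
counts = rng.poisson(signatures[true_types])

# Standard pipeline: log-normalize, reduce dimensions, then cluster.
log_counts = np.log1p(counts)
reduced = PCA(n_components=20, random_state=0).fit_transform(log_counts)
clusters = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(reduced)

print("cells per inferred type:", np.bincount(clusters))
```

The real project layers on the other two technologies: microfluidics to isolate and barcode the cells that feed this matrix, and staining to place each inferred type back at its zip code in the tissue.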
Behind the cell atlas are big-science powerhouses including Britain’s Sanger Institute, the Broad Institute of MIT and Harvard, and a new “Biohub” in California funded by Facebook CEO Mark Zuckerberg. In September Zuckerberg and his wife, Priscilla Chan, made the cell atlas the inaugural target of a $3 billion donation to medical research.
9. Botnets of Things
The relentless push to add connectivity to home gadgets is creating dangerous side effects that figure to get even worse.
Availability: Now
Botnets have existed for at least a decade. As early as 2000, hackers were breaking into computers over the Internet and controlling them en masse from centralized systems. Among other things, the hackers used the combined computing power of these botnets to launch distributed denial-of-service attacks, which flood websites with traffic to take them down.
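From the target's side, such an attack shows up as an abnormal request rate. A minimal sliding-window flood detector, with a hypothetical window and threshold, illustrates the idea:

```python
import time
from collections import defaultdict, deque

# Minimal illustration of what a volumetric denial-of-service attack
# looks like to a defender: count recent requests per source and flag
# floods. Window and threshold are hypothetical.

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def is_flooding(source_ip, now=None):
    """Record one request from source_ip; return True once it floods."""
    now = time.monotonic() if now is None else now
    q = recent[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # drop stale timestamps
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Simulate one bot firing 150 requests within a single second.
flagged = [is_flooding("203.0.113.7", now=t / 150) for t in range(150)]
print("flood detected:", any(flagged))  # True once the cap is crossed
```

The catch, and the reason attacks like the one on Dyn succeed, is scale: a botnet spreads the load across hundreds of thousands of sources, each of which can stay under any per-source cap.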
But now the problem is getting worse, thanks to a flood of cheap webcams, digital video recorders, and other gadgets in the “Internet of things.” Because these devices typically have little or no security, hackers can take them over with little effort. And that makes it easier than ever to build huge botnets that take down much more than one site at a time.
In October, a botnet made up of 100,000 compromised gadgets knocked an Internet infrastructure provider partially offline. Taking down that provider, Dyn, resulted in a cascade of effects that ultimately caused a long list of high-profile websites, including Twitter and Netflix, to temporarily disappear from the Internet. More attacks are sure to follow: the botnet that attacked Dyn was created with publicly available malware called Mirai that largely automates the process of co-opting computers.
The best defense would be for everything online to run only secure software, so botnets couldn’t be created in the first place. This isn’t going to happen anytime soon. Internet of things devices are not designed with security in mind and often have no way of being patched. The things that have become part of Mirai botnets, for example, will be vulnerable until their owners throw them away. Botnets will get larger and more powerful simply because the number of vulnerable devices will go up by orders of magnitude over the next few years.
10. Reinforcement Learning
By experimenting, computers are figuring out how to do things that no programmer could teach them.
Availability: 1 to 2 years
An approach known as reinforcement learning is largely how AlphaGo, a program developed by DeepMind, a subsidiary of Alphabet, mastered the impossibly complex board game Go and beat one of the best human players in the world in a high-profile match last year. Now reinforcement learning may soon inject greater intelligence into much more than games. In addition to improving self-driving cars, the technology can get a robot to grasp objects it has never seen before, and it can figure out the optimal configuration for the equipment in a data center.
Reinforcement learning copies a very simple principle from nature, one the psychologist Edward Thorndike documented more than 100 years ago. Thorndike placed cats inside boxes from which they could escape only by pressing a lever. After a considerable amount of pacing around and meowing, the animals would eventually step on the lever by chance. Once they learned to associate this behavior with the desired outcome, they escaped with increasing speed.
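Thorndike's cats map almost directly onto the modern algorithm. Below is a minimal tabular Q-learning sketch of the puzzle box; the states, actions, rewards, and learning rate are all invented for illustration:

```python
import random

# Tabular Q-learning on a toy "cat in a puzzle box" task: the agent
# must learn that pressing the lever, and nothing else, opens the box.
# Actions, reward, and hyperparameters are invented for illustration.

ACTIONS = ["pace", "meow", "press_lever"]
ALPHA, EPSILON = 0.5, 0.1  # learning rate, exploration rate

q = {a: 0.0 for a in ACTIONS}  # one state, so a single row of Q-values

def step(action):
    """Environment: only the lever yields reward (escape)."""
    return (1.0, True) if action == "press_lever" else (0.0, False)

random.seed(0)
for episode in range(200):
    done = False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore at random (the cat's aimless pacing).
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward, done = step(action)
        q[action] += ALPHA * (reward - q[action])  # Q-value update

print(q)  # press_lever ends up with by far the highest value
```

Early episodes are long, just like Thorndike's early trials: the agent stumbles onto the lever only by chance. But each success nudges the lever's value upward, and once it exceeds the alternatives the agent presses it immediately, which is the increasing speed Thorndike observed.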
Some of the very earliest artificial-intelligence researchers believed that this process might be usefully reproduced in machines. In 1951, Marvin Minsky, a student at Harvard who would become one of the founding fathers of AI as a professor at MIT, built a machine that used a simple form of reinforcement learning to mimic a rat learning to navigate a maze. Minsky’s Stochastic Neural Analogy Reinforcement Computer, or SNARC, consisted of dozens of tubes, motors, and clutches that simulated the behavior of 40 neurons and synapses. As a simulated rat made its way out of a virtual maze, the strength of some synaptic connections would increase, thereby reinforcing the underlying behavior.
There were few successes over the next few decades. In 1992, Gerald Tesauro, a researcher at IBM, demonstrated a program that used the technique to play backgammon. It became skilled enough to rival the best human players, a landmark achievement in AI. But reinforcement learning proved difficult to scale to more complex problems. “People thought it was a cool idea that didn’t really work,” says David Silver, a researcher at DeepMind in the U.K. and a leading proponent of reinforcement learning today.
That view changed dramatically in March 2016, however. That’s when AlphaGo, a program trained using reinforcement learning, destroyed one of the best Go players of all time, South Korea’s Lee Sedol. The feat was astonishing, because it is virtually impossible to build a good Go-playing program with conventional programming. Not only is the game extremely complex, but even accomplished Go players may struggle to say why certain moves are good or bad, so the principles of the game are difficult to write into code. Most AI researchers had expected that it would take a decade for a computer to play the game as well as an expert human.