FIVE YEARS AGO, the coders at DeepMind, a London-based artificial intelligence company, watched excitedly as an AI taught itself to play a classic arcade game. They’d used the hot technique of the day, deep learning, on a seemingly whimsical task: mastering Breakout, the Atari game in which you bounce a ball at a wall of bricks, trying to make each one vanish.
Deep learning is self-education for machines; you feed an AI huge amounts of data, and eventually it begins to discern patterns all by itself. In this case, the data was the activity on the screen—blocky pixels representing the bricks, the ball, and the player’s paddle. The DeepMind AI, a so-called neural network made up of layered algorithms, wasn’t programmed with any knowledge about how Breakout works: its rules, its goals, or even how to play it. The coders just let the neural net examine the results of each action, each bounce of the ball. Where would it lead?
To some very impressive skills, it turns out. During the first few games, the AI flailed around. But after playing a few hundred times, it had begun accurately bouncing the ball. By the 600th game, the neural net had discovered an expert move used by skilled human Breakout players: chipping through an entire column of bricks and setting the ball bouncing merrily along the top of the wall.
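The trial-and-error scheme at work here is known as reinforcement learning: the system is told nothing but a score, and gradually learns which actions raise it. As a rough illustration only—a toy tabular Q-learner on an invented five-column catch game, nothing remotely like DeepMind's actual deep Q-network, with every name and parameter below made up for the sketch—here is the bare idea of behavior emerging from reward alone:

```python
import random

# Toy analogue of learning a game purely from trial and error. A ball
# appears in one of 5 columns; a paddle starts in column 2 and gets 4
# moves (left, stay, right). The agent sees only states and a reward of
# +1 for ending under the ball -- it is never told the rules or the goal.
N_COLS, N_STEPS, ACTIONS = 5, 4, (-1, 0, 1)

def run(episodes=2000, eps=0.2, alpha=0.5, gamma=0.9):
    random.seed(0)
    q = {}  # Q-table: state (ball, paddle, step) -> one value per action

    def qvals(s):
        return q.setdefault(s, [0.0] * len(ACTIONS))

    def greedy(s):  # best-known action, breaking ties at random
        vals = qvals(s)
        best = max(vals)
        return random.choice([i for i, v in enumerate(vals) if v == best])

    for _ in range(episodes):
        ball, paddle = random.randrange(N_COLS), 2
        for step in range(N_STEPS):
            s = (ball, paddle, step)
            a = random.randrange(len(ACTIONS)) if random.random() < eps else greedy(s)
            paddle = min(N_COLS - 1, max(0, paddle + ACTIONS[a]))
            done = step == N_STEPS - 1
            reward = 1.0 if done and paddle == ball else 0.0
            # Q-learning update: nudge this action's value toward the
            # reward plus the discounted best value of the next state.
            target = reward if done else reward + gamma * max(qvals((ball, paddle, step + 1)))
            qvals(s)[a] += alpha * (target - qvals(s)[a])

    # Evaluate the learned greedy policy from every starting position.
    catches = 0
    for ball in range(N_COLS):
        paddle = 2
        for step in range(N_STEPS):
            a = greedy((ball, paddle, step))
            paddle = min(N_COLS - 1, max(0, paddle + ACTIONS[a]))
        catches += paddle == ball
    return catches

print(run())  # how many of the 5 ball positions the trained paddle catches
```

Tellingly, the learned Q-table is a lookup of exact situations: widen the board or move the paddle's starting column and it must be retrained from scratch—a tabletop version of the brittleness described below.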
“That was a big surprise for us,” Demis Hassabis, CEO of DeepMind, said at the time. “The strategy completely emerged from the underlying system.” The AI had shown itself capable of what seemed to be an unusually subtle piece of humanlike thinking, a grasping of the inherent concepts behind Breakout. Because neural nets loosely mirror the structure of the human brain, the theory was that they should mimic, in some respects, our own style of cognition. This moment seemed to serve as proof that the theory was right.
Then, last year, computer scientists at Vicarious, an AI firm in San Francisco, offered an interesting reality check. They took an AI like the one used by DeepMind and trained it on Breakout. It played great. But then they slightly tweaked the layout of the game. They raised the paddle higher in one iteration; in another, they added an unbreakable area in the center of the blocks.
A human player would be able to quickly adapt to these changes; the neural net couldn’t. The seemingly supersmart AI could play only the exact style of Breakout it had spent hundreds of games mastering. It couldn’t handle something new.
“We humans are not just pattern recognizers,” Dileep George, a computer scientist who cofounded Vicarious, tells me. “We’re also building models about the things we see. And these are causal models—we understand about cause and effect.” Humans engage in reasoning, making logical inferences about the world around us; we have a store of common-sense knowledge that helps us figure out new situations. When we see a game of Breakout that’s a little different from the one we just played, we realize it’s likely to have mostly the same rules and goals. The neural net, on the other hand, hadn’t understood anything about Breakout. All it could do was follow the pattern. When the pattern changed, it was helpless.
Deep learning is the reigning monarch of AI. In the six years since it exploded into the mainstream, it has become the dominant way to help machines sense and perceive the world around them. It powers Alexa’s speech recognition, Waymo’s self-driving cars, and Google’s on-the-fly translations. Uber is in some respects a giant optimization problem, using machine learning to figure out where riders will need cars. Baidu, the Chinese tech giant, has more than 2,000 engineers cranking away on neural net AI. For years, it seemed as though deep learning would only keep getting better, leading inexorably to a machine with the fluid, supple intelligence of a person.
But some heretics argue that deep learning is hitting a wall. They say that, on its own, it’ll never produce generalized intelligence, because truly humanlike intelligence isn’t just pattern recognition. We need to start figuring out how to imbue AI with everyday common sense, the stuff of human smarts. If we don’t, they warn, we’ll keep bumping up against the limits of deep learning, like visual-recognition systems that can be easily fooled by changing a few inputs, making a deep-learning model think a turtle is a rifle. But if we succeed, they say, we’ll witness an explosion of safer, more useful devices—health care robots that navigate a cluttered home, fraud detection systems that don’t trip on false positives, medical breakthroughs powered by machines that ponder cause and effect in disease.
But what does true reasoning look like in a machine? And if deep learning can’t get us there, what can?
Clive Thompson (@pomeranian99) is a columnist for WIRED.