Artificial intelligence is not as smart as you (or Elon Musk) think

In March 2016, DeepMind’s AlphaGo beat Lee Sedol, then one of the strongest human Go players in the world. It was one of those defining technological moments, much like IBM’s Deep Blue beating chess champion Garry Kasparov in 1997, or IBM Watson beating the world’s greatest Jeopardy champions in 2011.

Yet these victories, as mind-blowing as they seemed, were more about training algorithms and applying brute-force computational strength than any real intelligence. Former MIT robotics professor Rodney Brooks, a co-founder of iRobot and later Rethink Robotics, reminded us at the TechCrunch Robotics Session at MIT last week that training an algorithm to play a difficult strategy game isn’t intelligence, at least as we think of it in humans.

He explained that as strong as AlphaGo was at its given task, it couldn’t actually do anything but play Go on a standard 19 x 19 board. He relayed that while speaking with the DeepMind team in London recently, he asked what would have happened had they changed the board to 29 x 29, and the AlphaGo team admitted that with even a slight change to the size of the board, “we would have been dead.”

“I think people see how well [an algorithm] performs at one task and they think it can do all the things around that, and it can’t,” Brooks explained.

Brute-force intelligence

As Kasparov pointed out in an interview with Devin Coldewey at TechCrunch Disrupt in May, it’s one thing to design a computer to play chess at Grandmaster level, but it’s another to call it intelligence in the pure sense. It’s simply throwing computing power at a problem and letting a machine do what it does best.

“In chess, machines dominate the game because of the brute force of calculation and they [could] crunch chess once the databases got big enough and hardware got fast enough and algorithms got smart enough, but there are still many things that humans understand. Machines don’t have understanding. They don’t recognize strategical patterns. Machines don’t have purpose,” Kasparov explained.

Gill Pratt, CEO of the Toyota Research Institute, a group inside Toyota working on artificial intelligence projects including household robots and autonomous cars, was also interviewed at the TechCrunch Robotics Session. He said that the fear we are hearing about from a wide range of people, including Elon Musk, who most recently called AI “an existential threat to humanity,” could stem from dystopian science-fiction depictions of artificial intelligence run amok.

I think it’s important to keep in context how good these systems are, and actually how bad they are too, and how long we have to go until these systems actually pose that kind of a threat [that Elon Musk and others talk about]

— Gill Pratt, CEO, Toyota Research Institute


“The deep learning systems we have, which is what sort of spurred all this…

