Our kids make errors and stumble many times before they learn to do things right, and researchers at the University of California, Berkeley have shown that computers can learn the same way once programmed to do so.
UC Berkeley Professor Pieter Abbeel has developed new algorithms that replicate the human trial-and-error method in robots, which he said is a major milestone in the field of artificial intelligence.
The new algorithm enables robots to learn motor skills through a process that more closely resembles the way humans learn, called reinforcement learning. The researchers had a robot complete several tasks, ranging from putting a clothes hanger on a rack and assembling a toy plane to screwing a cap on a water bottle, all without task-specific pre-programming, relying only on observation of its surroundings.
Here is a video showing BRETT, a PR2 robot, successfully tested by scientists:
“The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it,” explains Abbeel, who plans to unveil his robot on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation (ICRA).
Abbeel, fellow faculty member Trevor Darrell, postdoctoral researcher Sergey Levine, and Ph.D. student Chelsea Finn worked on the project, which could change the way robots are programmed to undertake specific operations.
The work is part of a new People and Robots Initiative at UC’s Center for Information Technology Research in the Interest of Society (CITRIS), which seeks to keep the dizzying advances in artificial intelligence, robotics, and automation aligned with human needs.
Some past progress in the field was made with deep learning programs, which build “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether sound waves or image pixels. Apple’s Siri on the iPhone, Google’s speech-to-text program, and Google Street View are among the best-known examples of deep learning.
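The core idea of those stacked layers can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' actual system: the weights below are made up for the example, whereas real deep learning systems learn them from data such as image pixels or sound waves.

```python
import math

def layer(inputs, weights):
    """One layer of artificial neurons: weighted sums of the inputs
    followed by a sigmoid nonlinearity, one output per weight row."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# Two stacked layers: 3 raw inputs -> 2 hidden neurons -> 1 output.
# Weight values here are arbitrary, chosen only for illustration.
w1 = [[0.5, -0.2, 0.1],
      [0.3, 0.8, -0.5]]
w2 = [[1.0, -1.0]]

raw = [0.2, 0.9, 0.4]          # e.g. three pixel intensities
hidden = layer(raw, w1)        # first layer transforms the raw data
output = layer(hidden, w2)     # second layer processes the first's output
```

Each layer passes its output to the next, so later layers operate on increasingly abstract representations of the raw input rather than on the pixels or sound waves themselves.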
The UC Berkeley researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, for Berkeley Robot for the Elimination of Tedious Tasks, and gave it a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks, guided by a reward function that scores how well the robot is doing.
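A reward-driven trial-and-error loop can be sketched on a toy version of the block-matching task. This is only a hypothetical illustration: the four openings, the reward values, and the simple epsilon-greedy strategy are assumptions for the example, while BRETT's actual system learns from camera images and joint feedback with deep neural networks.

```python
import random

# Toy task: the "robot" must discover which of 4 openings matches
# its block. Only opening 2 yields a reward; all others score 0.
def reward(opening):
    return 1.0 if opening == 2 else 0.0

def learn(n_trials=500, epsilon=0.2, seed=0):
    """Trial and error: mostly exploit the best-scoring opening so far,
    but occasionally explore a random one; track average reward."""
    rng = random.Random(seed)
    values = [0.0] * 4   # estimated reward for each opening
    counts = [0] * 4
    for _ in range(n_trials):
        if rng.random() < epsilon:
            a = rng.randrange(4)                          # explore
        else:
            a = max(range(4), key=lambda i: values[i])    # exploit
        r = reward(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # running average
    return values

values = learn()
best = max(range(4), key=lambda i: values[i])
```

After enough trials the estimate for the rewarding opening dominates, so the learner settles on the correct action without ever being told which opening was right, the same feedback principle the researchers scaled up to real motor skills.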
“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch,” said Abbeel, who hopes this will become possible within the next five to ten years.