Last week at RoboBusiness, I was fortunate to hear James Kuffner, CTO of the Toyota Research Institute (TRI), present his vision of a million connected robots. If this premise sounds scary, I am sorry. Halloween is just around the corner. Seriously, though, it is worth digging into in greater detail.
According to Kuffner, connectivity speeds are as important to cloud robotics as the growth of processing speeds or Moore’s Law. “An unknown that some people are not aware of is how rapidly wireless broadband internet speeds and bandwidth have improved, almost 1600x in terms of both speed and bandwidth over the last 10 years,” Kuffner declared on stage. “It’s really been a game-changer for connected cars and connected robots, and so one of the things I have been thinking about is what changes and what is possible when you have these high-speed wireless connections to the most powerful computational resources that humans have created, which is the modern data center.”
Kuffner first coined the “cloud robotics” concept in 2010 while at Google, as a shift in thinking about robot development. By utilizing the cloud, future robots will be able to learn quickly across a network of connected autonomous systems, drawing on lessons gathered at individual endpoints and absorbed into the whole.
“Instead of having one robot run for 10,000 hours, why don’t 100 robots run for 100 hours and gather the same amount of data,” Kuffner said. “Data-sharing robots, each learning from each other’s experiences, will help everyone improve at a faster rate.” This applies equally to connected cars and to connected in-home or personal-assistance robots sharing their data, he added.
Kuffner was hired away from Google in January to lead Toyota’s research endeavors into the future, with aging in place as the first problem to tackle. In his own words, “when we think about robotics, we think about aging in place, and quality of life. On the technical side, we have a lot of challenges in terms of reliable perception, reasoning and scene understanding in order to realize a true transportation solution, and an intelligent robot that can help people age in place.”
How does this relate to cars? Kuffner explains, “Transportation has always been about freedom and mobility, and people aging and losing their ability to drive means that they have less freedom and less mobility… So one of the good outcomes of having autonomy for vehicles is people can suddenly recover freedom of mobility for people who otherwise cannot drive.”
Back in Mountain View, Kuffner’s former employer, Google, has made great progress on its own robotic cloud under its new leader, Sergey Levine of the Google Brain team, which announced this past Monday that it had completed a series of three experiments demonstrating how the cloud can be used for “general-purpose skill learning across multiple robots.”
The first experiment involved robots learning motor skills directly from trial-and-error practice. Each robot started with a copy of a neural network as it attempted to open a door over and over. At regular intervals, the robots sent data about their performance to a central server, which used the data to build a new neural network that better captured how action and success were related. The server then sent the updated neural network back to the robots.
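To make that loop concrete, here is a minimal sketch of the collect-pool-update cycle. It is not TRI’s or Google’s actual code: the ToyDoorEnv class, the reward-weighted update, and the simple vector “policy” standing in for the neural network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyDoorEnv:
    """Hypothetical stand-in for the door-opening task: success is higher
    the closer an action vector is to a hidden 'correct' handle motion."""
    def __init__(self, target):
        self.target = target

    def try_action(self, action):
        # Success signal: negative squared distance to the ideal motion.
        return -np.sum((action - self.target) ** 2)

def collect_experience(policy, env, n_trials=20, noise=0.5):
    """One robot's trial and error: perturb the current policy and
    record which actions worked better."""
    actions, rewards = [], []
    for _ in range(n_trials):
        action = policy + noise * rng.normal(size=policy.shape)
        actions.append(action)
        rewards.append(env.try_action(action))
    return np.array(actions), np.array(rewards)

def server_update(policy, all_actions, all_rewards, lr=0.5):
    """Central server: combine every robot's data and move the shared
    policy toward the reward-weighted average of the attempted actions."""
    weights = np.exp(all_rewards - all_rewards.max())
    weights /= weights.sum()
    best_guess = (weights[:, None] * all_actions).sum(axis=0)
    return policy + lr * (best_guess - policy)

# One shared policy, several robots, several rounds of pooling.
target = np.array([1.0, -2.0, 0.5])            # hidden ideal motion
robots = [ToyDoorEnv(target) for _ in range(4)]
policy = np.zeros(3)                           # initial shared "network"

for round_ in range(10):
    batches = [collect_experience(policy, env) for env in robots]
    all_actions = np.concatenate([a for a, _ in batches])
    all_rewards = np.concatenate([r for _, r in batches])
    policy = server_update(policy, all_actions, all_rewards)
    print(round_, np.round(policy, 2))
```

The point of the sketch is the data flow, not the learning rule: each robot explores locally, the server aggregates everyone’s attempts into one improved policy, and that policy is broadcast back for the next round.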
In the second scenario, the researchers wanted robots to learn how to interact with objects not only through trial-and-error but also by creating internal models of the objects, the environment, and their behaviors. Just as with the door opening task, each robot started with its own copy of a neural network as it “played” with a variety of household objects. The robots then shared their experiences with each other and together built what the researchers describe as a “single predictive model” that gives them an implicit understanding of the physics involved in interacting with the objects.
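Below is a similarly hedged sketch of the second idea: pooling “play” data from several robots to fit one shared predictive model. The linear dynamics fit is a deliberate simplification of the neural network described in the post, and every name in it (true_dynamics, play) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(state, action):
    """Hypothetical 'physics' each robot experiences when pushing an object."""
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.2]])
    return state @ A.T + action @ B.T

def play(n_steps=200):
    """One robot 'playing': random pokes at objects, recording what happened."""
    states = rng.normal(size=(n_steps, 2))
    actions = rng.normal(size=(n_steps, 1))
    next_states = true_dynamics(states, actions) + 0.01 * rng.normal(size=(n_steps, 2))
    return states, actions, next_states

# Several robots gather their own play data, then share it.
experience = [play() for _ in range(5)]
S = np.concatenate([s for s, _, _ in experience])
A = np.concatenate([a for _, a, _ in experience])
S_next = np.concatenate([n for _, _, n in experience])

# A single predictive model fit on the pooled data: next_state ~ [S, A] @ W
X = np.hstack([S, A])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)

# Any robot can now predict the outcome of an action before trying it.
prediction = np.hstack([np.array([[0.5, -0.3]]), np.array([[1.0]])]) @ W
print(prediction)
```

The design choice to illustrate is that the model is singular and shared: every robot contributes experience, and every robot can then query the same implicit “physics” when deciding what to do next.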
The final experiment involved robots learning skills with help from humans. The idea is that people have a lot of intuition about their interactions with objects and the world, and that by assisting robots with manipulation skills we could transfer some of this intuition and let them learn those skills faster. In the experiment, a researcher helped a group of robots open different doors while a single neural network on a central server encoded their experiences. Next, the robots performed a series of trial-and-error repetitions that were progressively more difficult, helping to improve the network.
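One more hedged sketch, this time of the demonstration-plus-curriculum idea: a shared policy is seeded from human-guided episodes on an easy door, then refined by trial and error on progressively harder variants. Again, the stiffness parameter, demo_actions, and refine are invented for illustration and are not from the researchers’ code.

```python
import numpy as np

rng = np.random.default_rng(2)

def demo_actions(door_stiffness, n=10):
    """Human-guided episodes: near-ideal actions for an easy door (hypothetical)."""
    ideal = np.array([1.0, -2.0, 0.5]) * door_stiffness
    return ideal + 0.05 * rng.normal(size=(n, 3))

def refine(policy, door_stiffness, n_trials=30, noise=0.3, lr=0.5):
    """Trial-and-error refinement on one door, using the same
    reward-weighted update as the earlier sketch."""
    ideal = np.array([1.0, -2.0, 0.5]) * door_stiffness
    actions = policy + noise * rng.normal(size=(n_trials, 3))
    rewards = -np.sum((actions - ideal) ** 2, axis=1)
    w = np.exp(rewards - rewards.max())
    w /= w.sum()
    return policy + lr * ((w[:, None] * actions).sum(axis=0) - policy)

# Seed the shared policy from the demonstrations (easy door),
# then run a curriculum of gradually more difficult doors.
policy = demo_actions(door_stiffness=1.0).mean(axis=0)
for stiffness in [1.2, 1.5, 2.0]:
    for _ in range(20):
        policy = refine(policy, stiffness)
print(np.round(policy, 2))
```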
According to the Brain Team’s blog post, all three experiments showed that the robots’ ability to communicate and exchange their experiences enables them to learn more quickly and effectively. This becomes particularly important when we combine robotic learning with deep learning, as is the case in all of the experiments discussed above. We have seen before that deep learning works best when provided with ample training data. For example, the popular ImageNet benchmark uses over 1.5 million labeled examples. While such a quantity of data is not impossible for a single robot to gather over a few years, it is much more efficient to gather the same volume of experience from multiple robots over the course of a few weeks. Besides faster learning times, this approach might benefit from the greater diversity of experience: a real-world deployment might involve multiple robots in different places and different settings, sharing heterogeneous, varied experiences to build a single, highly generalizable representation.
According to the researchers, “given that this updated network is a bit better at estimating the true value of actions in the world, the robots will produce better behavior … This cycle can then be repeated to continue improving on the task.”
In other SkyNet news, TRI and Google have inspired startups to pursue the robotic cloud as well. Just last month, Rapyuta Robotics, an ETH Zurich spin-off, received $10 million in Series A funding from Japan-based SBI Investments Co. According to its website, Rapyuta Robotics’ mission is to empower lives with cloud-connected mobile autonomous machines. An open-source version of its robotic cloud platform is expected to be released next year. I suppose someone should alert John Connor …
Image credit: CC by Greg Heartsfield