
This Robot Taught Itself to Walk in a Simulation—Then Went for a Stroll in Berkeley


Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.


And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.


It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied to a two-legged robot.



This likely isn’t the first robot video you’ve seen, nor the most polished.


For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, among other feats. At times, it can seem the world of I, Robot is just around the corner.


This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.


But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.


In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is where, it’s hoped, machine learning can help.


Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that beat the world’s best human players at Go and a number of video games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
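The stove-and-jelly-bean idea maps directly onto a learning rule. Below is a minimal, hedged sketch of tabular Q-learning on a toy five-position "walk to the goal" task; the environment, rewards, and hyperparameters are illustrative inventions, not anything from the Berkeley team's actual setup.

```python
import random

N_STATES = 5        # positions 0..4; position 4 is the goal
ACTIONS = [-1, 1]   # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated long-run value of taking each action in each state
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            # reward: a "jelly bean" at the goal, a small "burn" for every other step
            r = 1.0 if s2 == N_STATES - 1 else -0.1
            # Q-learning update: nudge the estimate toward reward + best future value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

After training, stepping toward the goal scores higher than stepping away from it: the rewards, repeated over many episodes, have shaped the behavior without anyone programming "walk right."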


In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.


Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.


To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
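The workflow above can be sketched in miniature: train a controller in a fast, idealized simulator, then check it in a second, higher-fidelity one before going anywhere near hardware. Both "simulators" below are toy one-line stand-ins of my own invention, not MuJoCo or Matlab SimMechanics; the point is only the two-stage structure.

```python
def training_sim(push):
    # stage 1: fast, idealized dynamics (perfectly linear response)
    return 2.0 * push

def validation_sim(push):
    # stage 2: higher-fidelity dynamics with a friction-like loss,
    # closer to what the real robot would experience
    return 2.0 * push - 0.3

def train_controller(sim, target=1.0, lr=0.1, steps=200):
    """Tune a control input until the simulated response hits `target`."""
    push = 0.0
    for _ in range(steps):
        error = target - sim(push)
        push += lr * error  # simple feedback correction
    return push

push = train_controller(training_sim)          # learned entirely in stage 1
gap = abs(validation_sim(push) - 1.0)          # shortfall exposed by stage 2
print(round(gap, 2))
```

The value printed is the sim-to-real gap the second stage exists to catch: a controller that is perfect in the training simulator falls measurably short in the higher-fidelity one, and it is far cheaper to discover that in software than on a robot's knees.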


Once the algorithm was good enough, it graduated to Cassie.


And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it already knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.


Other labs have been hard at work applying machine learning to robotics.


Last year, Google used reinforcement learning to teach a four-legged robot to walk. Boston Dynamics, too, will likely explore ways to add machine learning to its robots. New approaches aimed at training multi-skilled robots, or at continuous learning beyond initial training, may also move the dial. It’s early yet, however, and there’s no telling when machine learning will outperform more traditional methods.


And in the meantime, Boston Dynamics bots are still the ones to beat.


Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, said, “This is one of the most successful examples I have seen.”


The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

