Someday, when quakes, fires, and floods strike, the first responders might be packs of robotic rescue dogs rushing in to help stranded souls. These battery-powered quadrupeds would use computer vision to size up obstacles and employ doglike agility skills to get past them.
Toward that noble goal, AI researchers at Stanford University and Shanghai Qi Zhi Institute say they have developed a new vision-based algorithm that helps robodogs scale high objects, leap across gaps, crawl under thresholds, and squeeze through crevices – and then bolt to the next challenge. The algorithm represents the brains of the robodog.
“The autonomy and range of complex skills that our quadruped robot learned is quite impressive,” said Chelsea Finn, assistant professor of computer science and senior author of a new peer-reviewed paper describing the team’s approach, which will be presented at the upcoming Conference on Robot Learning. “And we have created it using low-cost, off-the-shelf robots – actually, two different off-the-shelf robots.”