A team of researchers at the University of California has developed a new system of algorithms that enables four-legged robots to walk and run on challenging terrain while avoiding both static and moving obstacles.
To test the system, the researchers steered a robot to swiftly and autonomously maneuver across sandy surfaces, grass, gravel, and bumpy dirt hills covered with fallen leaves and branches, without bumping into trees, shrubs, poles, benches, boulders, or people. The robot was also operated in a busy office space without colliding with desks, chairs, or boxes.
Importantly, the development of the algorithm brings researchers a step closer to building robots that can perform search and rescue operations, or collect information from places that are too dangerous or difficult for humans.
Using the system, a legged robot gains versatility because it combines the robot’s sense of vision with another sensing capability called proprioception: the robot’s sense of movement, direction, speed, location, and touch.
At present, most techniques used to train legged robots to walk and navigate rely either on proprioception or on vision, but not both, according to the senior author of the study.
The system that the researchers developed uses a special set of algorithms to fuse data from sensors on the robot’s legs with real-time images captured by a depth camera mounted on the robot’s head. This was not simple to achieve.
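As a rough illustration of what this kind of fusion can look like, the sketch below combines a depth-image encoder with a proprioceptive-state encoder and feeds the concatenated features to a policy network. The layer sizes, network structure, and names here are illustrative assumptions, not the architecture from the published system.

```python
import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    """Hypothetical sketch: fuse proprioception with depth-camera images.

    All dimensions and layer choices are assumptions for illustration,
    not details taken from the researchers' system.
    """

    def __init__(self, proprio_dim=30, action_dim=12):
        super().__init__()
        # Small CNN encoder for frames from the head-mounted depth camera.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
        )
        # MLP encoder for leg-sensor readings: joint angles, velocities,
        # body orientation, foot contacts, etc.
        self.proprio_encoder = nn.Sequential(
            nn.Linear(proprio_dim, 64), nn.ReLU(),
        )
        # Policy head acts on the concatenated features of both modalities.
        self.policy = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, depth_image, proprio_state):
        visual = self.depth_encoder(depth_image)        # (B, 64)
        internal = self.proprio_encoder(proprio_state)  # (B, 64)
        fused = torch.cat([visual, internal], dim=-1)   # combine modalities
        return self.policy(fused)
```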
“During real-world operation, there is a small delay in receiving images from the camera,” explained the lead researcher. This means that data from the two different sensing modalities does not arrive at the same time.
The team’s algorithmic solution was to simulate this mismatch during training by randomizing the delays on the two sets of inputs – a technique the researchers call multi-modal delay randomization.
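A minimal sketch of what delay randomization might look like in a simulator loop is shown below. The function name, buffer size, and delay range are illustrative assumptions; the idea is simply that, during training, the policy receives a depth frame that is a random number of timesteps old while proprioceptive readings stay current, so it cannot rely on any fixed camera latency.

```python
import random
from collections import deque

def delayed_depth_stream(sim_frames, max_delay=3):
    """Hypothetical sketch of multi-modal delay randomization.

    Yields depth frames that lag the simulator by a random number of
    timesteps (0 to max_delay), mimicking real camera latency.
    `max_delay` is an assumed parameter, not a published value.
    """
    buffer = deque(maxlen=max_delay + 1)
    for frame in sim_frames:
        buffer.append(frame)
        # Sample a fresh delay every step so the training distribution
        # covers the range of latencies seen at deployment time.
        delay = random.randint(0, len(buffer) - 1)
        yield buffer[-1 - delay]

# Usage: pair each (possibly stale) frame with the current leg-sensor state.
# for depth, proprio in zip(delayed_depth_stream(frames), proprio_states):
#     action = policy(depth, proprio)
```

Training on randomly delayed inputs exposes the policy to the same timing mismatch it will face on the real robot, which is the core intuition behind the technique.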