ETH Zurich Develops a New Model for Robot Guidance
You may have seen videos of robots effortlessly opening doors and walking through them. What you may not know is that most of these demonstrations involve a significant amount of human intervention. Researchers from ETH Zurich have now developed a new model that requires minimal manual guidance, bringing us a step closer to fully autonomous robots.
A Three-Step Process
The new model developed by ETH Zurich follows a three-step process. First, the user describes the scene and action to the robot. Second, the system plans a route, which may be somewhat convoluted. Finally, the system refines the planned route into a minimal viable path.
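The three steps above can be sketched as a simple pipeline. This is an illustrative assumption, not the ETH Zurich implementation: the function names (`describe_task`, `plan_route`, `refine_path`) and the placeholder route are hypothetical, and the real system plans continuous whole-body motions rather than symbolic steps.

```python
def describe_task(scene, action):
    """Step 1: the user describes the scene and the desired action."""
    return {"scene": scene, "action": action}

def plan_route(task):
    """Step 2: plan an initial route, which may be somewhat convoluted.
    Placeholder output: note the redundant repeated waypoint."""
    return ["approach", "grasp_handle", "grasp_handle", "pull", "release"]

def refine_path(route):
    """Step 3: refine the planned route into a minimal viable path
    by pruning consecutive duplicate waypoints."""
    refined = []
    for step in route:
        if not refined or refined[-1] != step:
            refined.append(step)
    return refined

task = describe_task({"object": "door"}, "open")
path = refine_path(plan_route(task))
print(path)  # ['approach', 'grasp_handle', 'pull', 'release']
```

The design point is the separation of concerns: the user contributes only the description in step 1, while steps 2 and 3 run without further guidance.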
According to the research paper, the system uses high-level descriptions of the robot and object, along with a sparse objective, to determine how the robot should move, what forces it should exert, what limbs it should use, and when and where it should establish or break contact with the object. This holistic approach allows the robot to perform tasks with minimal human intervention.
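A rough way to picture the planner's interface is as a mapping from sparse inputs to a dense plan. The data layout below is a sketch inferred from the paper's description; every field name is an assumption, not the authors' API.

```python
from dataclasses import dataclass, field

@dataclass
class PlannerInput:
    robot_description: str    # high-level description of the robot
    object_description: str   # high-level description of the object
    sparse_objective: str     # e.g. "door opened 90 degrees"

@dataclass
class ContactEvent:
    time: float               # when to establish or break contact
    limb: str                 # which limb to use
    location: tuple           # where on the object contact occurs
    make_contact: bool        # True = establish, False = break

@dataclass
class PlannerOutput:
    motion: list = field(default_factory=list)    # how the robot should move
    forces: list = field(default_factory=list)    # what forces to exert
    contacts: list = field(default_factory=list)  # scheduled ContactEvents
```

The input side is deliberately sparse (three high-level descriptions), while the output side covers all four aspects the paper lists: motion, forces, limb selection, and contact timing.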
Object-Centric and Robot-Centric Tasks
The system developed by ETH Zurich is divided into two main categories: object-centric and robot-centric. Object-centric tasks involve actions like opening a door or a dishwasher, while robot-centric tasks focus on moving the robot around objects.
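The two categories could be represented as a simple enumeration. The task names in the lookup table are hypothetical examples chosen to match the article's descriptions, not identifiers from the research.

```python
from enum import Enum

class TaskCategory(Enum):
    OBJECT_CENTRIC = "object-centric"  # manipulating an object, e.g. a door
    ROBOT_CENTRIC = "robot-centric"    # moving the robot around objects

# Illustrative lookup, not the paper's classifier.
EXAMPLE_TASKS = {
    "open_door": TaskCategory.OBJECT_CENTRIC,
    "open_dishwasher": TaskCategory.OBJECT_CENTRIC,
    "walk_around_table": TaskCategory.ROBOT_CENTRIC,
}
```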
Demonstrations on ANYbotics’ ANYmal
For simplicity, the researchers chose to demonstrate the system using ANYbotics’ ANYmal, a quadruped robot. However, they emphasize that the system can be adapted for different form factors.
Towards Fully Autonomous Loco-Manipulation
The ETH Zurich team sees this research as a stepping stone towards a fully autonomous loco-manipulation pipeline. By reducing the need for human intervention, it moves robots towards performing tasks such as opening doors entirely on their own.
In conclusion, ETH Zurich’s new model for robot guidance brings us closer to a future where robots can perform tasks with minimal human intervention. With its three-step process and holistic planning approach, this system has the potential to pave the way for fully autonomous robots.