One of the key goals for robotics developers is to create machines with sensory perception similar to that of humans, particularly the sense of touch. Among other benefits, this can make robots safer when working alongside humans and help them sort objects in a factory setting without damaging them. A team of engineers has advanced this goal by developing a robotic hand that can rotate objects solely through touch, without the need for robotic vision. The hand, developed by researchers from Hong Kong University of Science and Technology (HKUST) and the University of California San Diego (UCSD), used touch information alone to smoothly rotate a wide array of objects, from small toys and cans to fruits and vegetables, without injuring or crushing them, the team reported.
“In-hand manipulation is a very common skill that we humans have, but it is very complex for robots to master,” said Xiaolong Wang, a professor of electrical and computer engineering at UCSD who led the research, in a post on UC San Diego Today. “If we can give robots this skill, that will open the door to the kinds of tasks they can perform.”
What's more, the researchers developed the hand at low cost, relying on a number of inexpensive, low-resolution sensors spread over the hand. These sensors produce simple binary signals, registering either "touch" or "no touch," and that alone is enough to let the hand rotate objects.
Keeping the Robotic Hand Simple
Indeed, the team's entire approach was simpler than most others, which rely on a few high-cost, high-resolution touch sensors attached to a small area of the robotic hand, primarily at the fingertips. The sensors used by Wang's team instead cover a large surface area: 16 of the "touch or no touch" sensors, costing about $12 each, are attached to the palm and four fingers of the robotic hand.
This approach overcame some of the limitations of strategies that use fewer but bigger sensors, Wang said. One is that a small number of sensors limits the surface area over which the hand can sense an object, he said.
Another is that high-resolution touch sensors that provide information about texture are both difficult to simulate and expensive, which makes it hard to test the hand's performance in a lab. Further, most such systems still rely on robotic vision to sense the object rather than on the sensor system alone, Wang said.
The simple solution that the UCSD/HKUST team developed demonstrated that "we don’t need details about an object’s texture to do this task," he said. "We just need simple binary signals of whether the sensors have touched the object or not, and these are much easier to simulate and transfer to the real world.”
Further, covering a large area of the hand with binary touch sensors provides enough information about an object's 3D structure and orientation for the hand to rotate it successfully without vision, he said.
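To make the idea of binary touch signals concrete, here is a minimal, hypothetical sketch (not the team's actual code) of how raw readings from 16 low-cost sensors could be binarized into the kind of touch/no-touch contact vector the article describes. The function name and threshold are illustrative assumptions.

```python
def contact_vector(raw_readings, threshold=0.5):
    """Binarize raw sensor values into touch (1) / no-touch (0) signals.

    Hypothetical helper: the threshold of 0.5 is an assumption for
    illustration, not a value from the researchers' system.
    """
    return [1 if r >= threshold else 0 for r in raw_readings]

# Example: pressure-like readings from 16 sensors on the palm and fingers
raw = [0.9, 0.0, 0.7, 0.1, 0.0, 0.8, 0.6, 0.0,
       0.0, 0.2, 0.9, 0.0, 0.7, 0.0, 0.0, 0.4]
contacts = contact_vector(raw)
print(contacts)       # -> [1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
print(sum(contacts))  # -> 6 sensors currently touching the object
```

The point of the sketch is that the pattern of which sensors are in contact, spread over the whole hand, carries shape and orientation information even though each individual signal is only a single bit.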
Training and Demonstration of the Robotic Hand
The researchers trained their system by running simulations of a virtual robotic hand rotating a diverse set of objects, including ones with irregular shapes. They then tested their system on the robotic hand they designed using objects that the system hadn't yet encountered.
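The train-in-simulation, test-on-unseen-objects protocol described above can be sketched as follows. This is a toy stand-in, not the researchers' reinforcement-learning code: the object lists, the random contact model, and the simple grip policy are all illustrative assumptions that only mirror the structure of the protocol.

```python
import random

# Objects the policy trains on in simulation vs. objects held out for testing
# (names chosen to echo the article; purely illustrative).
TRAIN_OBJECTS = ["cube", "cylinder", "cone", "irregular_blob"]
TEST_OBJECTS = ["tomato", "pepper", "peanut_butter_can", "rubber_duck"]

def policy(contacts):
    """Toy policy: squeeze harder when fewer sensors report contact."""
    return 1.0 - sum(contacts) / 16

def rollout(obj, policy, rng, steps=50):
    """Run one simulated rotation episode; return True if the grasp held."""
    for _ in range(steps):
        # Simulated binary touch signals from 16 sensors (random stand-in)
        contacts = [rng.random() < 0.6 for _ in range(16)]
        grip = policy(contacts)
        # Drop the object if nothing is touching and the grip is weak
        if sum(contacts) == 0 and grip < 0.5:
            return False
    return True

rng = random.Random(0)
train_success = sum(rollout(o, policy, rng) for o in TRAIN_OBJECTS)
test_success = sum(rollout(o, policy, rng) for o in TEST_OBJECTS)
print(f"train: {train_success}/4, test: {test_success}/4")
```

The key design point mirrored here is the split itself: the evaluation objects never appear during training, so success on them indicates the policy generalizes from contact patterns rather than memorizing specific shapes.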
The robotic hand demonstrated an ability to rotate a variety of objects, including a tomato, a pepper, a can of peanut butter, and a toy rubber duck, without stalling or losing its hold. The duck was the most challenging object because of its shape, and irregular shapes generally took longer to rotate, the researchers said. The hand could also rotate objects around different axes.
The researchers presented a paper on the robotic hand at the 2023 Robotics: Science and Systems Conference. They are now working on extending the capabilities of the robotic hand to more complex manipulation tasks, such as catching, throwing, or even juggling objects, they said.