Working with our little mechanical buddies, part 2: Shall we dance?
March 11, 2015
In my first novel, “Red,” the heroine demonstrates how to program her company’s fancy-schmancy new robot to perform a new movement. Red is supposed to teach the robot - it’s a mobile robot resembling a cross between a giant centipede and a lobster - how to waltz. I won’t go into the steps she uses to accomplish the task. If you want to know, buy the book and read it yourself. Suffice it to say, she’s successful, and actually teaches the thing to do a reasonable simulation of a waltz.
The question is: “How credible is the story?”
Within the constraints of the novel, it’s actually pretty credible. What makes it ring true is that when another female character asks to dance with the robot, Red warns her: “Just remember to let it lead. It won’t follow you a bit, so you’ll get tangled up.”
It turns out that there’s a big difference, for example, between handing something to a robot and placing the same object in its hand. Placing the object in its hand only requires the robot to hold its “hand” in a receptive posture and wait for some cue that the object is in place to be grasped. Handing something to it, however, requires the robot to predict where the human will offer the object in space, and to reach out for it just as the human is ready to relinquish it. Too early, and the robot finds itself trying to yank the object out of the human’s hand. Too late, and the object falls.
On May 20, researchers at Disney Research (that’s right, the Mouse guys!) won a Best Cognitive Robotics Paper award at the IEEE International Conference on Robotics and Automation in Karlsruhe, Germany, for a paper describing their research into how to make a robot accept something handed to it. Their strategy involves building a motion-capture database of clips showing two humans passing an object to each other.
They programmed a robot to observe how a human is moving to present an object, then compare those motions to how humans move in the video clips so it can predict where and when the human will be ready to relinquish the object. To allow their robot to pick the right video clip for comparison fast enough to interact with a human in real time, they created a hierarchical data structure using principal component analysis.
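To make that idea concrete, here is a minimal sketch (not the Disney Research implementation, and not based on their code) of how a motion database can be compressed with principal component analysis and then searched to predict where a hand-off will land. The class name, the flattened clip format, and the nearest-neighbor lookup standing in for the paper’s hierarchical search are all my own illustrative assumptions.

```python
import numpy as np

class HandoverPredictor:
    """Illustrative sketch: PCA-compressed motion database + nearest-neighbor lookup."""

    def __init__(self, clips, n_components=8):
        # clips: array of shape (num_clips, clip_length), each row a flattened
        # motion-capture recording of a human handing an object to a partner.
        self.clips = clips
        self.mean = clips.mean(axis=0)
        # PCA via SVD of the mean-centered data.
        _, _, vt = np.linalg.svd(clips - self.mean, full_matrices=False)
        self.basis = vt[:n_components]                   # principal components
        self.coords = (clips - self.mean) @ self.basis.T # low-dimensional coordinates

    def predict_goal(self, partial_motion):
        # partial_motion: the observed portion of the human's motion, padded and
        # flattened to the same length as a database clip (a simplifying assumption).
        query = (partial_motion - self.mean) @ self.basis.T
        # Plain nearest neighbor in PCA space stands in for the hierarchical search
        # the researchers used to make the lookup fast enough for real time.
        best = np.argmin(np.linalg.norm(self.coords - query, axis=1))
        # Treat the final hand position of the best-matching clip (last three values,
        # by convention in this sketch) as the predicted hand-off location.
        return self.clips[best][-3:]
```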
In their tests, the robot, by consulting the database, was able to recognize when a person initiated a handing motion and then refine its response as the person followed through. It began moving its hand before the human reached the desired passing location, and stopped at roughly the right position to capture the handed object at the right time.
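The refine-as-you-go behavior might look something like the sketch below, which reuses the hypothetical HandoverPredictor from the previous snippet: the robot re-queries the database as each new frame of the human’s motion arrives, and commits to reaching once consecutive predictions agree. The 0.05 m agreement threshold is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def track_handover(predictor, frame_stream, agreement_threshold=0.05):
    """Re-predict the hand-off location as the motion unfolds; commit when stable."""
    previous_goal = None
    for partial_motion in frame_stream:   # growing observation of the human's motion
        goal = predictor.predict_goal(partial_motion)
        if previous_goal is not None and np.linalg.norm(goal - previous_goal) < agreement_threshold:
            # Two consecutive predictions agree closely: start reaching toward the
            # predicted hand-off position before the human arrives there.
            return goal
        previous_goal = goal
    return previous_goal  # fall back to the last estimate if no early agreement
```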
Way to go, guys!
C.G. Masi has been blogging about technology and society since 2006. In a career spanning more than a quarter century, he has written more than 400 articles for scholarly and technical journals, and six novels dealing with automation’s place in technically advanced society. For more information, visit www.cgmasi.com.