Eye-hand coordination is probably the most important and least appreciated faculty humans possess. For robots, it's a capability that is absolutely crucial.
For example, a task that I, in my capacity as artist, do several times a day is cleaning an airbrush. It's the sort of task that robots should be able to do easily, but most of them can't.
A double-acting airbrush combines a plain-old venturi atomizer with a needle valve. Just like the automotive carburetor Wilhelm Maybach and Gottlieb Daimler developed in 1885, the airbrush uses a venturi to accelerate air—in this case driven by an external air compressor—past a tiny orifice that communicates with a source of a low-viscosity liquid. In this case, the liquid is paint thinned to the consistency of ink. A needle several inches long partially blocks the orifice. The artist controls the rate at which paint exits the orifice by moving a lever that draws the needle back out of the orifice, allowing progressively more paint into the air stream.
The most critical airbrush-maintenance task is removing paint that builds up in the orifice. I do this by dismantling the instrument, and using the needle to rub away any paint coating the inside of the tiny orifice. This has to be done in a bath of solvent appropriate to the paint.
What has this to do with eye-hand coordination? The airbrush tip is a small fitting machined to include the venturi surrounding a channel leading to the orifice. It's about a quarter inch in diameter and about a half inch long, with holes drilled in it that are thousandths of an inch in diameter. To clean it, I have to insert the needle through a hole a couple of millimeters in diameter, which narrows to the orifice, which is less than a millimeter across. Remember that the needle is several inches long. So, I have to hold the fitting in one hand, then carefully line up the needle's point so it will enter the hole, then carefully maneuver it through the decreasing-diameter tunnel until it emerges through the orifice.
To make things worse, there's a step in that tunnel, so I have to feel around with the needle's point to find the orifice. Of course, it's a stainless steel needle with a sharp point that would be ruined if I pressed it too hard into the metal wall inside the fitting.
So, I need to use eye-hand coordination to line up the tip initially, then exquisite touch control to blindly maneuver the tip into the orifice.
There are myriad similar tasks that we would like our assembly robots to perform in factories every day. Performing them requires stereoscopic vision as well as two types of touch sensors, all of which are under development in robotics laboratories today.
I mentioned two types of touch sensors, and this might require a little explanation. Roboticists are all familiar with the kinesthetic sense that allows a robot to sense the resistance its mechanical actuators encounter while making a motion.
This is a relatively gross sense that even automotive power window systems have. To prevent the window from cutting off your daughter's arm when the window closes on it, a current sensor in the motor circuit senses the increased current caused by the unexpected resistance, and shuts down the motor.
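The pinch-protection logic described above can be sketched in a few lines. This is a minimal illustration, not the actual firmware of any window controller; the current threshold and readings are invented for the example.

```python
# Hypothetical stall-detection threshold, in amps. A real system would
# calibrate this against the motor's normal closing current.
PINCH_CURRENT_A = 4.0

def window_controller(current_samples):
    """Process a stream of motor-current readings and return the
    action taken for each one, stopping at the first overcurrent."""
    actions = []
    for amps in current_samples:
        if amps > PINCH_CURRENT_A:
            # Unexpected resistance (e.g., an arm in the window) draws
            # extra current, so shut the motor down immediately.
            actions.append("stop_and_reverse")
            break
        actions.append("close")  # normal closing current
    return actions

# Three normal readings, then a spike when the window meets an obstruction:
print(window_controller([1.2, 1.3, 1.4, 6.8]))
```

The kinesthetic sense a robot gets from its joint actuators works the same way: motion proceeds until measured effort departs from what the motion should cost.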
The other, and much more subtle, touch sense is the ability to sense pressure as a function of position on a fingertip. It's a sense robots haven't had before, but engineers at the University of California, Berkeley are changing that: they've used flex-circuit technology to create an array of pressure sensors printed on a flexible polymer substrate, which promises to give robots a genuine touch sense.
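To see why pressure-versus-position matters, here is a minimal sketch of what software might do with such a sensor array. The 4x4 grid, its values, and the centroid approach are my own illustration, not the Berkeley team's design: from a map of pressures, the robot can compute where on the fingertip contact is happening.

```python
def contact_centroid(grid):
    """Return the pressure-weighted centroid (row, col) of a 2D
    pressure map, or None if nothing is touching the pad."""
    total = sum(sum(row) for row in grid)
    if total == 0:
        return None  # no contact anywhere on the fingertip
    row_pos = sum(i * p for i, row in enumerate(grid) for p in row) / total
    col_pos = sum(j * p for row in grid for j, p in enumerate(row)) / total
    return (row_pos, col_pos)

# Invented readings: a light contact biased toward one side of the pad.
pad = [
    [0, 0, 0, 0],
    [0, 2, 4, 0],
    [0, 2, 4, 0],
    [0, 0, 0, 0],
]
print(contact_centroid(pad))  # contact point, offset toward the heavier column
```

That positional information is exactly what the blind needle-into-orifice maneuver demands: not just "I'm pressing on something," but where and how hard.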
C.G. Masi has been blogging about technology and society since 2006. In a career spanning more than a quarter century, he has written more than 400 articles for scholarly and technical journals, and six novels dealing with automation's place in technically advanced society. For more information, visit www.cgmasi.com.