Last time I told how, some 40 years ago, I figured out that ultimate automated-system development would bog down unless computers managed to grow bodies—become robots. Having a body would force a computer to develop a sense of self.
That sense of self comes from a robot's recognizing that there is a universe outside of itself. That is possible only when the robot has senses to pull information from that outside world for it to analyze internally. If the robot can't get real-time data about the outside, how's it supposed to sense the existence of that outside? And without understanding the existence of an outside, how's it supposed to understand the existence of a non-outside—its "self"?
Humans use five senses to monitor the outside world: vision, hearing, smell, taste, and touch. Robots are likely to find additional senses useful, but it's hard to imagine them getting along without at least those five. All have, to a greater or lesser extent, been implemented in automated-system technology.
The most highly developed machine-sense technology just might also be the most difficult: vision. Machine-vision systems have been around at least since the 1980s. What robot vision requires is a two-dimensional sensor array feeding images in real time to a computer with enough oomph to process them fast enough to extract useful information, plus software sophisticated enough to pick that useful information out of a vast pile of extraneous noise.
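To make that extraction problem concrete, here is a minimal toy sketch of the software side; the function name, threshold, and synthetic camera frame are my own illustrative assumptions, not anything from a particular vision library. It thresholds a two-dimensional pixel array and reports the centroid of whatever bright feature survives:

```python
import numpy as np

def find_bright_blob(image, threshold=200):
    """Return the (row, col) centroid of pixels above threshold, or None."""
    rows, cols = np.nonzero(image > threshold)
    if rows.size == 0:
        return None  # nothing but extraneous noise in this frame
    return (float(rows.mean()), float(cols.mean()))

# Synthetic 8-bit "camera frame": dark background with one bright spot.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:45, 60:65] = 255

print(find_bright_blob(frame))  # → (42.0, 62.0)
```

A real vision system layers far more sophistication on top, but the shape of the job is the same: a flood of pixels in, a few useful numbers out.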
Hearing should be simpler because the dimensionality of the information is lower. Since modern humans don't use hearing for much other than communication, we haven't put a lot of effort into developing appropriate auditory-sensor technology. So, I count machine hearing as relatively primitive. We have, however, already reached the point where machine hearing can recognize the sounds of a human voice well enough to transcribe those broadband, buzzy sounds into reasonably accurate text. Even this computer I'm typing on can understand spoken commands it senses through its built-in microphone.
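That lower dimensionality shows up clearly in code: a one-dimensional audio buffer can be boiled down with a single Fourier transform. The sketch below is illustrative only—the sample rate and the synthetic 440 Hz tone standing in for microphone input are my assumptions—but it picks the dominant frequency out of a signal:

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; assumed microphone sample rate for this sketch

def dominant_frequency(samples, rate=SAMPLE_RATE):
    """Return the strongest frequency component (Hz) in a mono buffer."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return float(freqs[np.argmax(spectrum)])

# One second of a synthetic 440 Hz tone standing in for microphone input.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)

print(dominant_frequency(tone))  # → 440.0
```

Turning frequencies into phonemes and phonemes into text is where the real sophistication lives, but the raw signal is just this one-dimensional stream.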
Taste and smell are actually intimately related: both pick up chemical-composition information from the environment. The main difference is that taste involves liquid samples, while smell involves atmospheric gases. Even humans mix these senses in practice: tasting your morning coffee involves smelling it at the same time. Probably the best way to implement machine taste and smell is via chromatography, and miniaturized liquid chromatography equipment has been around for years.
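For a rough sense of how a chromatograph's output becomes a "smell," here is a sketch that matches a measured retention time against a small library of known compounds. The compounds, retention times, and tolerance are made-up illustrative numbers, not real chromatography data:

```python
# Hypothetical library mapping compounds to retention times (seconds)
# on some miniature chromatograph; values are illustrative only.
LIBRARY = {"ethanol": 95.0, "acetone": 120.0, "benzene": 210.0}

def identify(retention_time, tolerance=5.0):
    """Name the library compound with the closest retention time,
    provided it falls within the tolerance window."""
    name, expected = min(LIBRARY.items(),
                         key=lambda kv: abs(kv[1] - retention_time))
    return name if abs(expected - retention_time) <= tolerance else "unknown"

print(identify(118.2))  # → acetone
print(identify(300.0))  # → unknown
```

A real electronic nose would match whole chromatograms, not single peaks, but the principle—compare the measurement against a chemical library—is the same.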
Touch should, in principle, be even easier to machinify. What we need is a cheap, simple, miniature device that measures the mechanical pressure applied over a relatively small area of the robot's surface. Just a few days ago, folks at the Harvard School of Engineering and Applied Sciences announced a technology in which they embed a miniaturized barometer in vacuum-sealed rubber, turning it into a tiny, ruggedized, super-sensitive pressure transducer. That's all you need to give a robot a sense of touch. With only one degree of freedom per sensor, the software needed to extract useful information is minimal.
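Just how minimal that software can be is easy to show. The sketch below converts a raw barometer-style reading into a touched/not-touched signal; the baseline reading and calibration constant are hypothetical numbers I've invented for illustration, not the Harvard group's published figures:

```python
# Hypothetical calibration for a barometer-in-rubber touch sensor.
BASELINE_PA = 101325.0   # assumed raw reading with nothing touching it
PA_PER_NEWTON = 500.0    # assumed pascals of reading per newton of force

def contact_force(raw_pascals):
    """Convert a raw barometer reading into contact force in newtons."""
    return max(0.0, (raw_pascals - BASELINE_PA) / PA_PER_NEWTON)

def is_touched(raw_pascals, threshold_newtons=0.1):
    """One degree of freedom: the surface is either touched or it isn't."""
    return contact_force(raw_pascals) > threshold_newtons

print(is_touched(101325.0))  # → False
print(is_touched(101600.0))  # → True
```

Subtract a baseline, scale, compare against a threshold—that's essentially the entire signal-processing chain for a one-degree-of-freedom sense.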
Thus, for all practical purposes, we already have the technology to equip any robot with all five of the senses humans have used to conquer the world.
C.G. Masi has been blogging about technology and society since 2006. In a career spanning more than a quarter century, he has written more than 400 articles for scholarly and technical journals, and six novels dealing with automation's place in technically advanced society. For more information, visit www.cgmasi.com.