The history of robotics is, of course, ultimately the history of Man’s (pardon the sexist reference, but that’s the Ugly Truth) quest for the ultimate golem. A golem, as I’ve explained before, is the ancient idea of an artificial person. Nobody really knows when the idea originated, but it seems to predate writing. It probably started with the first person who created a human figure – what we’d call a “doll” – way back in the stone age. Since then, identifiable golem stories have appeared in just about every period of every culture.
One characteristic seems to be common among all golems: they’re dumb as a box of rocks. Even those that, like Asimov’s offerings, sported superior mental abilities had a fatal flaw: something was always wrong with their brains. Accounts differ, but basically they lacked some mental ability – generally what the storyteller described as a human “soul.” Humans had it, golems didn’t, and that’s how you could tell them apart.
Most often, the fatal flaw involved imagination, and the golem’s lack thereof. I might point out the same mental impairment appearing in developers of TV series, but that would be just mean.
Imagination, despite what poets and artists would have us believe, is most importantly involved in the ability to solve problems. In most golem stories, the teller sets up a problem that is child’s play for humans, but simply beyond the golem’s ability to solve because it lacks imagination.
Recently, my friends at Control Engineering published a very interesting article entitled “Artificial intelligence tools can aid sensor systems,” in which author David Sanders of the University of Portsmouth, UK, identified seven artificial intelligence (AI) tools that could help robots get over the imaginative-problem-solving difficulty.
The bad news is that all are, to some extent, useless. The good news is that all have different uselessness characteristics.
Let me clarify: all have situations where they are really, really good, but all have situations in which they give no answer or (what’s worse) a wrong one. Happily, it’s not too hard to tell whether a given tool will give a good answer in a given situation. Even better, they all have different areas of goodness. As a veteran journalist/scientist/whatever, I’m thrilled by the idea that more than one tool might work in any given situation. The obvious solution is to create a system that first figures out which tools will work in the situation at hand, then applies them and compares their answers. There are various strategies for picking a preferred solution in the likely event that different tools disagree.
Then, there’s always the “Hail Mary” option of just doing something (via a random number generator) under the theory that screwing up is better than doing nothing.
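The pick-the-tools, vote-on-answers, fall-back-to-chance strategy above can be sketched in a few lines of Python. Everything here is illustrative: the tool names, their applicability tests, and the two candidate actions are all my own inventions, not anything from Sanders’s article.

```python
import random
from collections import Counter

# Hypothetical AI "tools": each knows when it applies to a situation
# and proposes an answer. The names and logic are purely illustrative.
TOOLS = {
    "rule_based":  {"applies": lambda s: s["structured"], "solve": lambda s: "turn_left"},
    "fuzzy_logic": {"applies": lambda s: s["noisy"],      "solve": lambda s: "turn_left"},
    "neural_net":  {"applies": lambda s: s["trained_on"], "solve": lambda s: "turn_right"},
}

def decide(situation, rng=None):
    """Apply every tool that claims competence, then vote; fall back to chance."""
    rng = rng or random.Random()
    answers = [t["solve"](situation) for t in TOOLS.values() if t["applies"](situation)]
    if not answers:
        # The "Hail Mary" option: doing something beats doing nothing.
        return rng.choice(["turn_left", "turn_right"])
    # Majority vote is one simple strategy for resolving disagreement.
    return Counter(answers).most_common(1)[0][0]

situation = {"structured": True, "noisy": True, "trained_on": True}
print(decide(situation))  # rule_based and fuzzy_logic outvote neural_net: turn_left
```

Majority voting is only the simplest arbitration strategy; weighting each tool’s vote by how well its area of goodness matches the current situation would be a natural refinement.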
How many times have we, as human problem solvers, done that?