Lisa McTigue Pierce, Executive Editor

March 11, 2015

Seeing robots in a new light

 

Robotics at WestPack


 

Robots aren't new to packaging production plants. They have been pulling their weight for decades, for bulky material handling jobs such as palletizing, as well as for finer pick-and-place tasks such as tray packing.


But more and more facilities in North America now rely on non-human "workers" to assist in their packaging operations, either in supporting or starring roles.


Automation companies and packaging machinery manufacturers continue to advance the capabilities of their systems by incorporating new technologies. Attendees to the upcoming WestPack—and its co-located manufacturing-related events such as ATX and Pacific Design & Manufacturing—will have plenty of opportunity to see these available solutions for themselves.


But let's also take a peek at the future—or, at least, at very real possibilities for the future—from automation and robotics expert C.G. Masi. For more than two decades, Masi has written thought-provoking articles about automation's place in technically advanced society for scholarly and technical journals, including www.PackagingDigest.com.
— Lisa McTigue Pierce, Executive Editor

 

Why cheap labor isn't
Some time ago I introduced the Three Ds of Robotics: Dull, Dirty and Dangerous. If any manual task exhibits any one of these three characteristics, it's a candidate for automation. Soon thereafter I added a contra-indicator: Fun. If it's fun for humans to do, you shouldn't automate it.


Another characteristic has hit the media recently that enters into the debate over whether you should automate a given task. I'm not sure there's a one-word indicator for it, so I'll just explain.


Offshoring is the business strategy of setting up production facilities to take advantage of cheap labor in less-developed countries. As John Fluke, Jr. explained to me a couple of decades ago, offshoring directly competes with the strategy of automating production in a domestic facility.


Both strategies attempt to lower per-unit production cost by investing up-front capital costs to lower long-term variable costs. In the case of automation, the up-front cost is for developing and installing the automated equipment in a domestic facility. In the case of offshoring, the up-front cost is setting up an overseas facility.
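To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Every figure in it is invented purely for illustration; the point is only that the up-front investment gets amortized over production volume, while variable costs ride along with every unit shipped:

    # Hypothetical per-unit cost comparison; all figures are invented for illustration.
    def per_unit_cost(upfront_capital, variable_cost_per_unit, volume):
        """Amortize the up-front investment over the volume, then add the per-unit variable cost."""
        return upfront_capital / volume + variable_cost_per_unit

    volume = 10_000_000  # units produced over the payback period

    # Automation: big capital outlay, low per-unit labor at home
    automation = per_unit_cost(5_000_000, 1.00, volume)

    # Offshoring: smaller outlay, cheap labor, plus freight on every unit
    offshoring = per_unit_cost(2_000_000, 0.60 + 0.75, volume)

    print(f"Automation: ${automation:.2f}/unit   Offshoring: ${offshoring:.2f}/unit")

At high volumes the amortized capital all but disappears, so the contest comes down to the recurring variable costs, and that is exactly the piece that inflates when "cheap" offshore labor stops being cheap.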


At the time John, Jr. explained all this to me (around 1990), he claimed that the supply of the highly skilled labor needed to set up and maintain automated equipment was far greater in the U.S., which argued for onshoring production previously done elsewhere. He cited this as the reason his company, Fluke Instruments, had begun onshoring its production. That advantage has since eroded in some offshore locations, such as Malaysia and Korea, as those economies have built up their local talent pools.


Companies opting for offshoring have another strategy available: overseas contract manufacturing, which avoids the up-front part of the offshoring financial picture, but leaves the business at the mercy of the contract manufacturer's business decisions.


Recently, it has become clear that once variable costs are low, businesses look to capital expenses for further cost cutting. As the garment industry has discovered via spectacular industrial accidents in Bangladesh and elsewhere, skimping on capital expenses leads to sub-standard facilities—and trouble in the long run.


Another problem with the offshoring strategy, pointed out in a report in Control Engineering, is that inordinately low labor costs attract demand for labor, which causes labor costs to rise, as seems to be happening in China now. That wrecks your smokin'-hot low-labor-cost offshoring strategy. All the resources you spent finding a cheap-labor location to offshore to are so much money down the drain as what used to be cheap labor becomes expensive, and transportation charges pile up.


The point I'm trying to make is that investing in automation is a more sustainable strategy than trying to chase the lowest labor rate.

 

Look and see
The naive view of how humans "see" what they look at assumes that what goes into our eyes, which is basically a map of hue and value areas captured from the outside world on our retinas, gets dumped into our brains pretty much as is. Philosophers once imagined a situation like the character in the first Men in Black movie, who had a "little guy in the big guy's head." That, of course, sets up an endless cycle of guys within guys that quickly fails via a reductio ad absurdum argument transparent to even the most stone-headed of stone-age thinkers.


That ain't what goes on. The hue and value map that reaches our retinas undergoes massive image processing before what's left of the image ever gets out of the eyeball. What goes down the optic nerve is information extracted from the retinal image that our brains can use to identify objects, and fix their positions in three-space.


Look is an intransitive verb meaning to open our eyes and aim them in a certain direction. See is a transitive verb meaning to identify the thing we're looking at.


Big difference!


Current research in machine vision aims at bridging the gap between looking and seeing for automated systems. Current systems misidentify objects at a false-positive rate of 30 percent to 40 percent. That means the systems don't just get stumped; they actually come up with a wrong answer.


That ain't good.


Human perceptions err, too, but much less often and always with a conservative bias. We've all had the experience of finding human faces emerging from chaotic patterns, such as the grain in a piece of wood. That's caused by our built-in bias to hypothesize people in any as-yet-unidentified visual scene. It's the first step in how babies find their mothers!


Current systems go whole HOG (Haw! Haw!) with something called a Histogram of Oriented Gradients. These systems construct a database from measurements of strength and orientation of gradients in small patches pulled out of images, and compare them with similar HOGs of previously analyzed images. So far, so good, except that the systems get it badly wrong one third of the time.
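For the curious, here is a minimal sketch of extracting HOG features with the scikit-image library. The parameters shown are common defaults, not anything tied to the particular systems or the MIT study described here:

    # Minimal HOG feature-extraction sketch using scikit-image (typical parameters).
    from skimage import color, data
    from skimage.feature import hog

    image = color.rgb2gray(data.astronaut())  # any grayscale image will do

    # Gradient strength and orientation are histogrammed over small cells,
    # then normalized over blocks of cells; the resulting feature vector is
    # what an object-recognition system compares against its database.
    features, hog_image = hog(
        image,
        orientations=9,          # number of gradient-direction bins
        pixels_per_cell=(8, 8),  # size of each small patch
        cells_per_block=(2, 2),  # normalization neighborhood
        visualize=True,          # also return a viewable rendering of the histograms
    )

    print(features.shape)  # one long vector of oriented-gradient statistics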


To figure out why, researchers at the Massachusetts Institute of Technology (MIT) turned the contents of HOG databases back into images and presented those images to human analysts for interpretation. The analysts not only had a similar success rate, but the mistakes they made were similar to the mistakes HOG-based object-recognition systems make.


A-hah!


The researchers hope that their technique will help machine-vision developers figure out what their systems are doing wrong, and find a way to do it right.

 

Soft and cuddly robots
I started getting interested in soft mechanical actuators—artificial muscles—about 45 years ago. At that time, I hoped the technology could be used to improve then-clunky artificial hearts, and looked at the possibility of using the contraction of miniature magnetic solenoids to mimic muscle fibers.


That didn't work out too well, and since then I've tried to keep abreast of what others thought up to replace rigid, heavy, clumsy electric motors for mechanical actuation. So far, nothing seems to really do the job. The most promising has been arrays of soft polymer cells bloated by hydrostatic pressure. But, that's still too slow and cumbersome. I looked at that solution around the turn of the millennium as a means of controlling aircraft wing shapes.


The latest impetus for developing soft mechanical actuators is to power the mechanical systems in robots. There are a couple of projects seeking to build soft-bodied robots that have come to my attention recently.


One is the Octopus project funded by the European Commission with participation by academic researchers in Europe and Israel. Its aim is to build a soft robot that mimics the capabilities and actions of a real octopus. Presumably, a robot octopus will be more amenable to human direction than Nature's version, and can be induced to perform meaningful tasks.


The little critter is powered by shape memory alloy (SMA) drivers and steel cables, and isn't quite as soft as it's cracked up to be. Also, since SMA motion is predicated on temperature changes, I have trouble seeing it as fast acting, reliable and adaptable. Octopus is, however, a step in the right direction.


The second project is funded by the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense, and really deserves the epithet "soft." This ugly sucker, built by George Whitesides' group at Harvard University, is made entirely of silicone. It consists of a number of closed cells that can be expanded by compressed air to become rigid and power movement. It's sorta like the technology I was fiddling with 10 years ago, but they made it work.


The problem with compressed air technology (more formally called "pneumatics") is that it's complicated, noisy, bulky and relatively slow compared to the electric motors usually driving non-soft robots.


Frankly, I like this pneumatically powered beastie better than the Octopus. I can see its being the basis of a whole host of mechanically powerful robots adapted for different tasks. But, it ain't there, yet.

 

The eye is quicker than the hand
Eye-hand coordination is probably the most important and least appreciated facility "owned" by humans. For robots it's a capability that is absolutely crucial.


For example, a task that I, in my capacity as artist, do several times a day is cleaning an airbrush. It's the sort of task that robots should be able to do easily, but most of them can't.


A double-acting airbrush combines a plain-old venturi atomizer with a needle valve. Just like the automotive carburetor Wilhelm Maybach and Gottlieb Daimler developed in 1885, the airbrush uses a venturi to accelerate air—in this case driven by an external air compressor—past a tiny orifice that communicates with a source of low-viscosity liquid, in this case paint thinned to the consistency of ink. A several-inches-long needle partially blocks the orifice. The artist controls the rate at which paint exits the orifice by moving a lever that pulls the needle back out of the orifice, allowing progressively more paint into the air stream.


The most critical airbrush-maintenance task is removing paint that builds up in the orifice. I do this by dismantling the instrument, and using the needle to rub away any paint coating the inside of the tiny orifice. This has to be done in a bath of solvent appropriate to the paint.


What has this to do with eye-hand coordination? The airbrush tip is a small fitting machined to include the venturi surrounding a channel leading to the orifice. It's about a quarter-inch in diameter, and about a half inch long, with holes drilled in it that are thousandths of an inch in diameter. To clean it, I have to insert the needle through a hole a couple of millimeters in diameter, which narrows to the orifice, which is less than a millimeter across. Remember that the needle is several inches long. So, I have to hold the fitting in one hand, then carefully line up the needle's point so it will enter the hole, then carefully maneuver it through the decreasing diameter tunnel until it emerges through the orifice. 


To make things worse, there's a step in that tunnel, so I have to feel around with the needle's point to find the orifice. Of course, it's a stainless steel needle with a sharp point that would be ruined if I pressed it too hard into the metal wall inside the fitting.


So, I need to use eye-hand coordination to line up the tip initially, then exquisite touch control to blindly maneuver the tip into the orifice.


There are myriad similar tasks that we would like our assembly robots to perform in factories every day. Doing them requires stereoscopic vision, as well as two types of touch sensors, all of which are under development in robotics laboratories today.


I mentioned two types of touch sensors, and this might require a little explanation. Roboticists are all familiar with the kinesthetic sense that allows a robot to sense the resistance its mechanical actuators encounter while making a motion.
This is a relatively gross sense that even automotive power window systems have. To prevent the window from cutting off your daughter's arm when the window closes on it, a current sensor in the motor circuit senses the increased current caused by the unexpected resistance, and shuts down the motor.
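In control terms, that gross kinesthetic sense amounts to little more than watching a current threshold. Here is a minimal, purely illustrative sketch of the logic in Python; every name and number in it is hypothetical:

    # Hypothetical sketch of current-threshold obstruction sensing.
    STALL_THRESHOLD_AMPS = 8.0  # motor current climbs when the mechanism meets resistance

    def close_window(start_motor, stop_motor, window_closed, read_motor_current):
        """Run the motor until the window is closed, but stop if the current spikes."""
        start_motor()
        while not window_closed():
            if read_motor_current() > STALL_THRESHOLD_AMPS:
                stop_motor()  # unexpected resistance: back off rather than push through
                return False
        stop_motor()
        return True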


The other, and much more subtle, touch sense is a means for sensing pressure as a function of position on a fingertip. It's a sense robots haven't had before, but which engineers at the University of California-Berkeley are making possible. They've used flex-circuit technology to create an array of pressure sensors printed on a flexible polymer substrate that promises to make robot touch sense possible.
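To give a feel for what a robot controller might do with such an array, here is a small, hypothetical sketch that reads a grid of pressure values and locates the center of contact. Nothing in it is specific to the Berkeley sensors:

    # Hypothetical fingertip tactile array: pressure as a function of position.
    import numpy as np

    def contact_centroid(pressure_map, noise_floor=0.05):
        """Return the pressure-weighted (row, col) center of contact, or None if nothing is touching."""
        p = np.where(pressure_map > noise_floor, pressure_map, 0.0)
        total = p.sum()
        if total == 0:
            return None
        rows, cols = np.indices(p.shape)
        return (rows * p).sum() / total, (cols * p).sum() / total

    # Example: a 16 x 16 "fingertip" with a light touch near one corner.
    reading = np.zeros((16, 16))
    reading[3, 4] = 0.8
    reading[3, 5] = 0.4
    print(contact_centroid(reading))  # roughly (3.0, 4.3)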


Vulcan mind meld
Characters in my Red McKenna series of novels have a special rapport with their robotic servants through a natural-language communication system. It allows them the same kind of person-to-person interaction we enjoy with other people, pets and heads talking silliness on television. It's a closer rapport than programmers have traditionally had with computers through third- and lower-generation software. Even fourth-generation software, which allows two-way dialogs between people and machines, leaves something to be desired, as my wife's constant battles with automated telephone bill-paying systems demonstrate almost daily.


I don't have that problem because I still refuse to "talk" to an automated machine until it improves its communication skills. If I can't use the keypad, I'll just hang up.


Recently, there's been some buzz about human brain-to-brain interfacing over the internet. Researcher Rajesh Rao sent a brain signal over the internet to co-conspirator Andrea Stocco on the other side of the University of Washington campus, causing Stocco's finger to move on a keyboard. This was all good clean fun and a dandy proof of concept for technology that's been speculated about since God knows when.


Of course, controlling somebody else's body by any form of telepathy—even one requiring a whole pile of equipment in the link—raises all kinds of ethical questions that don't come up when the receiving brain is inside a robot, which is the next logical step. The demonstration, however, brings us to the next philosophical step, which I advocate taking early in the development of any technology: testing it against Scheiber's Law.


Just because you can, doesn't mean you should!


Actually making this technology work conjures up all kinds of bizarre gadgets from the pages of fantasy and science fiction literature. They've all been great fun to speculate about in the abstract, but the prospect of making them real should give us very definite pause. Maybe a full stop!


The biggest problem that I see is that at least 80 percent (and probably much more) of everything that goes on between any given person's ears should never see the light of day, let alone be immediately broadcast or acted upon.


We've all seen the motor-mouth syndrome, where too much excitement in a social environment, or even just too much coffee, leads a person to let every thought that crosses their mind dribble out of their mouth. The result is a plethora of badly edited utterances that demonstrate how silly most of what we think is.


What I like most about writing is that I can give free rein to this syndrome, then edit the resulting product into something that is at least not embarrassing before letting anyone else know about it. Our natural communications systems have a built-in delay that allows time for cleaning up our thoughts before making them public.


 

Imagine, however, if we shortcut that delay by tapping directly into our brains electronically. Douglas Adams was right when he characterized telepathy as the worst of all social diseases!

 

 

 

FIND A SOLUTION AT WESTPACK:
Click for a list of WestPack 2014 exhibitors showing:
Automation solutions
Robotics solutions

 


About the Author(s)

Lisa McTigue Pierce

Executive Editor, Packaging Digest

Lisa McTigue Pierce is Executive Editor of Packaging Digest. She’s been a packaging media journalist since 1982 and tracks emerging trends, new technologies, and best practices across a spectrum of markets for the publication’s global community. Reach her at [email protected] or 630-272-1774.
