[caption id="attachment_13757" align="aligncenter" width="484"]Cornell's robot has to learn how to move a knife without scaring customers[/caption]

Before robots or any other automation system can be useful enough to save humans time or effort, they have to learn how to do their jobs, and then learn how to do those jobs without killing or terrifying the humans they serve.

Computer-driven, fully automated robotic arms already handle welding, riveting, painting and a host of other complex tasks in manufacturing plants, so getting one to move food and utensils down the checkout counter of a grocery store shouldn't be difficult. It can still be terrifying to customers, though, according to a new report from researchers at Cornell University, illustrated with this video.

Not all humans share the same idea of how far their personal space extends from their bodies, but nearly all react strongly when a robot waves a knife blade close to them, according to the Cornell team, which has created a programming framework designed to teach robots how to move things along trajectories that don't frighten or endanger humans.

The team programmed a Baxter robotic arm from Rethink Robotics in Boston to move groceries and other objects along a mock grocery-store checkout line staffed by lab humans, who can correct any mistake by moving the robot's arm in a way that accomplishes the same task more effectively.

The shortest distance from the beginning of the checkout counter to the bag at the end is a straight line, but that trajectory is a problem when customers stand close enough to the counter that the knife blade slashes through their personal space on the way, according to the paper, an early version of which is posted here. (PDF)

The Cornell team built into the task-instructional framework the ability to "coactively learn" from humans who grab the robot's arm in mid-slash and guide it to a more acceptable distance. Relying on human corrections keeps the robot from having to know every aspect of every task before it starts, and spares developers from reprogramming it every time it squashes a tomato, drops a can through a plastic bag, or inadvertently threatens a customer.

"We give the robot a lot of flexibility in learning," said Ashutosh Saxena, assistant professor of computer science at Cornell, in a published statement. "The robot can learn from corrective human feedback in order to plan its actions that are suitable to the environment and the objects present."

The robot doesn't simply repeat the first correction; it takes a series of corrections into account to calculate a better way to accomplish its goal. It takes approximately five corrections from a human for the robot to learn its lesson, Saxena said.
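Conceptually, that kind of coactive learning can be framed as a preference-style weight update: the robot scores candidate trajectories with a weighted set of features, and each human correction nudges the weights toward the features of the corrected trajectory and away from the one the robot had proposed. The sketch below is a minimal illustration of that idea under assumed feature and trajectory representations; it is not the Cornell framework's code, and the feature map, toy trajectories, and learning rate are all hypothetical stand-ins.

```python
import numpy as np

def features(trajectory):
    """Hypothetical feature map: average clearance from the customer plus a
    smoothness penalty. The real framework would use richer trajectory features."""
    traj = np.asarray(trajectory, dtype=float)
    clearance = traj[:, 0].mean()                       # distance to nearest person
    smoothness = -np.abs(np.diff(traj[:, 1])).sum()     # penalize jerky motion
    return np.array([clearance, smoothness])

def best_trajectory(candidates, w):
    """Pick the candidate trajectory the current weights score highest."""
    return max(candidates, key=lambda t: float(w @ features(t)))

def coactive_update(w, proposed, corrected, lr=1.0):
    """Preference-style update: shift the weights toward the human-corrected
    trajectory and away from the robot's own proposal."""
    return w + lr * (features(corrected) - features(proposed))

if __name__ == "__main__":
    # Toy data: each trajectory is a list of (clearance, height) waypoints.
    close_pass = [(0.1, 0.2), (0.1, 0.5), (0.1, 0.2)]   # slashes through personal space
    wide_pass  = [(0.6, 0.2), (0.6, 0.3), (0.6, 0.2)]   # keeps its distance

    w = np.zeros(2)
    for _ in range(5):  # roughly the handful of corrections described above
        proposed = best_trajectory([close_pass, wide_pass], w)
        w = coactive_update(w, proposed, corrected=wide_pass)

    # After a few corrections the weights favor the wider, less alarming path.
    assert best_trajectory([close_pass, wide_pass], w) == wide_pass
```

In this toy loop the corrected trajectory never has to be optimal, only better than the robot's proposal; repeating the correction a handful of times is enough to tilt the scoring toward safer paths, which mirrors the "about five corrections" behavior described above.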
