Robots could soon play maid and butler in homes, with a droid now programmed to scan a messy room, identify all items, figure out where they belong and put them back in place.
Such robots also could help pack warehouses and clean up auto repair shops, researchers say.
Previously, scientists had developed robots that can grasp objects, but when it came to setting them back down, the machines could place only single items on flat surfaces. Now researchers are developing machines that can survey a group of objects and place them in complex 3D spaces.
The robot, which has a single mechanical arm, surveys objects in rooms by using a Microsoft Kinect camera, which is equipped with an infrared scanner to help create 3D models of items. The Kinect was originally developed for video gaming but is being widely used by roboticists to help robots navigate rooms.
The droid weaves together many images to create an overall picture of a room. It then segments this view into blocks based on color and shape, and computes how likely each block is to be a given object. Finally, it decides on an appropriate home for the item, creates a 3D model of the target space, and sets the object down there, taking into account the shapes of both the item and the space to ensure a stable placement.
Before the exercise, the robot is shown examples of various kinds of items, such as books, to learn what characteristics they have in common. The droid is also shown examples of where to place objects, and from these it learns where similar objects might or might not go, such as knowing not to put shoes in the refrigerator.
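The classify-then-place pipeline described above can be sketched in a few lines. The following Python is a toy illustration only, under broad assumptions: the feature values, class names, placement areas, and the nearest-neighbor scoring are all hypothetical stand-ins for the researchers' actual learned models.

```python
import math

# Hypothetical learned examples: a feature vector per observed instance
# (here, an illustrative pair such as color hue and aspect ratio).
TRAINING_EXAMPLES = {
    "book":  [(0.20, 1.40), (0.25, 1.50)],
    "plate": [(0.80, 1.00), (0.75, 0.95)],
    "shoe":  [(0.50, 2.00), (0.55, 2.10)],
}

# Hypothetical learned placement preferences: where each class belongs.
PLACEMENT_AREAS = {
    "book": "bookshelf",
    "plate": "dish rack",
    "shoe": "closet",
}

def classify(block_features):
    """Return the most likely object class for a segmented block,
    scored by nearest-neighbor distance to the training examples."""
    best_class, best_dist = None, float("inf")
    for label, examples in TRAINING_EXAMPLES.items():
        for example in examples:
            d = math.dist(block_features, example)
            if d < best_dist:
                best_class, best_dist = label, d
    return best_class

def choose_placement(block_features):
    """Classify a block, then look up a suitable placement area."""
    label = classify(block_features)
    return label, PLACEMENT_AREAS[label]

label, area = choose_placement((0.22, 1.45))
print(label, area)  # a book-like block is routed to the bookshelf
```

The real system also builds a 3D model of the target space and checks that the object's shape fits it stably before releasing, a step this sketch leaves out.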
The researchers' robot tidied up dishes, books, egg cartons, toys, clothing and other items — 98 objects in all — by placing them in 40 areas, such as bookshelves, dish racks, refrigerators, closets and on tables.
The robot proved up to 98 percent successful in recognizing and correctly putting away objects it had seen before.
"How can you possibly imagine that if a robot has neither seen a martini glass nor the stemware holder before, it would be able to put it away?" said researcher Ashutosh Saxena, a roboticist at Cornell University. "We show that it puts it away successfully — a hard task to do."
"It learned the common-sense physics principles of stability," Saxena told InnovationNewsDaily. "Learning these underlying principles from data allowed it to handle and adapt to new situations."
The robot was also capable of placing objects it had never seen before, but success rates fell to an average of 82 percent. Objects that were most often misidentified had ambiguous shapes — for instance, clothing and shoes. In addition, "perceiving whether a beer bottle is full or empty is hard, and therefore it has never quite figured out what to do with beer bottles — it just throws all of them into the recycling bin, empty or full, for now," Saxena said.
The world already has vacuum cleaner robots, with more than 8 million Roombas sold, and "very soon, I think two to four years, we'll see more capable robots — for example, a 2-foot-tall robot with a small arm that not only vacuums the floor, but also picks up and places things on the side," Saxena said. He noted that his team will soon have mobile robots of this kind that it can program with its algorithms.
Still, "this work is only a first step towards a cleaning and house-arranging robot," Saxena said. "A lot needs to be done before this robot could be useful. Would you be happy if it breaks one out of five glasses? No. What about one in 50? Maybe. Breaking only one in 5,000 would be really awesome. However, it takes a lot to go from 1 in 50, where we are now, to breaking only 1 in 5,000."
The researchers hope to improve the robot with higher-resolution cameras. Tactile sensors in the droid's hand also could help it know whether an object is in a stable position and can be released.
The machine also could be programmed to understand preferences about where objects belong; for instance, the TV remote control ideally would go next to the sofa, in front of the TV.
Saxena and his colleagues detailed their findings online in the May issue of the International Journal of Robotics Research.