How to Get a Robot to (One Day) Do Your Chores

Reported by WIRED:

Perhaps the greatest outrage in modern robotics is the continued non-existence of the robot housekeeper. Is it really so much to ask for a robot that sweeps and mops and brings you pills on platters, like Rosie from The Jetsons?

Actually, it kind of is a lot to ask: A robot that can do even the simplest of chores (save for vacuuming), like setting a table, is a huge challenge because such tasks require both dexterity and planning. But the scientists at the MIT Computer Science and Artificial Intelligence Laboratory are working toward a world where robots make our coffee and set our tables. And that research is happening inside a simulation. Because if we want the machines to run our homes instead of leveling them, we have to train them up right.

You spend a good amount of your day on autopilot. For instance, I don’t imagine you put much reasoning into making a cup of coffee. You don’t think:

Open cabinet > grab coffee > close cabinet > put coffee down > open different cabinet > grab mug > close cabinet > turn on coffee maker …

You get the point. What comes so easily to you is in fact an extremely complex set of instructions for a theoretical robot. So these researchers created software versions of humanoid robots in a simulation, which break each task into “atomic actions,” the small steps it comprises. “They could be switching on a television if you want to watch TV, or opening a fridge to grab milk to make coffee,” says MIT CSAIL computer scientist Xavier Puig, lead author on a new paper describing the system.
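
To make that concrete, here is a rough sketch of what such a program might look like; the action and object names are hypothetical stand-ins, not the paper’s actual vocabulary:

# Hypothetical sketch: a chore written as an ordered list of atomic
# actions, each pairing a verb with the object it acts on. The names
# are illustrative, not taken from the CSAIL system.
make_coffee = [
    ("walk_to", "cabinet"),
    ("open", "cabinet"),
    ("grab", "coffee"),
    ("close", "cabinet"),
    ("put_down", "coffee"),
    ("open", "other_cabinet"),
    ("grab", "mug"),
    ("close", "other_cabinet"),
    ("switch_on", "coffee_maker"),
]

# Print the program step by step, the way a planner might log it.
for verb, obj in make_coffee:
    print(f"{verb}({obj})")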

These atomic actions link together to produce what is essentially a molecule: a complex task. Describing tasks at this granularity gives the simulated humanoid “robots” a common taxonomy to draw on. Using it, a robot executes chores, which the researchers have modeled as computer programs. So the output, as the researchers’ video shows, is a clip of a robot working in a synthetic environment, approaching a TV and clicking it on and sitting down … kinda awkwardly.
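
In that spirit, a minimal executor (my own toy, not the CSAIL code) might step through a program and update a simple model of the room as it goes:

# Toy sketch of executing an action program against a simulated
# environment, assumed here to be a plain dictionary of object states.
def execute(state, program):
    for verb, obj in program:
        if verb == "walk_to":
            state["agent_at"] = obj
        elif verb == "switch_on":
            state[obj] = "on"
        elif verb == "sit":
            state["agent_at"] = obj
        print(f"{verb}({obj}) -> {state}")
    return state

state = {"tv": "off", "agent_at": "door"}
watch_tv = [("walk_to", "tv"), ("switch_on", "tv"), ("sit", "sofa")]
execute(state, watch_tv)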

After creating this system for chores, Puig and his colleagues can then run it in reverse. “We also show a model that takes a video in our synthetic environment and learns to reconstruct the program that generated this video,” says Puig. In other words, the system can recognize that a robot is performing a certain task, then recreate it.
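
A toy version of that idea, with a stand-in for the learned recognizer (classify_frame here is faked so the sketch stays self-contained; the real system uses a model trained on the synthetic videos), might label each frame with an atomic action and collapse consecutive repeats into a program:

# Toy video-to-program reconstruction. classify_frame stands in for a
# learned per-frame action recognizer; here each "frame" is already
# its own label.
def classify_frame(frame):
    return frame

def video_to_program(frames):
    program = []
    for frame in frames:
        action = classify_frame(frame)
        if not program or program[-1] != action:  # collapse repeats
            program.append(action)
    return program

frames = ["walk_to(tv)"] * 3 + ["switch_on(tv)"] * 2 + ["sit(sofa)"] * 4
print(video_to_program(frames))
# -> ['walk_to(tv)', 'switch_on(tv)', 'sit(sofa)']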

The next step, of course, is to get the system to watch a video of a human performing a chore like setting the table and break it down into its component parts (the task, not the table itself). Down the road, when home robots do finally exist, you could theoretically upload this kind of knowledge into their brains, like Neo in The Matrix downloading kung fu lessons.

Or, alternatively, a robot right there in the room could observe its human owner do a task, then learn by example. This will be particularly useful when you consider that you might collaborate with a home robot to complete a chore, and it will have to adapt to your particular order of doing things. At what point do you add cream to your coffee? Do you even like cream in the first place? The robot will figure it out. “It could learn to anticipate future actions and be able to change the environment for the human,” says Puig. “So if it sees that they’re starting to grab the ground coffee, it could go to the fridge and bring milk.”
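
One crude way to get that kind of anticipation, far simpler than anything in the actual research, is to count which action has tended to follow which in a person’s past routines and prefetch accordingly:

from collections import Counter, defaultdict

# Crude anticipation sketch: tally which atomic action most often
# follows each one in previously observed routines (made-up data).
routines = [
    ["grab(ground_coffee)", "grab(milk)", "switch_on(coffee_maker)"],
    ["grab(ground_coffee)", "grab(milk)", "grab(mug)"],
    ["grab(ground_coffee)", "grab(sugar)", "grab(milk)"],
]

follows = defaultdict(Counter)
for routine in routines:
    for current, nxt in zip(routine, routine[1:]):
        follows[current][nxt] += 1

def anticipate(action):
    candidates = follows[action]
    return candidates.most_common(1)[0][0] if candidates else None

# Seeing you grab the ground coffee, the robot heads for the milk.
print(anticipate("grab(ground_coffee)"))  # -> grab(milk)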

But that’s years and years away. The virtual agents in this simulation are working in a static environment—chairs and sofas and mugs arranged as they should be—but that’s not how the real home works. Kids run around, toy cars appear out of nowhere, chairs shift. So robots will have to keep training in a virtual world that’s more unpredictable before they enter the chaos of the home.

And that’s going to be a big leap. “The question remains of how to turn action programs into safe and intelligent behavior for a real robot in the real world,” says James Bergstra, co-founder and head of AI research at Kindred, which uses machine learning to teach robots how to manipulate objects. “But this work represents progress in understanding what people are saying to a robot in terms of what they’d like it to do.”

And even when an environment is relatively predictable, robots still struggle with manipulating objects. We live in a world built for human hands, full of door handles and TV remotes, yet no robot hand (formally known as an end effector) comes close to replicating the dexterity you enjoy. The machines will have to get a whole lot better at manipulation, because the margin for error here is virtually zero. A robot can’t grasp a coffee mug with 90 percent accuracy, or 95 or 96; it needs 100 percent accuracy. An error rate of just 1 percent means one dropped mug out of 100, a small yet unacceptable figure if you want a robot you don’t end up strangling.
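
The arithmetic is easy to check with a quick back-of-the-envelope calculation (the ten-grasps-a-day figure is invented for illustration):

# Expected dropped mugs per year at various per-grasp success rates,
# assuming a made-up ten mug grasps per day.
grasps_per_year = 10 * 365
for success_rate in (0.90, 0.95, 0.99):
    drops = grasps_per_year * (1 - success_rate)
    print(f"{success_rate:.0%} success -> ~{drops:.0f} dropped mugs a year")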

Rosie the Robot is a long, long way off. And it’s not particularly likely that home robots will look like humans in any case, given how much effort it takes to stand on two legs. But when robots finally do make our coffee and set our tables, their careful handling of our favorite mugs will have been born in a simulation.


Source: WIRED
