Psychophysics and the bizarre science behind robots that know what we want

Researchers are teaching robots to read the subtle nonverbal cues that betray our intentions.
Written by Greg Nichols, Contributing Writer

If you have a physical disability that makes it difficult to feed yourself, a teleoperated robotic arm might help. Small industrial robotic arms have been adapted to wheelchairs for exactly that purpose. With advances in actuator technology, the precision and capabilities of those arms have improved dramatically.

But those improvements, which are supposed to make life easier for the user, paradoxically make the robots much harder to control. With multiple degrees of freedom and complex grippers, the training and dexterity required to teleoperate the robots make them less viable as aids to those with certain kinds of disabilities.

Also: This robotic arm for multitasking can be controlled with thoughts

But what if a robot could predict what its operator wanted?

"Think about last time you did a cooperative task with a partner," Henny Admoni, head of the Human and Robot Partners (HARP) Lab at Carnegie Mellon University, told me on a recent call. "Take cooking in the kitchen. You're able to give off nonverbal cues to a partner as you work across from each other."

Someone glances at a spatula, and the intuitive, attentive partner passes it over without a word.

"The way we do this is by picking up non-verbal cues," explains Admoni. "What I'd like to do is make robots as good at doing that as humans are."

It's a fascinating challenge, part of the still-emerging research field of Human-Robot Interaction (HRI). Drawing from disciplines like cognitive psychology, anthropology, and ethnology, robotics researchers are studying how people interact in the real world and express their intentions.

"The beautiful thing about HRI is it's so interdisciplinary," says Admoni. "These are very open research questions."

Cognitive psychology, including a branch called psychophysics, has made headway in understanding how, when, and why humans give off nonverbal cues. But people are complex and display astonishing variability. There's no generalizable model for cue giving and receiving, no dictionary of cues that might be used to program a robot to respond appropriately.

For Admoni, a big part of the answer lies in machine learning. Using a baseline understanding of cues, robots can learn to match perceived cues with the resulting human behavior:

We have some sense of what cues certain people pick up on because psychology has given us some insight. Robots can automatically detect those cues and then learn from observing people what effect those are correlated to. Is a grumpy face correlated to a violation of personal space? Is looking around correlated to not having the right tool in front of you? The robots learn over time in a data-driven approach.
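
That learning loop can be sketched in a few lines of code. The Python below simply tallies which behavior follows each detected cue and predicts the most frequent one; the cue and effect labels are invented for illustration, and a real system would use far richer perception and models than this.

    from collections import Counter, defaultdict

    class CueEffectModel:
        """Estimate P(effect | cue) from logged observations of people.

        The labels ('glance_at_spatula', 'wants_spatula') are
        illustrative placeholders, not a real cue vocabulary.
        """

        def __init__(self):
            self.cue_counts = Counter()               # how often each cue was seen
            self.joint_counts = defaultdict(Counter)  # cue -> behaviors that followed

        def observe(self, cue, effect):
            # One logged interaction: a detected cue and the behavior
            # the human actually exhibited afterward.
            self.cue_counts[cue] += 1
            self.joint_counts[cue][effect] += 1

        def predict(self, cue):
            # Most likely effect for this cue, with its empirical probability.
            if cue not in self.cue_counts:
                return None, 0.0
            effect, count = self.joint_counts[cue].most_common(1)[0]
            return effect, count / self.cue_counts[cue]

    model = CueEffectModel()
    for _ in range(9):
        model.observe("glance_at_spatula", "wants_spatula")
    model.observe("glance_at_spatula", "checking_clock")
    print(model.predict("glance_at_spatula"))  # ('wants_spatula', 0.9)

The data-driven part is just that: the more interactions the robot logs, the better its conditional estimates get.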

The science is still young, but what it promises is the granddaddy of all UX: machines that feel truly responsive to our needs. Whereas the computer or mobile device you're reading this on can only respond to direct input, such as a keystroke, a computer of the future might anticipate that you're about to share this article with a friend and open a draft email for you. And all that based on your goofy, bright-eyed grin.

Also: What is machine learning? Everything you need to know

The same kind of intention-based control can make complex robots, such as assistive robotic arms, much easier and more intuitive to operate. A human manipulates a joystick, and the robot, in turn, predicts what the human is about to do and helps achieve that goal.

The arrangement is called shared autonomy, and it will play a major role in how social robots react to humans in the near future.
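
One common arbitration scheme in the shared-autonomy literature blends the human's joystick command with the robot's assistive command, weighted by how confident the robot is in its goal prediction. Here is a minimal 2D sketch in Python; the goal names, geometry, and cosine-similarity confidence measure are all simplifying assumptions, not a description of Admoni's system.

    import math

    def predict_goal(position, joystick, goals):
        # Score each candidate goal by how well the joystick input points
        # toward it (cosine similarity), and return the best guess along
        # with a confidence in [0, 1].
        scores = {}
        for name, (gx, gy) in goals.items():
            to_goal = (gx - position[0], gy - position[1])
            norm = math.hypot(*to_goal) * math.hypot(*joystick) or 1e-9
            scores[name] = (to_goal[0] * joystick[0] + to_goal[1] * joystick[1]) / norm
        best = max(scores, key=scores.get)
        return best, max(scores[best], 0.0)

    def shared_control(joystick, robot_cmd, confidence):
        # Arbitrate: the more confident the goal prediction, the more
        # the robot's assistance dominates the blended command.
        a = confidence
        return ((1 - a) * joystick[0] + a * robot_cmd[0],
                (1 - a) * joystick[1] + a * robot_cmd[1])

    goals = {"cup": (1.0, 0.0), "plate": (0.0, 1.0)}
    goal, conf = predict_goal((0.0, 0.0), (0.9, 0.1), goals)
    # In a real system, robot_cmd would come from a planner moving toward `goal`.
    blended = shared_control((0.9, 0.1), (1.0, 0.0), conf)

The appeal of the design is that a hesitant or noisy joystick signal still gets through when the robot is unsure, while a confident prediction lets the robot smooth out the fine motor work.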

And the work cuts both ways. Just as robots will need to pick up on subtle intentional cues to integrate into the human world, they'll also need to transmit those cues to humans if they are to pass as realistic agents.

The research is still in its early days. In addition to her work on assistive devices, Admoni is collaborating with other CMU researchers on a project investigating how robots might be used from stovetop to tabletop in a restaurant, a partnership between CMU and Sony Corporation.

Part of Admoni's job is to figure out how robot servers can safely navigate crowds while carrying piping hot food. Like a good waiter, a robot server will have to read the intentions of the crowd and plan an appropriate path in order to avert a clatter of dropped dishes and scalded patrons.
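
At a sketch level, that kind of intention-aware navigation can be framed as path scoring: a candidate route's cost is its length plus penalties for passing close to where people are predicted to be. The Python below is purely illustrative; the geometry, safe radius, and penalty weight are made-up assumptions, not the CMU-Sony project's planner.

    import math

    def path_cost(path, predicted_people, safe_radius=1.0):
        # Cost = distance traveled + penalties for entering anyone's
        # predicted personal space.
        cost = 0.0
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            cost += math.hypot(x1 - x0, y1 - y0)
        for x, y in path:
            for px, py in predicted_people:
                d = math.hypot(x - px, y - py)
                if d < safe_radius:
                    cost += (safe_radius - d) * 10.0  # steep proximity penalty
        return cost

    # Pick the cheapest of several candidate routes across the dining room.
    candidates = [[(0, 0), (1, 1), (2, 2)], [(0, 0), (2, 0), (2, 2)]]
    predicted_people = [(1.0, 1.2)]  # where a patron is expected to be
    best = min(candidates, key=lambda p: path_cost(p, predicted_people))

Here the longer detour wins, because the direct diagonal passes right through a patron's predicted position.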

Also: 5 things to know about soft robotics (TechRepublic)

"It's very cross-cutting in terms of discipline. We're looking at robotics through the lenses of manipulation, vision, deep learning, AI. And food is such a compelling topic, just a really great domain."

I'm calling it now: It's going to feel weird not tipping your robot waiter after receiving such exquisite service.
