
“Beyond simply being autonomous,” said Jason Pontin, editor of MIT’s Technology Review, “robots today are expected to live among us, collaborate with us–even understand us.”
Pontin, speaking on Monday to a sold-out crowd at EmTechDIGITAL 2016 at the St. Regis Hotel in San Francisco, was setting the stage for a session on sociable robots, “flexible, more human robots that are mobile, dextrous, and socially assistive.”
While robots have been performing labor in warehouses and factories for decades, social robots are a more recent development. And, according to Pontin, building these types of robots poses some of the greatest challenges in technology today.
The panel included Maja Mataric, founding director of the USC Robotics and Autonomous Systems Center; Julie Shah, professor at MIT; and Manuela Veloso, professor at Carnegie Mellon.
“The implications of robots in our lives and work are important to consider,” said Mataric.
Mataric talked about “the care gap”–something she sees as a “huge challenge, and a huge opportunity.” Robots, she explained, can inspire people to do physical therapy, providing “motivation for them to do the work.”
They also “encourage socialization, provide encouragement, challenge, and care.” The embodiment of these socially assistive robots, Mataric said, is the key to their success in these areas.
Another important consideration for social robots is how they are designed. Should they have bodies? Mataric said yes.
“Why does embodiment matter?” she asked. Because humans interact differently with a robot versus a screen.
SEE: 6 ways the robot revolution will transform the future of work (TechRepublic)
“Embodiment is what makes us fundamentally human,” said Mataric. “We perceive others and relate to them through our bodies.”
A stroke patient, for example, can be encouraged by a robot to continue to exercise and keep up with rehabilitation.
“She won’t do it on her own,” Mataric said. “She needs hours of help each day.”
Part of why patients work well with humanoid robots is because of the “pride in accomplishing something that you can share with an agent,” she said. “It is inherent to our wiring as social creatures.”
Shah then shared her vision of how humans and robots can work together in the future to accomplish more: how we can “harness the strengths of robots to accomplish what neither human nor robot can do alone.”
“We should be making robots into better teammates,” Shah said.
In today’s “teams,” Shah said, members co-exist rather than collaborate, which is “far short from being effective teammates.” For example, surgeons give step-by-step commands to assistive robots. What usually happens, she said, is that we “look at human behavior and design robots around it.”
But, she said, there’s potential to collaborate in deeper ways. How? By having robots infer human feelings. “Time spent together influences our expectations,” she said.
She sees this as a paradigm shift for how robots work with us.
Shah’s work looks at human cognitive models to help machines work with us the way people do. “Human cognition is special,” said Shah, “because of the human mind’s ability to process complex info efficiently.”
What do we want machines to infer?
“It’s more than finding the right features,” said Shah. “We need machines to process info as efficiently as people do. When someone makes a decision or takes action, that decision is equal to or better than the other possibilities out there,” she said. So “robots can watch us and learn unwritten rules to collaborate like a human team member.”
In one example she gave, robots trained on a defense task were able to learn high-skill behavior from just 16 individual game plays. Her team first took a set of hundreds of expert plays and pulled out the near-perfect-score set, so the machines trained on that small data set of how humans would perform.
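The recipe Shah described, distilling a large pool of expert demonstrations down to the near-perfect ones and training a policy to mimic them, is the basic idea behind learning from demonstration. The sketch below illustrates that idea in the abstract; the data structures, score threshold, and choice of classifier are placeholders of our own, not details of Shah’s system.

```python
# A minimal learning-from-demonstration sketch: keep only near-perfect
# expert demonstrations, then train a policy to mimic the actions chosen.
# All structures and thresholds here are illustrative, not Shah's system.
from dataclasses import dataclass
from typing import List

from sklearn.linear_model import LogisticRegression  # simple stand-in policy


@dataclass
class Demonstration:
    states: List[List[float]]   # feature vector describing each time step
    actions: List[int]          # action the expert took at each step
    score: float                # how well the expert performed overall


def select_near_perfect(demos: List[Demonstration], threshold: float = 0.95):
    """Pull the near-perfect-score subset from a large pool of expert plays."""
    return [d for d in demos if d.score >= threshold]


def train_policy(demos: List[Demonstration]) -> LogisticRegression:
    """Fit a policy that predicts the expert's action from the game state."""
    X = [s for d in demos for s in d.states]
    y = [a for d in demos for a in d.actions]
    policy = LogisticRegression(max_iter=1000)
    policy.fit(X, y)
    return policy


# Usage: given `all_demos` gathered from many experts, train only on the
# small, high-quality subset -- e.g. the handful of best game plays.
# best = select_near_perfect(all_demos)
# policy = train_policy(best)
# next_action = policy.predict([current_state_features])
```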
Continuing the theme, Veloso showed how robots and humans can collaborate at work. Her mobile robots navigate the hallways at the university using a sensor-based detection system rather than GPS, interacting with staff and students along the way.
At Carnegie Mellon, she said, these robots are not part of an experiment–they’re part of the fabric of daily life. Students are escorted to offices by the robots. The robots, which work in “symbiotic autonomy” with humans, have many limitations, she acknowledged. They can’t open doors. They can’t walk up stairs. They can’t press the button for the elevator.
To solve these problems, the robots must ask for help.
“Can you please push the button and hold the door open?” the robot will ask. It does not know if the person is there, but it tries anyway. “Can you please press 8 and tell me when we’re at the right floor?”
SEE: Why robots still need us: David A. Mindell debunks theory of complete autonomy (TechRepublic)
Or, if a robot is blocked by an obstacle, it can send out an email. “Can someone please come to X to rescue me? I have waited more than five minutes.”
Veloso also demonstrated how the robot, which looks like a laptop on a stand with wheels, can get coffee from the kitchen.
It can find the kitchen, then will ask someone “Hello, can you please put coffee in the basket and press done when I’m ready to go?”
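This pattern of planning around limitations by recruiting nearby people is what Veloso calls symbiotic autonomy. The sketch below shows the general shape of such a help-request loop; the step names, prompts, and confirmation behavior are invented for illustration and are not CoBot’s actual software.

```python
# A rough sketch of a symbiotic-autonomy loop: the robot executes the steps
# it can do alone and asks a human (or emails for rescue) when it cannot.
# Step names, prompts, and messages are illustrative, not CoBot's real code.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Step:
    description: str
    execute: Optional[Callable[[], None]] = None  # None => robot needs help
    help_prompt: str = ""


def ask_nearby_human(prompt: str) -> bool:
    """Speak or display the request and report whether someone helped in time."""
    print(f"Robot: {prompt}")
    # A real system would watch a "done" button or its sensors; here we
    # simply simulate a person confirming they helped.
    return True


def send_rescue_email(location: str) -> None:
    print(f"Email: Can someone please come to {location} to rescue me? "
          "I have waited more than five minutes.")


def run_task(steps: List[Step], location: str) -> None:
    for step in steps:
        if step.execute is not None:
            step.execute()                      # e.g. navigate to the kitchen
        elif not ask_nearby_human(step.help_prompt):
            send_rescue_email(location)         # escalate if no one helps
            return


# Example: the coffee run Veloso demonstrated, expressed as steps.
coffee_run = [
    Step("go to kitchen", execute=lambda: print("Navigating to kitchen...")),
    Step("load coffee",
         help_prompt="Hello, can you please put coffee in the basket "
                     "and press done when I'm ready to go?"),
    Step("return to office", execute=lambda: print("Returning to office...")),
]
run_task(coffee_run, location="the kitchen")
```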
But, aside from simply being a way to work around the robot’s limitations, asking for help has another advantage: It can increase trust.
In a Q&A at the end of the session, Veloso was asked what steps must be taken to increase trust in robots.
“The process of generating explanations, and challenging a robot to explain what is happening, will increase trust,” Veloso said. “Robots should be accountable.”
Also see…
- Future jobs: How humans and robots will complement each other (TechRepublic)
- Amazon, robots and the near-future rise of the automated warehouse (TechRepublic)
- Can the presence of a robot affect whether humans behave ethically? (TechRepublic)
- Why China is scooping up robots from Rethink Robotics to solve its manufacturing problem (TechRepublic)