
The key to human-robot relationships is vocalization, says Carnegie Mellon

Carnegie Mellon's robots are able to listen to and repeat instructions from humans. Learn why this helps build trust.

Manuela Veloso
Image: Hope Reese/TechRepublic

As we enter the era of human-robot relationships, where machines will have a large role in assisting humans in nursing homes, schools, on cruise ships, and anywhere else that extra guidance is needed, questions about communication remain. How will we issue commands? How can we ensure that robots have properly heard and understood our instructions? And how can we know if they run into problems?

At Carnegie Mellon, professor Manuela Veloso has been working with "co-bots"—robots that assist faculty, staff, and students. The co-bots are, essentially, laptops on a stand with a basket and wheels—no limbs. More than mere experiments, the robots are part of daily life in some corners of the university, guiding students and visitors, delivering items throughout large buildings, and traveling up and down elevators and along long corridors. How does it all work? Vocalization.

TechRepublic spoke with Veloso about the intentional design of the co-bots, and how they are able to bridge the trust gap with humans.

Why is vocalization important in communicating with robots?

These robots function in a different space than our space. They are all about having maps and coordinates and probabilities and handling a lot of numerical data, be it pixels or images. It's all another world. If we enable these agents to actually make decisions, to turn left and to turn right, to bring you this and go that way, you start questioning what they are doing. I've been trying to understand this gap of representation between the actual robot and people who understand language.

SEE: Why robots must explain, listen, and ask for help (TechRepublic)

I've been investing a lot into this concept of verbalization, in which the robot can verbalize, or speak up, when we ask questions about what it did and why it did it. It is a little bit of a challenge when you have black boxes that just process data and give us an outcome, and we don't even know why something happened. We have to understand how to bridge the gap between the language of the machine, which is intelligent, and the human, who is also intelligent. It's almost that we have to be able to speak the same language or to try to understand each other.
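A minimal sketch of what that verbalization idea could look like in code, assuming a hypothetical log of route legs and an illustrative verbalize function; this is not CMU's actual system, just the pattern of answering "what did you do?" at different levels of detail:

```python
# Hypothetical sketch: turning a co-bot's internal route log into a spoken summary.
# The log format and function names are illustrative, not CMU's actual system.

from dataclasses import dataclass

@dataclass
class LegRecord:
    start: str         # symbolic place name, e.g. "office 7002"
    end: str           # e.g. "elevator bank B"
    distance_m: float  # distance traveled on this leg, in meters
    seconds: float     # time the leg took

def verbalize(route: list[LegRecord], detail: str = "brief") -> str:
    """Answer 'what did you do?' at two levels of detail."""
    total_m = sum(leg.distance_m for leg in route)
    total_s = sum(leg.seconds for leg in route)
    if detail == "brief":
        return (f"I went from {route[0].start} to {route[-1].end}, "
                f"about {total_m:.0f} meters in {total_s / 60:.1f} minutes.")
    # "full" detail: narrate every leg so a person can question any step
    steps = [f"{leg.start} to {leg.end} ({leg.distance_m:.0f} m)" for leg in route]
    return "I traveled " + ", then ".join(steps) + "."

route = [LegRecord("office 7002", "elevator bank B", 42, 65),
         LegRecord("elevator bank B", "kitchen, floor 3", 18, 90)]
print(verbalize(route))
print(verbalize(route, detail="full"))
```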

Your robots are moving around without GPS.

They're sensing the walls around them and the spaces around them. They are typically computing where the walls in the building are. For example, they will not think that a person is a wall. They will not think that a table is a wall. It has to be a vertical kind of surface and it has to be a plane. When they detect these walls, what they also detect is how far away they are from that wall, so they have distance.

It's almost as if you are in Paris and you see the Eiffel Tower at a distance, right? Then you are going to know, "I have to be somewhere in the middle." You kind of know where you are, but seeing the landmark in the distance orients you.
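As an illustration of that idea, here is a hedged sketch (not CMU's code) of how a 2D range scan could be tested for "wall-ness" and turned into a distance: a cluster of scan points counts as a wall only if it is well fit by a straight line, and the perpendicular distance to that line is what the robot keeps. The thresholds and the numpy-based fit are assumptions:

```python
# Hedged sketch: from a 2D range scan (robot at the origin), accept a cluster
# of points as a "wall" only if it lies on a straight line (the planar test),
# then report the perpendicular distance to it. Thresholds are illustrative.

import numpy as np

def wall_distance(points: np.ndarray, max_residual: float = 0.03):
    """points: Nx2 array of (x, y) hits in meters, robot at (0, 0).
    Returns distance to the fitted wall, or None if the cluster is not planar."""
    centroid = points.mean(axis=0)
    # Total least squares line fit: the direction of least variance is the wall normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    c = normal @ centroid                 # signed distance of the line from the origin
    residuals = np.abs(points @ normal - c)
    if residuals.mean() > max_residual:   # too curved or scattered: a person, not a wall
        return None
    return abs(c)                         # perpendicular distance from the robot

scan_cluster = np.array([[2.0, y] for y in np.linspace(-1, 1, 50)])  # wall 2 m ahead
print(wall_distance(scan_cluster))        # ~2.0
```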

You can't move these robots into some other place, right?

It's a very deep, technical question. It's called the "kidnapped robot" problem. For example, your child falls asleep in your bed, and you carry them to their own bed. Then, in the morning, they wake up and say, "Where am I? I did not move by myself from one to the other." That's the kidnapping problem in robotics.

SEE: 6 ways the robot revolution will transform the future of work (TechRepublic)

We have to have other algorithms for this thing called global localization. You wake up, you open your eyes and we have an algorithm to actually self-localize. There is a scan, it sees the environment, and then matches that environment to multiple places. Basically, these robots can be lost. They can have uncertainty about their location, but they also have mechanisms to resolve that uncertainty.
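A toy sketch of that resolve-the-uncertainty step, under the assumption of a simple discrete belief over named places and a stand-in score_match function; a real system would match range scans against wall maps, but the Bayes-style update has the same shape:

```python
# Hypothetical sketch of global localization: the robot keeps a probability
# over candidate places and sharpens it each time a new scan is matched
# against the stored map of each place. score_match and the place names
# are illustrative assumptions, not CMU's implementation.

def score_match(scan, place_map) -> float:
    """Likelihood that `scan` was taken in the place described by `place_map`.
    A real system would compare range data against the wall map; this stand-in
    just counts shared landmarks and returns a value in (0, 1]."""
    shared = len(set(scan) & set(place_map))
    return max(shared / max(len(place_map), 1), 1e-3)

def update_belief(belief: dict, scan, maps: dict) -> dict:
    """One Bayes update: weight each place by how well the scan matches it."""
    weighted = {place: prob * score_match(scan, maps[place])
                for place, prob in belief.items()}
    total = sum(weighted.values())
    return {place: w / total for place, w in weighted.items()}

maps = {"corridor 7": {"long-wall", "elevator-door"},
        "kitchen":    {"short-wall", "counter", "fridge"},
        "lab 3":      {"short-wall", "bench"}}
belief = {place: 1 / len(maps) for place in maps}     # just woke up: completely lost
belief = update_belief(belief, {"short-wall", "fridge"}, maps)
print(max(belief, key=belief.get))                    # most likely place: "kitchen"
```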

Co-bots at Carnegie Mellon do tasks like guiding people, transporting things from one place to another, and exhaustively gathering data by navigating environments over and over. Soon we will have robots in indoor environments to help at that level. Imagine you get to the supermarket or you get to a museum and you recruit a robot to come with you. You can say, "Take me quickly, I don't have much time." And there you go. Or, "You wait here, I'm going to do something." You have this kind of companion that is smart, not just your cart that you push around. At the end, you say "bye bye, co-bot." It goes back and is available for the next customer to use.

It could be especially important in hospitals.

I've been working on that area, on service. Having robots check on patients. Or bringing treatments and medication. The concept of just having a body that can go there—not just a static camera that is in some room. I think it's more like thinking about the true meaning of AI, which is beings that are not human. Eventually, you are not going to ask the robot, "So Manuela, what are you good for? Why are you here?" I just am. You just become a being.

So what about the design of the robot? Your robot with the basket looks very mechanical. What happens when it becomes more human-looking?

My husband keeps saying, "Oh, you should make it look nicer." I have somehow stepped back slightly from having it look nicer, exactly to have people understand that it's just a refrigerator on wheels or a laptop on wheels. It's not really yet someone that will care and someone that will engage in some sort of healthy conversation. It's basically doing tasks for you. I've been doing a lot of research about appearance.

The thing that I don't like in other robots is the fact that they always look the same. We've done a lot of research on appearance that you can program. In fact, co-bot 3 has a column of lights and you can make them flash—red if it's in a hurry, blue if it's peaceful and has a lot of time—so the appearance reveals what's happening in the robot.

The appearance, for me, is not as much about smiling or not smiling. It's more about having ways to understand the internal state of the robot. Otherwise it is completely opaque. What is that thing doing?
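As a tiny illustration (not the actual co-bot code), the light column could be driven by something as simple as the slack in the robot's schedule; the 60-second threshold below is an assumption:

```python
# Illustrative sketch: map the robot's internal schedule state to the light
# column described above, so passers-by can read how it is doing.

def light_color(seconds_remaining: float, seconds_needed: float) -> str:
    """Red when the schedule is tight (in a hurry), blue when there is slack."""
    slack = seconds_remaining - seconds_needed
    return "red" if slack < 60 else "blue"   # 60 s threshold is an assumption

print(light_color(seconds_remaining=400, seconds_needed=380))   # "red"
print(light_color(seconds_remaining=1800, seconds_needed=600))  # "blue"
```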

What happens when we start treating a robot like a human?

I would like my robot to just go to the supermarket and get me something, or take me to the physics department, or go down the corridors and figure out what's happening. My concern, when you buy a robot that you did not program, is that you don't know what the robot will do next. Where is it going? You see it move and it should be able to tell you. It should be able to show you.

Accountability, transparency. The ability for you to correct and say, "Don't do that." A lot of my research is also learning by instruction. You actually tell the robot, "No, I told you to go there. I meant do it like this." What I'm doing research on is to prepare people for when they actually buy a robot. You buy this robot and it moves by itself. You would like to say, "Go to the kitchen," and the robot goes all the way around the garden. You say, "Come on, I told you to go to the kitchen this way." Something in which you see that the robot is doing things that you may want to instruct or correct. You may want to find out why it did this.

How can you correct it?

We can see where the robot is through an app. There's a scheduler that is able to find out if there is time to do this task, and then it asks the robot to do it. And the robot learns. It never does something without the person confirming. When it learns that coffee is in the kitchen, the next time it goes, it doesn't have to ask. It's going to offer.
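Here is a hedged sketch of that learn-by-instruction loop, with an invented InstructableCoBot class and dialogue format; it only captures the pattern described above: ask the first time, act only after confirmation, and reuse what was learned afterwards:

```python
# Hypothetical sketch of learning by instruction with confirmation.
# The class, dialogue, and storage format are illustrative assumptions.

class InstructableCoBot:
    def __init__(self):
        self.known_locations: dict[str, str] = {}    # item -> learned place

    def fetch(self, item: str, ask) -> str:
        """`ask` is a callable that poses a question to the human and returns
        the answer (a stand-in for the robot's speech interface)."""
        if item not in self.known_locations:
            place = ask(f"Where can I find {item}?")
            if ask(f"Did you mean {place}? (yes/no)") != "yes":
                return "Okay, I won't go."             # never acts without confirmation
            self.known_locations[item] = place         # learned for next time
        return f"Going to {self.known_locations[item]} to get {item}."

bot = InstructableCoBot()
answers = iter(["the kitchen", "yes"])
print(bot.fetch("coffee", ask=lambda q: next(answers)))  # asks, confirms, then goes
print(bot.fetch("coffee", ask=lambda q: "unused"))       # already knows: just offers
```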
