After the launch of customer service bot Operator, Andy Wolber spoke with Paul Adams, vice president of product at Intercom, about customer support, bots, and the future of voice interfaces.
You've probably seen one of Intercom's products when clicking the smiling customer-help button displayed in the corner of many software companies' web pages.
The company recently launched Operator, which brings a "bot with manners" into customer support conversations. It's part of Intercom's suite of software that helps businesses respond to, educate, and engage with customers.
Paul Adams, vice president of product at Intercom, spoke with TechRepublic in August 2017 about Intercom, bots, and the future of voice interfaces.
TechRepublic: Talk to me a bit about your customers. What's the profile of companies that use Intercom?
Paul Adams: We have just north of 20,000 businesses using Intercom. The majority of them are small companies, maybe fewer than 50 employees. Many of them are in startup and incubator programs like Y Combinator and so on.
The majority of them are subscription businesses, like SaaS (software as a service) companies. It's a very nice match between the products we build and the idea that you want to retain customers: you want customers to stick around and continue paying you month by month.
Increasingly, we have much bigger customers using Intercom, such as Microsoft, Spotify, Stripe, Shopify, and other very large companies. Typically, it's a team inside these large corporations that uses Intercom.
TechRepublic: With Operator, I felt that you were very sensitive to keeping bots and people in their appropriate roles. How does Operator fit into your product line overall?
Paul Adams: Today, people still consider customer service an expense or a cost center. Customer service teams are always evaluated on how expensive it is to employ people to talk to all the customers. Since Intercom's inception, we've believed the opposite: that customer service is a strategic asset for a company. That's the part of the business where you have people talking to customers and learning about what's not working.
Operator came about because, if you look at the last two or three years, you started to see a lot of technology being built around bots: how bots could interact with people and how bots could replace some of the conversations that customer service teams are having with people. We felt that most of these companies weren't thinking about it in the right way. They considered customer service to be an expense. The bots were basically designed to replace people, because if you can replace people with a computer, it's much cheaper. As a result, I think a lot of people and companies gravitated to this idea because it sounded like a good way to make the business money.
What you ended up with was a lot of bots that were badly designed and trying to do things that bots just can't do yet. A lot of that, I think, is due to the maturity of the technology available. There are things that people are better at than computers today, and there are things that computers are better at than people. I think a lot of companies, in their early aspirations of building bots, drew the line in not quite the right place. You end up with bots that are quite rude. They interrupt. They ask the same questions over and over. Even with technology like natural language processing, computers can't really understand if you're angry or upset, or just need to be heard. Very human things.
We say Operator is the first bot to have manners. Rather than trying to replace the human and save money, Operator jumps in only at appropriate times, as a supplement or complement to the conversation a human might be having. Or, if a human isn't around, Operator will jump in, collect some information, and make sure the business gets in touch with the person.
TechRepublic: I saw the words "machine learning" tossed in the text around Operator. What role does machine learning play in Operator?
Paul Adams: The way we've been thinking about this is that Operator has a set of skills. We chose this term quite deliberately because it's similar to Amazon's Alexa, which also uses the term skills. We built a number of skills ourselves that go into Operator. Some of them use machine learning, some of them don't.
Machine learning is one of those technologies where I think a lot of people have hyped it beyond what's warranted. It's definitely exciting, and we're investing a lot in machine learning here -- building, trying, and experimenting with lots of different things. The skills that use it look at lots and lots of conversations between companies using Intercom and their customers. The system starts to learn over time based on that data and on the conversations between people, and between people and bots. We use machine learning to recognize when Operator should interject because there's high confidence that we can answer the question.
We plan to add skills over time and hopefully at some point, potentially open it out to third parties to build skills the same way that Amazon has.
TechRepublic: How does Operator identify which resources to provide? How is Operator different than search results?
Paul Adams: The way we think about this is that when a person gets in touch with a business via the messenger, there are different sets of circumstances that could be in play. They might have a very simple question like "How much is my bill gonna be this month?" or "How do I reset my password?" Or they might have a really complicated question about how to use a feature in an app. In other cases, honestly, they just could be upset. Maybe something went wrong with something that they bought or ordered. They just want a human to talk to them.
We have two products that we sell for those different scenarios. One is a help desk, which is basically an inbox for a customer support team: queries come into the messenger, and the team answers them and talks to people one to one.
The second product we sell is a knowledge-base product called Educate. That's a product where you can write articles based on frequently asked questions. Over time, the help desk learns which questions people ask most often, so you can write articles to answer them. There are circumstances where just sharing an article with a person might be enough to answer a question. For example, if the question is something like, "How do I reset my password?" the answer is actually the same for everybody.
What our products do now is determine that a question could be answered by an article the company has in its knowledge base, and then Operator can suggest that article as a potential answer. The customer checks out the article to see if it's relevant, and Operator, which knows whether or not they've read the article, asks whether the question was answered. They can say yes or no. If they say, "Yes," that's great: the conversation is closed and they don't have to talk to a person inside the company. If they say, "No, the question wasn't answered," the question will be put through to the help desk and a person will actually answer them.
TechRepublic: Looking forward, what role do you see for voice bots or that sort of voice interaction in the future?
Paul Adams: I think it's big. I think it's really important. I think voice in general is going to be an increasingly commonplace interface for talking to computers. Screens -- and certainly mobile phone screens -- have been the dominant interface for 10, 20, 30 years. I think the technology has come on so far in the last three to five years, thanks to Google and Amazon in particular. Voice recognition can now parse different accents and intent from people's voices. Then on the hardware side, there are things like the Echo or Google Home. I think there's been a bit of a tipping point in terms of the maturity of the technology.
I think we'll see it employed in more places, for example, in cars and in the home. The interesting thing for me is what happens after you input a query. Google, Amazon, Apple with Siri, and other companies have done really well on that side. But voice as an interface is only as useful as the results you get back. The first step is taking the query; the next step is giving back an answer. I think we'll start to see more and more voice interfaces.
TechRepublic: What are the implications of voice interfaces for Intercom's product evolution?
Paul Adams: It's very early for us to do something there. One of our engineers built a prototype of Intercom for Alexa on the Echo, which is quite cool, but the vast majority of our customers use Intercom on a big screen, not on a mobile phone, because they're at work. They're working on a customer support team, or they're working on a marketing team. Whether in the office or at home, people will be using their laptop or a big monitor. It's early for us, but we're definitely exploring it.
What's your experience with customer support systems been? Let us know in the comments or on Twitter (@awolber).
- 3 ways companies can simplify customer service and make people happier (TechRepublic)
- In-app messaging giant launches new business features that could be coming to your favorite app (TechRepublic)
- Why the lock screen is the next battleground in mobile (TechRepublic)
- Three simple steps to providing better customer support (TechRepublic)
- Here's how to start on your company's mobile strategy (TechRepublic)
- No. 1 takeaway in Meeker's 2017 report: Is your business ready for voice? (TechRepublic)
- Why chatbots are somewhere between overhyped and overblown (TechRepublic)