Machine learning is a powerful tool, but it’s not always easy to implement or build into your business. One option is to use it to power conversational self-service tools, for e-commerce or for support. Users converse with digital agents over familiar channels; the agents either handle simple tasks themselves or gather information that’s evaluated and passed on to a human agent.

We’re familiar with digital assistants like Siri, Alexa and Microsoft’s Cortana: voice-driven interfaces to our homes, our phones and our PCs. They’re the most obvious manifestation of modern artificial intelligence, linking cloud services, entertainment apps, the internet of things and familiar productivity tools behind voice recognition and speech synthesis.

There are many years of computer science research in those platforms, much of it in complex machine-learning algorithms and the massive training data sets that demand the resources of a large company. But we’re not limited to those tools: cloud platforms like Azure are making the tools used to build services like Cortana available to partners for their own assistants, starting with simple chat interactions in the Azure Bot Framework and moving up the stack to building your own virtual assistants, like those being developed by BMW and Thyssen-Krupp.

Getting started with the Bot Framework

Azure’s Bot Service is a tool for building and deploying basic conversational systems across many different chat platforms, from the web to Teams to Skype, and beyond. It builds on elements of Azure Cognitive Services, integrating their APIs into an easy-to-use conversational framework. You can get started quickly with an open-source ‘botkit’ that includes emulator tools for testing interactions before you deploy your service.

Building bots is like building any app: you write code that works with existing APIs to parse user inputs, determine intent, and then respond appropriately. That intent could be many things, from asking a support question to ordering a pizza and checking on its delivery time. You’re not building a general-purpose system; you’re building a very targeted application that has conversational natural language features.
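Stripped of any SDK specifics, that parse-intent-respond loop can be sketched as below; the intent names, keyword matching and canned replies are all invented for illustration, with a naive matcher standing in for a trained language model:

```python
# Hypothetical sketch of a bot's parse -> intent -> respond loop.
# Intent names and replies are illustrative, not part of any Microsoft API.

def determine_intent(utterance: str) -> str:
    """Naive keyword matcher standing in for a trained language model."""
    text = utterance.lower()
    if "pizza" in text:
        return "OrderPizza"
    if "delivery" in text or "status" in text:
        return "CheckDelivery"
    if "help" in text or "support" in text:
        return "SupportQuestion"
    return "None"

# Each recognised intent maps to a handler that produces the response.
HANDLERS = {
    "OrderPizza": lambda: "Sure, what toppings would you like?",
    "CheckDelivery": lambda: "Your order is out for delivery.",
    "SupportQuestion": lambda: "Let me find an answer for you.",
    "None": lambda: "Sorry, I didn't understand that.",
}

def respond(utterance: str) -> str:
    return HANDLERS[determine_intent(utterance)]()
```

In a real bot the matcher would be a call to a trained model, but the shape of the loop stays the same: classify the utterance, then route to a handler.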

SEE: IT leader’s guide to the future of artificial intelligence (Tech Pro Research)

What makes a bot different from an app built directly on Azure Cognitive Services is the concept of a Dispatcher: a tool that switches users between cognitive service models based on what they’re doing. That allows the same bot to use, say, Language Understanding to determine user intent and drive apps and APIs, or QnA Maker to respond to simple support questions.
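A dispatcher of this kind can be sketched as a confidence race between models; the two scoring functions below are invented stand-ins for calls to LUIS and QnA Maker, not real API clients:

```python
# Illustrative dispatcher: route each utterance to whichever model
# claims it with the highest confidence. The scores are stand-ins for
# real LUIS and QnA Maker results.

def luis_score(utterance: str) -> float:
    """Pretend LUIS confidence: high for action-like requests."""
    return 0.9 if "book" in utterance.lower() else 0.2

def qna_score(utterance: str) -> float:
    """Pretend QnA Maker confidence: high for question-like requests."""
    return 0.8 if utterance.strip().endswith("?") else 0.1

def dispatch(utterance: str) -> str:
    """Pick the model with the highest confidence for this utterance."""
    scores = {"luis": luis_score(utterance), "qna": qna_score(utterance)}
    return max(scores, key=scores.get)
```

The real Dispatch tool trains a model over your LUIS apps and QnA knowledge bases to do this routing, but the decision it makes has the same shape.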

Once built, a bot is configured to work with your choice of channels, using Microsoft’s Adaptive Cards to provide interactive responses where necessary. You’re not limited to Microsoft-only channels: the Azure Bot Service works with popular messengers and collaboration services, including Twilio’s range of services. All you need to do is define channels in the Azure Portal and your users can start interacting with your bot.
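An interactive response of that kind is just a JSON payload the bot attaches to a message. The sketch below builds a minimal card whose field names follow the published Adaptive Cards schema, though the card content itself is invented:

```python
import json

# A minimal Adaptive Card of the kind a bot returns as an attachment.
# Field names follow the Adaptive Cards JSON schema; the text and
# actions here are invented for illustration.
card = {
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.2",
    "body": [
        {"type": "TextBlock", "text": "Was this answer helpful?", "wrap": True}
    ],
    "actions": [
        {"type": "Action.Submit", "title": "Yes", "data": {"feedback": "yes"}},
        {"type": "Action.Submit", "title": "No", "data": {"feedback": "no"}},
    ],
}

# Serialise for delivery; each channel renders the same card natively.
payload = json.dumps(card)
```

Because the card is channel-neutral, the same payload renders as native buttons in Teams, on the web, or in any other channel that supports it.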

One useful feature that launched at Build 2019 is an enhanced version of QnA Maker. This tool takes your business’s documentation, extracts key information, and then responds to questions. It’s a useful tool for building and running basic help bots, using FAQs to train the underlying cognitive services. The new release now supports multi-turn conversations, with the ability to respond to users’ follow-up questions.
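The multi-turn idea can be sketched as answers that carry follow-up prompts. The knowledge-base entries below are hand-written and invented; the real QnA Maker extracts this structure from your documentation rather than having you author a dictionary:

```python
# Sketch of multi-turn Q&A: each answer carries follow-up prompts that
# narrow the conversation, in the spirit of QnA Maker's multi-turn
# feature. The knowledge-base content is invented for illustration.

FAQ = {
    "reset password": {
        "answer": "You can reset your password from the sign-in page.",
        "follow_ups": ["work account", "personal account"],
    },
    "work account": {
        "answer": "Contact your IT admin to reset a work account password.",
        "follow_ups": [],
    },
    "personal account": {
        "answer": "Use the 'Forgot password' link for a personal account.",
        "follow_ups": [],
    },
}

def ask(question: str) -> dict:
    """Return the matching entry, or a fallback with no follow-ups."""
    key = question.lower().strip(" ?")
    for entry_key, entry in FAQ.items():
        if entry_key in key:
            return entry
    return {"answer": "Sorry, I don't have an answer for that.",
            "follow_ups": []}
```

The follow-up prompts are what make the exchange a conversation rather than a lookup: the bot offers them as suggested replies, and the user's next turn selects a more specific answer.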

Rolling your own Cortana with the Virtual Assistant Solution Accelerator

If you want to build your own virtual assistant, there’s an open-source Virtual Assistant solution that you can use to build your own equivalent of Cortana or Thyssen-Krupp’s Alfred. Building on the previously released enterprise assistant template, it brings together a mix of different tools from the Cognitive Services suite.

You start by downloading the solution from GitHub and then customising it to add your own set of features, including the assistant’s voice and personality. The resulting service is a multi-channel bot running on the Bot Framework, with a set of skills that handle everything from events to working with user accounts. The Virtual Assistant skills will be familiar to anyone who’s used Cortana, as they integrate with the Microsoft Graph as well as Azure services like Maps.

Once you’ve built and trained a Virtual Assistant it’s automatically deployed in Azure, along with all the services you need to support it, including logging and performance analysis tools. All the machine-learning models used are pre-trained, so you’re ready to go as soon as your assistant is online. There’s a strong focus on using Virtual Assistants for hands-free operations, using Azure’s speech recognition tools alongside LUIS, its Language Understanding service. Microsoft is planning to provide specifically designed and trained machine-learning models for common usage scenarios, starting with an automotive language model.

With a pre-trained model like this you don’t need to develop your own custom speech-recognition tools to manage voice control of a car. Once set up, it will allow your virtual assistant to recognise queries about common activities, like navigation or using a paired mobile phone, as well as controlling car features.

SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)

There’s even support for a Cortana- or Alexa-like skills model, where additional functionality is added to a personal assistant as required. Perhaps you’re building an assistant for your business, so you’ll add new features and services as they roll out, as well as taking advantage of new channels as Microsoft adds support. A skills template makes it easier to create and share new features with your assistant’s users.
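In spirit, a skills model is a registry of pluggable handlers that can be added to a running assistant. The class and skill below are hypothetical and only illustrate that pattern; they are not the actual Bot Framework skills API:

```python
# Hypothetical skills registry: extra capabilities plug into an
# assistant as independent handlers, in the spirit of the Virtual
# Assistant skills model. Names and signatures are invented.

class Assistant:
    def __init__(self):
        self.skills = {}

    def register_skill(self, name, handler):
        """Install a new capability without touching existing ones."""
        self.skills[name] = handler

    def invoke(self, name, *args):
        """Route a request to the named skill, if it is installed."""
        if name not in self.skills:
            return f"No skill named '{name}' is installed."
        return self.skills[name](*args)

assistant = Assistant()
assistant.register_skill(
    "calendar", lambda day: f"You have 2 meetings on {day}."
)
```

The point of the pattern is isolation: each skill ships, updates and is shared independently, and the assistant only needs a registration step to gain the new feature.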

At Build 2019, Microsoft demonstrated what the next generation of conversational AI might look like, using a concept video of a possible future version of its Cortana personal assistant. Instead of context-free exchanges that deal with one thing at a time, the video showed a user talking through their calendar, adding meetings, sending information to colleagues and adjusting schedules, all in one conversation.

At the heart of this process is a deeper understanding of the context of the conversation, using elements of the Microsoft Graph to link content to people and building a model of relationships and tools that’s then interpreted by the underlying machine-learning tools. Part of that is the work of a recent Microsoft acquisition, Semantic Machines, a specialist in conversational AI. What Microsoft demonstrated at Build was a look at how Semantic Machines’ work could enrich tools like Cortana, turning it from a relatively simple voice user interface into something a lot richer.

While some of the initial predictions of a glorious natural-language interface future may have been overblown, development hasn’t stopped. By building on its Cognitive Services APIs and its Bot Framework, Microsoft is taking an evolutionary approach that customers are finding attractive. There’s no need to run before you can walk: starting with basic question-and-answer bots gets users accustomed to natural language interactions before you roll out more complex conversational virtual assistants.