Over the past few months, I’ve been invited to test drive a few AI solutions. Some of these services and apps have been mobile, some email-driven. All of them were billed as conversation-style decision-making and appointment-scheduling tools, powered by some form of “artificial intelligence” or another. The results were a mix of success and failure.

For example, I beta tested one particular service that was handled completely via email (X.ai). The purpose of the service was to use AI to make scheduling meetings and appointments easier. If you wanted to schedule an appointment, you emailed the person who was to attend and cc’d the AI service on the email. The AI service would then take over and try to handle the scheduling. The end result was confusing at best. The AI would switch names, so you never knew who you were talking to, and the process inevitably took longer than a quick standard email (or phone call) back and forth. It was frustrating and, in the end, failed to pull off what it claimed. Although it placed a scheduled block of time on my Google Calendar, that block was quickly made irrelevant as the email exchange continued without ever reaching a conclusion.

Yes, in that instance, I was testing the ability of the AI, because in the real world, things are a bit more nuanced than artificial intelligence can currently handle.

Google’s Assistant is a different sort of beast. You ask it questions and it answers. You narrow down your questions and it responds in kind. You tell Assistant to do things, and it does. But Google doesn’t promote Assistant as artificial intelligence. Google Assistant is nothing more than a next-gen, chatbot-driven, personal digital assistant. And it works incredibly well.

That pesky Turing test

The litmus test for artificial intelligence is the Turing test. Alan Turing proposed that a human evaluator would judge natural language conversations between a human and a machine. To successfully pass the Turing test, the evaluator would be unable to distinguish the machine from the human being.

During my evaluation of X.ai’s offering, I approached it with an eye to the Turing test. I was fairly certain the service wouldn’t pass muster, and I was right. Somewhat. The problem was that the service would switch between the AI responding and an actual person responding. This, of course, confused the issue to the point where I had to give up the testing. Understandably, X.ai was in beta at the time, but applying an AI solution simply to assist in the scheduling of appointments seems a bit off kilter to me.

Other uses of what companies are calling “AI” are beginning to flood our bandwidth. Most of these solutions are in the form of digital assistants, wherein you ask a question (or make a statement) and the “AI” responds based on keywords from your statement/question. Make no mistake, these are chatbots, not artificial intelligence. Why? Two words: Turing test.
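To make the distinction concrete, here is a minimal sketch of what keyword-driven "chatbot intelligence" amounts to. The keywords and canned replies are invented for illustration; no real product works exactly this way, but the basic shape is the same: match words against a fixed table, return canned text, with no model, no memory, and no learning.

```python
# Toy keyword-matching chatbot: illustrative only, not any real product's code.
RESPONSES = {
    "schedule": "What day works best for you?",
    "weather": "It looks sunny today.",
    "cancel": "Okay, I've cancelled that for you.",
}

def chatbot_reply(message: str) -> str:
    """Return the canned response for the first recognized keyword,
    or a generic fallback when nothing matches."""
    for word in message.lower().split():
        if word in RESPONSES:
            return RESPONSES[word]
    return "Sorry, I didn't understand that."
```

A responder like this can feel conversational for a moment, but it evaluates nothing about context or history, which is exactly why it fails the Turing test.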

AI or chatbot?

My overriding question is simple: Is AI what the purveyors of mobility should be focusing on? I understand the need for the digital assistant. We want our mobile life to be as simple as possible. But the solutions we are seeing are really nothing more than chatbots.

Lynn Parker, director of the division of Information and Intelligent Systems for the National Science Foundation, states that artificial intelligence is:

“…a broad set of methods, algorithms and technologies that make software ‘smart’ in a way that may seem human-like to an outside observer.”

That’s actually not terribly far off from the true definition of AI, which is:

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

By Parker’s definition, what we are seeing with digital assistants and chatbots would be considered artificial intelligence. But my problem with the current “AI” solutions lies in the lack of any machine learning.

For example, as you use a solution like X.ai, you soon come to realize the software isn’t learning about your habits or how you and the person you are trying to schedule an appointment with go back and forth. Wouldn’t a true AI solution begin to learn the patterns exhibited between the two users and adjust accordingly? I say yes, it would. Just as certain Android home screens eventually begin to adjust what they present to you based on your history, AI should make similar adjustments. That is not the case.
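Even a crude version of the adaptive behavior described above is easy to sketch. The class and method names below are hypothetical, and this is in no way how X.ai or any real service is implemented; it simply shows the minimum the word "learning" implies: remember which time slots each contact has accepted, and propose the most common one next time.

```python
from collections import Counter
from typing import Optional

class LearningScheduler:
    """Toy illustration of pattern learning between two users:
    count accepted meeting slots per contact and suggest the favorite."""

    def __init__(self) -> None:
        self.history: dict[str, Counter] = {}

    def record_acceptance(self, contact: str, slot: str) -> None:
        """Note that `contact` accepted a meeting at `slot`."""
        self.history.setdefault(contact, Counter())[slot] += 1

    def suggest(self, contact: str) -> Optional[str]:
        """Propose the slot this contact has accepted most often, if any."""
        slots = self.history.get(contact)
        return slots.most_common(1)[0][0] if slots else None
```

The service I tested showed no sign of even this much adaptation: every scheduling exchange started from scratch.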

Learning is the key

And yes, I realize I am picking at some rather fine hairs here, but this cuts to the heart of the matter. If you’re going to claim your service is driven by artificial intelligence, then that service should, over time, automatically tailor itself to your needs and habits, not just be another chatbot that you can speak or email commands to and receive a response based on keywords.

If companies are going to create solutions to improve our mobile experience, then the solution has to evolve, has to learn. I don’t believe these solutions need to improve so much that they pass the Turing test, but if they are going to be touted as artificial intelligence, then the intelligence aspect needs to come into play at some point. The problem with this, on a mobile device, is that the intelligence of the solution would have to come from a remote source. Anyone who knows even a fraction of what there is to know about security won’t want third-party intelligence wherein the learning is occurring on a remote machine. In other words, to satisfy security needs, the learning would have to take place locally. That’s not going to happen. Not with Turing test-level AI on a mobile device.

In the end, what we are looking at are digital assistants that do a respectable job of making our mobile life a bit easier. They are not truly intelligent and probably never will be. But this is where we are: chatbot intelligence masquerading as the real deal.

Conversation simulated, intelligence need not apply.