TechRepublic’s James Sanders spoke with Oblong Industries’ John Underkoffler about how smart assistant technologies from data-driven companies like Google and Amazon are leading the market, while Siri and Cortana are falling behind. The following is an edited transcript of the interview.

James Sanders: Apple’s Siri and Microsoft’s Cortana are really closer to the back of the pack for smart voice assistants, while data-driven companies like Google and Amazon are leading the race. What does the future hold for smart assistants, and how will they integrate into people’s lives?

John Underkoffler: I think we’re at a pivotal moment for the larger field, let’s say, that smart assistants might be one example of, and then a set of enabling technologies like machine learning, I refuse to say AI, that tend to power them. And I think the danger is this. The way people design these bits of software, and they’re new, it’s kind of a new category, implies to the user that it’s already a completed work. That the category is complete. That we know everything there is to know about what it means to be a smart assistant. And that all you need to do is trust it. And I’m not suggesting that big software manufacturers are literally saying this, but the implication, through the way the UI is built and how these systems are talked about, does have that flavor. When in fact, we still need to learn as much as we can as quickly as we can about what a really useful smart assistant would do. What it would be like. What it would be like to interact with it. What assumptions it should and shouldn’t be able to make about you, as the presumably valuable actual human and user, and so forth.

So, the next 18 months or the next three years should be a period of intense learning. A period of intense learning that exists as a partnership between software manufacturers and software users. I think it would be really exciting if someone just came out and said, “We’re gonna learn as much as we can with your help. We’re gonna experiment a lot and we’re gonna figure out what a genuinely useful smart assistant is.” But we’re not doing that, and so you get into all kinds of dangers.

SEE: IT leader’s guide to the future of artificial intelligence (Tech Pro Research)

Really kind of boring, jejune dangers like, well, people get used to the UIs that you’ve built somewhat haphazardly, with best intentions and best assumptions, but without enough learning. And now those are kind of locked in just because they’ve become standards.

Then there are more pernicious dangers that also ought to be looked at quite a lot, such as: how are we training the machine learning algorithms that underlie these smart assistants? Is your digital bill of rights being abraded through the process? What are the questions we should be asking and answering that we aren’t?

And so, we’re missing a really important, and what could be a really valuable, step in co-creating this new category of software. I’m not sure if any of the smaller players are taking a different approach, but I think it bears some thinking about.

James Sanders: What’s your philosophical opposition to the term artificial intelligence?

John Underkoffler: I spent 15 years knocking around on the inside of a weird white-tiled I.M. Pei building with Marvin Minsky. Not always in direct contact with him, but he’s the kind of presence that you can’t help noticing. And when he was still alive, he was an amazing mind to interact with and bounce off of. And there is a history of AI. AI was not born in the last three years. AI has a long and venerable history. A history of disappointments. But starting in the ’50s, the intent of artificial intelligence, and I think we ought to give credit to the people who named it for also being allowed to define what it means, the intent was to build artificial minds. Artificial consciousness. Not just algorithms that help with one thing.

And the distinction is an important one. It’s a distinction that has held different names over the years. Sometimes hard versus soft AI. There have been other attempts to delineate a full artificial mind from, let’s say, a machine vision algorithm that can find dogs or bruised apples on a conveyor belt, or something very, very limited like that. And then along the way, one particular kind of machine learning, which Minsky and Seymour Papert famously wrote about and others worked on, was itself a disappointment. That kind of neural net work back in the ’70s, ’80s, and even into the ’90s was basically abandoned, but turned out to actually be workable more recently. Just once machines got bigger, data sets and training sets got bigger, processors got faster, and memory got larger, and the rest of it.

SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)

So this thing got resuscitated, and turned out to be really good at a bunch of stuff. But it is a subcategory of machine learning, a particular one. A particular implementation. This is unfairly reductive, I know; I’m doing the same thing that I’m accusing others of doing. But that’s how it looks and feels to me. And we’re taking that tiny little advancement, which is significant and important, but tiny by comparison to the huge goal that AI originally had, and we’re calling it AI.

It’s like saying, “Look at this fantastic glove compartment. I’ve invented a flying car.” And there’s no doubt that a flying car will need a glove compartment, not least, perhaps, as a place to store your air discomfort bags, but the glove compartment is not all that it takes to have a flying car. And you’re actually in substantial danger if you act and comport yourself and travel around the city pretending that the glove compartment is a flying car.

SEE: Beyond Minority Report: Why William Gibson’s Neuromancer points to the future of UI (TechRepublic)

So, that’s the kind of situation, to use a kind of unfairly ridiculous analogy, that I think we’re in. And I think words matter. Vocabulary matters. Language matters. What do we call the thing that was supposed to be AI, once we finally get interested in going back there? I don’t know.