Before AI is a human right, shouldn't we make it work first?

Marc Benioff wants artificial intelligence to be the newest human right. Good luck delivering on that.


Salesforce CEO Marc Benioff just declared artificial intelligence (AI) a "new human right" at the World Economic Forum in Switzerland last week. Now if only he could deliver on that "right."

SEE: Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)

Where's the on switch?

Benioff warned that AI-powered countries and companies will be "smarter," "healthier," and "richer," while those less generously endowed with AI will be "weaker and poorer, less educated and sicker." I guess he hasn't seen the AI that currently powers the Western world--you know, like IBM's Watson, which one of its engineers characterized as "like having great shoes but not knowing how to walk."

Not that IBM is alone--take a walk through the earnings-call transcripts of public companies, and you'll see mentions of artificial intelligence rising sharply. Look around the real world, however, and finding true artificial intelligence is an exercise in futility. Even companies packed with PhDs, like Google, seem able to muster only advertising that feels like weak pattern matching.

If this sounds like a diss, it's not. AI, it turns out, is really, really hard.

So maybe, just maybe, at some point in the distant future populations will be deprived of the AI riches Benioff was promising. But not yet, and not anytime soon.

SEE: Artificial intelligence: A business leader's guide (TechRepublic download)

How do you give away all that?

Nor is it clear what, exactly, countries should be doing to offer AI to the masses. As Sergii Shymko put it to me, "AI is a software (running on a hardware). Access to AI is as much of a human right as access to software is." While the United Nations has declared internet access a fundamental human right, declaring "access to software" as a human right seems much more nebulous.

And hard.

Shymko explains why:

AI is a bit different from ordinary software in that it requires vast computation powers and even more so training on enormous datasets. Only a number of big companies possess such resources and are in position to monopolize AI. In that sense free access to AI may need regulation.

It's one thing to insist that a company like, say, Google give free access to its algorithms, but quite another to figure out how to do that in practice. After all, says Shymko, AI is "no different from possessing supercomputers with powers way beyond what ordinary people/companies can afford, thus being at disadvantage. It's up to the owner to open access to the hardware and its proprietary algorithms or not." People in rich and poor countries alike are always going to be excluded from the AI club, because only the richest, savviest companies can afford to assemble the necessary hardware and know how to program it appropriately.

SEE: The Davos crowd had high-minded talk about AI, stay tuned for the action (ZDNet)

Governments can insist upon open access to algorithms, but figuring out how to regulate it promises to be a morass of conflicting priorities and well-intentioned but likely misguided policies. Governments have thus far proven to be poor predictors of the future (e.g., regulating Microsoft's monopolistic stranglehold on personal computers even as the world largely looked past the PC to mobile phones).

So, sure. Go ahead and declare the overhyped, underperforming AI that we have today, and the presumably better AI of tomorrow, a "new human right." But actually making it a useful right, and one that gets distributed equally, would occupy all the time and billions that Benioff and any number of his friends could come up with.
