O’Reilly just released its annual AI Adoption in the Enterprise survey, and the results are mostly unsurprising. For example, data scientists from organizations with mature artificial intelligence practices tend to turn to scikit-learn, TensorFlow, PyTorch and Keras. Also, supervised learning (82%) and deep learning (67%) were the most popular techniques used by survey respondents, whatever their phase of AI adoption.
Even less surprising, though perhaps more frustrating? The biggest barrier to enterprise success with AI is difficulty finding people with the requisite skills. The same problem plagues adoption in every technical market as a technology takes off. The biggest barrier to technology adoption, in short, is people.
People are people
Of course, things don’t start this way. A year ago O’Reilly’s survey identified company culture and difficulty figuring out use cases as the biggest barriers to AI. Once an organization resolves these issues and starts to move forward, it’s soon plagued by the same thing that afflicts all popular products or processes early in their adoption curve: not enough people know how to make sense of them. Hence, O’Reilly’s survey found a “lack of skilled people” is the biggest barrier to AI adoption (Figure A).
Within that skills gap, machine learning modelers and data scientists (52%), understanding business use cases (49%) and data engineering (42%) reflect the biggest needs. Years ago, getting people to manage the necessary infrastructure for AI workloads would have been an issue, but this year just 24% of respondents cited the problem, “hinting that companies are solving their infrastructure requirements in the cloud,” as the report surmised.
And yet there’s hope.
One source of hope is time: Over time, companies figure out how to close skills gaps, even as the market responds with new ways to train people. One way this is happening in data science is through tooling. Today, experienced AI practitioners tend to use scikit-learn, TensorFlow, PyTorch and Keras, with each scoring over 45% in the survey (scikit-learn and TensorFlow both hit 65%). But these experienced practitioners are in the minority. For the past two years, those self-identifying as “mature” in their AI practices (i.e., those with projects in production) have constituted roughly a quarter of respondents. And for two years, those who are “evaluating” have hovered at roughly a third.
Those evaluating or simply “considering” AI tend to use scikit-learn less and AutoML-based tooling from cloud vendors more. “At risk of overinterpreting,” the report authors noted, “users who are newer to AI are more inclined to use vendor-specific packages [and] more inclined to use AutoML in one of its incarnations.” Among “mature” respondents asked about AutoML products, 51% said they weren’t using AutoML at all. One way of reading this is that those with less AI experience (and, presumably, fewer skilled people to help) turn to AutoML to get started with AI without needing to hire the talent required to use something like TensorFlow.
In sum, we have an AI skills shortage, just like we used to have an Apache Hadoop/R/etc. shortage. It’s just the nature of technological progress: Technology advances faster than we, as people and organizations, are able to make use of it. AutoML seeks to bridge the skills gap by “making Machine Learning tasks easier to use less code and avoid hyper tuning manually,” as Saurav Singla wrote.
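To make the “avoid hyper tuning manually” point concrete, here is a minimal sketch of the kind of manual hyperparameter search an experienced scikit-learn user would write, and which AutoML products automate away. The dataset and parameter grid are illustrative choices, not anything from the survey.

```python
# Manual hyperparameter tuning in scikit-learn: the practitioner must
# choose the model, the parameters to tune, and the candidate values.
# AutoML tools automate these choices on the user's behalf.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Hand-picked search space -- exactly the expertise AutoML tries to replace.
param_grid = {"n_estimators": [10, 50], "max_depth": [2, 4]}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,  # 3-fold cross-validation for each parameter combination
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

An AutoML library typically collapses all of this into a single `fit` call against the raw data, which is why it appeals to teams that lack dedicated machine learning modelers.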
Disclosure: I work for AWS, but the views expressed herein are mine.