Five boring but crucial tasks required for success at scale with AI

PwC's 2020 predictions suggest companies are more worried about being disrupted than doing the disrupting.

The biggest barrier to implementing artificial intelligence at scale lies not in the technology but in humans and business practices. In a new report, PwC found that companies are scaling back their AI ambitions.

What's the challenge? Measuring ROI, getting a budget approved, and training current employees. In the 2020 AI Predictions report, PwC points out operational barriers and reinforces the need for a sustained commitment from executives.

Senior leaders know the wave is coming: "Ninety percent of executives surveyed believe that AI offers more opportunities than risks, and nearly half are expecting AI to disrupt either their geographical markets, the sectors in which they operate, or both."

At the same time, only 12% of the 1,062 survey respondents said they were planning to disrupt their own industry, which shows that "nearly four times as many respondents fear disruption as plan to be disrupters themselves."

The report recommends these five broad priorities for AI projects in 2020:

  1. Get on board with boring AI
  2. Rethink upskilling
  3. Lead on risk and responsibility
  4. Operationalize AI — integrated and at scale 
  5. Reinvent your business model

The most interesting and actionable parts of the report are the specific to-dos that accompany each priority. These are the tasks that are easy to delay or ignore. Without taking these steps, it's much more difficult to make AI work at scale and to transform day-to-day operations as well as the long-term business model. Here are five AI to-dos that should be on your project list.

Create an AI intake strategy

This is one of the more boring but important parts: Identify where AI can have the greatest business impact, and build the technical and human capabilities required to succeed. Point AI efforts at paperwork that no human wants to read anyway.

The authors of the report say that the best way to use AI to operate efficiently and increase productivity is to use the technology to extract information from tax forms, bills of lading, invoices, and other documentation. Look for tasks that are common across the business to create reusable AI solutions, such as a model for processing unstructured text. 
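To illustrate what a reusable "unstructured text to structured fields" component might look like, here is a minimal sketch. The field names and regex patterns are invented for illustration and are not from the report; a production system would use a trained extraction model, with regexes standing in here for the idea of one shared component serving many document types.

```python
import re

# Hypothetical reusable extractor: one component, many document types
# (invoices, bills of lading, tax forms). Regexes stand in for what
# would normally be an ML model behind the same interface.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*#?\s*:?\s*(\w[\w-]*)", re.I),
    "date": re.compile(r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})", re.I),
    "total": re.compile(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(text: str) -> dict:
    """Return whichever known fields appear in a document."""
    found = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        if match:
            found[name] = match.group(1)
    return found

sample = "Invoice #: INV-0042\nDate: 2020-01-15\nTotal: $1,234.56"
print(extract_fields(sample))
```

Because the extractor only returns the fields it finds, the same component can be pointed at different document types across business units, which is the "reusable AI solution" pattern the report describes.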

Set a multilingual target

This is part of the rethink upskilling work—if you are only offering tech training to your non-tech employees, you're doing it wrong.

Collaboration across business units is critical in general for transformation technologies, and cross-team upskilling is part of that, too.

The report recommends making it a priority to give different specialists the ability to speak the language of other specialities. To encourage cross-functional collaboration, companies should "create 'multilingual' teams, with data engineers, data ethicists, data scientists, and MLOps engineers part of application development and business teams." Also, train technology team members on the business side so that everyone is speaking the same language.

As 50% of executives in the survey recognized, companies "need to give immediate opportunities and incentives for people to apply what they've learned, so that knowledge turns into real-world skills that improve performance."

Build up your AI risk confidence

PwC found that only about one-third of respondents have "fully tackled risks related to data, AI models, outputs, and reporting." The report authors suggest that companies back up their words with actions. PwC's Responsible AI Toolkit lists these five dimensions of responsible AI:

  • Governance
  • Interpretability and explainability 
  • Bias and fairness
  • Robustness and security
  • Ethics and regulation

The survey found that about 50% of executives are taking on the "explainability" challenge. The report also recommends working with risk and compliance functions to develop the right AI standards, controls, tests, and monitoring. Companies also need a budget for AI assurance, similar to those for cybersecurity or cloud security.

Make your data trusted data

Data must be "accurate, standardized, labeled, complete, free of bias, compliant with regulations, and secure." This step is crucial to making AI operational at scale. The biggest data challenges are:

  • Integrating data from across the organization (45%)
  • Integrating AI and analytics systems (45%)
  • Integrating AI with IoT and other tech systems (43%)

The survey found that only one-third of respondents said labeling data was a 2020 priority. The report notes that even if AI efforts are focused on a single function or process, companies still need to gather secure, quality data from throughout (and outside) the organization.
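One lightweight way to operationalize "trusted data" is to gate pipelines on automated quality checks before records reach a model. The sketch below is an assumption-laden illustration, not from the report: the field names, batch format, and thresholds are all invented, and it checks only completeness and label coverage out of the report's longer list of data qualities.

```python
# Hypothetical data-quality gate: summarize a batch of records and
# reject it if completeness or labeling coverage is too low.
# Field names and thresholds are invented for illustration.
REQUIRED_FIELDS = {"customer_id", "amount", "label"}

def quality_report(records):
    """Summarize completeness and label coverage for a batch of dicts."""
    n = len(records)
    complete = sum(
        1 for r in records
        if REQUIRED_FIELDS <= r.keys()
        and all(r[f] is not None for f in REQUIRED_FIELDS)
    )
    labeled = sum(1 for r in records if r.get("label") is not None)
    return {
        "rows": n,
        "complete_pct": complete / n if n else 0.0,
        "labeled_pct": labeled / n if n else 0.0,
    }

def passes_gate(report, min_complete=0.95, min_labeled=0.90):
    """Return True only if the batch clears both quality thresholds."""
    return (report["complete_pct"] >= min_complete
            and report["labeled_pct"] >= min_labeled)

batch = [
    {"customer_id": 1, "amount": 10.0, "label": "ok"},
    {"customer_id": 2, "amount": None, "label": "ok"},  # incomplete
    {"customer_id": 3, "amount": 5.0},                  # unlabeled
]
report = quality_report(batch)
print(report, "passes:", passes_gate(report))
```

Running checks like these at every ingestion point, rather than once before a model ships, is what turns data quality from a one-time cleanup into an operational practice.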

Monetize cognitive assets

This to-do is part of revamping the business model work. Businesses should create unique data assets and cognitive assets: AI models that encapsulate a company's experience and expertise in a specific domain.

To see ROI from AI projects, businesses must be able to capitalize on the insights and outcomes that these new assets offer.

These to-dos are so critical because "AI development is very different from software development and requires a different mindset, approach, and tools." Because AI model development requires a "test and learn" approach, business teams must continually learn and refine their approach as well.

Among this year's 1,062 survey respondents, 54% hold C-suite titles, more than half work in IT and technology functions, and 36% were from companies with revenues of $5 billion and up. The survey was conducted by PwC Research.