
As artificial intelligence (AI) matures, adoption continues to increase. According to recent research, 35% of organizations are using AI, and another 42% are exploring its potential. While AI is well understood and heavily deployed in the cloud, it remains nascent at the edge, where it faces some unique challenges.

Many of us use AI throughout the day, from navigating in cars to tracking steps to speaking to digital assistants. Even though a user accesses these services on a mobile device, the computation itself happens in the cloud: a person requests information, a central model in the cloud processes that request, and the results are sent back to the person’s local device.
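To make that round trip concrete, here is a minimal sketch in Python. The endpoint URL and response shape are illustrative assumptions, not a real service:

```python
# Minimal sketch of the cloud inference pattern described above.
# The endpoint and response schema are hypothetical.
import json
import urllib.request

def ask_cloud_model(query: str) -> dict:
    """Send a request to a central model in the cloud and return its answer."""
    payload = json.dumps({"query": query}).encode("utf-8")
    req = urllib.request.Request(
        "https://example.com/api/v1/infer",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)  # e.g. {"answer": "..."}

# The device only collects input and displays output; all of the
# inference happens server-side.
print(ask_cloud_model("navigate to the nearest charging station"))
```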

AI at the edge is less understood and less frequently deployed than AI in the cloud. From its inception, AI has relied on a fundamental assumption: that all data can be sent to one central location, where an algorithm has complete access to it. This allows the algorithm to build its intelligence like a brain or central nervous system, with full authority over compute and data.

But AI at the edge is different: it distributes the intelligence across all the cells and nerves. By pushing intelligence to the edge, we give edge devices agency. That is essential in many applications and domains, such as healthcare and industrial manufacturing.


Reasons to deploy AI at the edge

There are three primary reasons to deploy AI at the edge.

Protecting personally identifiable information (PII)

First, some organizations that deal with PII or sensitive intellectual property (IP) prefer to leave the data where it originates, whether in the imaging machine at the hospital or on a manufacturing machine on the factory floor. Processing the data in place reduces the risk of “excursions” or “leakage” that can occur when it is transmitted over a network.
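The pattern, roughly, is that raw data is processed on the machine where it originates and only a de-identified summary ever crosses the network. In this sketch, the model call and field names are hypothetical stand-ins:

```python
# Sketch: the raw scan never leaves the device; only a de-identified
# summary is transmitted. run_local_inference is a hypothetical
# stand-in for a model that runs entirely on the edge device.
import hashlib

def run_local_inference(image_bytes: bytes) -> str:
    """Placeholder for an on-device model."""
    return "no anomaly detected"

def summary_for_cloud(patient_id: str, image_bytes: bytes) -> dict:
    finding = run_local_inference(image_bytes)  # PII stays on premises
    return {
        # One-way hash so the transmitted record is not identifiable.
        "case": hashlib.sha256(patient_id.encode()).hexdigest()[:12],
        "finding": finding,
    }

print(summary_for_cloud("patient-4711", b"...raw scan bytes..."))
```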

Minimizing bandwidth usage

Second is bandwidth. Shipping large quantities of data from the edge to the cloud can clog the network and, in some cases, is impractical. It is not uncommon for an imaging machine in a health setting to generate files so massive that transferring them to the cloud is either impossible or would take days.

It can be more efficient simply to process the data at the edge, especially if the insights are targeted at improving a proprietary machine. In the past, compute was far more difficult to move and maintain, which warranted moving the data to the compute. That paradigm is now being challenged: today the data is often more valuable and harder to move, which warrants moving the compute to where the data lives.
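A back-of-the-envelope calculation shows the scale of the problem. The file size and uplink speed here are assumptions chosen purely for illustration:

```python
# Sketch: compare shipping a raw imaging file to the cloud with
# shipping only the result of edge inference. All numbers are assumed.
RAW_SCAN_BYTES = 500 * 1024**3        # a 500 GB imaging file (assumed)
UPLINK_BYTES_PER_SEC = 10 * 1024**2   # a 10 MB/s uplink (assumed)

hours = RAW_SCAN_BYTES / UPLINK_BYTES_PER_SEC / 3600
print(f"Raw upload: ~{hours:.0f} hours of saturated uplink")  # ~14 hours

result = b'{"defect": false, "confidence": 0.97}'  # edge inference output
print(f"Result upload: {len(result)} bytes, effectively instant")
```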

Avoiding latency

The third reason for deploying AI at the edge is latency. The internet is fast, but it’s not real time. When milliseconds matter, as with a robotic arm assisting in surgery or a time-sensitive manufacturing line, an organization may decide to run AI at the edge.
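The difference shows up in any control loop that must act within a fixed deadline. This sketch times a local decision against an assumed budget; a cloud round trip would add network latency on top, which is both larger and less predictable:

```python
# Sketch: a hard real-time budget for an edge decision. The model call
# is a placeholder and the 10 ms budget is an assumed requirement.
import time

DEADLINE_MS = 10.0  # assumed control-loop budget

def local_infer(sensor_frame) -> str:
    return "halt"  # placeholder for an on-device model

start = time.perf_counter()
decision = local_infer(sensor_frame=None)
elapsed_ms = (time.perf_counter() - start) * 1000
status = "within" if elapsed_ms <= DEADLINE_MS else "over"
print(f"decision={decision!r} in {elapsed_ms:.3f} ms ({status} budget)")
```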

Challenges with AI at the edge and how to solve them

Despite the benefits, there are still some unique challenges to deploying AI at the edge. Here are some tips to help you address those challenges.

Good vs. bad outcomes in model training

Most AI techniques use large amounts of data to train a model. However, this often becomes more difficult in industrial use cases at the edge, where most of the products manufactured are not defective and hence are tagged or annotated as good. The resulting imbalance of “good outcomes” versus “bad outcomes” makes it more difficult for models to learn to recognize problems.
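One common mitigation is to weight the rare “bad” class more heavily during training so that the handful of defects is not drowned out by the good parts. This sketch uses synthetic data and scikit-learn’s built-in balanced class weighting:

```python
# Sketch: counteract the "mostly good parts" imbalance with class
# weighting. The data here is synthetic; a real deployment would use
# sensor or image features from the line.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_good = rng.normal(0.0, 1.0, size=(990, 4))  # ~99% good parts
X_bad = rng.normal(2.0, 1.0, size=(10, 4))    # ~1% defects
X = np.vstack([X_good, X_bad])
y = np.array([0] * 990 + [1] * 10)

# class_weight="balanced" scales each class inversely to its frequency,
# so the 10 defects carry as much weight as the 990 good parts.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(clf.predict_proba(X_bad[:3])[:, 1])  # predicted defect probabilities
```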

Pure AI solutions that classify data without contextual information are often hard to create and deploy because labeled data is scarce and the events of interest are rare. Adding context to AI, an approach often referred to as data-centric, frequently pays dividends in the accuracy and scale of the final solution. The truth is, while AI can often replace mundane tasks that humans do manually, it benefits tremendously from human insight when putting together a model, especially when there isn’t a lot of data to work with.

Getting commitment up front from an experienced subject matter expert to work closely with the data scientist(s) building the algorithm gives AI a jumpstart on learning.

AI cannot magically solve every problem

There are often many steps that go into an output. For example, there may be many stations on a factory floor, and they may be interdependent. The humidity in one area of the factory during a process may affect the results of another process later in the manufacturing line in a different area.

People often assume AI can magically piece together all these relationships. In many cases it can, but doing so is likely to require a lot of data and a long collection period, and it tends to produce a very complex algorithm that resists explainability and updates.

AI cannot live in a vacuum. Capturing those interdependencies is what pushes a simple solution toward one that can scale over time and across different deployments.
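In practice, capturing a known interdependency often means encoding it as an explicit feature rather than hoping the model rediscovers it from raw data. The column names in this sketch are illustrative:

```python
# Sketch: encode a known cross-station dependency (humidity recorded at
# an upstream station) as an explicit feature for the downstream model.
# Column names and values are illustrative.
import pandas as pd

readings = pd.DataFrame({
    "part_id":            [1, 2, 3, 4],
    "station_a_humidity": [0.41, 0.62, 0.58, 0.44],  # upstream context
    "station_b_torque":   [5.1, 5.9, 5.7, 5.2],      # downstream signal
    "defect":             [0, 1, 1, 0],
})

# Joining upstream context onto each part's downstream record hands the
# model the relationship directly, keeping it small and explainable.
features = readings[["station_a_humidity", "station_b_torque"]]
labels = readings["defect"]
print(features.corrwith(labels))  # quick check of which context matters
```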

Lack of stakeholder buy-in can limit AI scale

It’s difficult to scale AI across an organization if many people within it are skeptical of its benefits. The best (and perhaps only) way to get broad buy-in is to start with a high-value, difficult problem, then solve it with AI.

At Audi, the team first considered solving for how often to change the electrodes on the welding guns. But the electrodes were low cost, and optimizing them didn’t eliminate any of the mundane tasks humans were doing. Instead, they picked the welding process itself, a problem the whole industry agrees is difficult, and improved its quality dramatically through AI. That success ignited the imagination of engineers across the company, who began investigating how they could use AI in other processes to improve efficiency and quality.

Balancing the benefits and challenges of edge AI

Deploying AI at the edge can help organizations and their teams. It has the potential to transform a facility into a smart edge: improving quality, optimizing the manufacturing process, and inspiring developers and engineers across the organization to explore how they might apply AI or extend it to use cases such as predictive analytics, efficiency recommendations or anomaly detection. But it also presents new challenges. As an industry, we must be able to deploy it while reducing latency, increasing privacy, protecting IP and keeping the network running smoothly.

Camille Morhardt, director of security initiatives & communications

With over a decade of experience starting and leading product lines in tech from edge to cloud, Camille Morhardt eloquently humanizes and distills complex technical concepts into enjoyable conversations. Camille is host of What That Means, a Cyber Security Inside podcast, where she talks with top technical experts to get the definitions directly from those who are defining them. She is part of Intel’s Security Center of Excellence and is passionate about Compute Lifecycle Assurance, an industry initiative to increase supply chain transparency and security.

Rita Wouhaybi, senior principal AI engineer for the IoT group

Rita Wouhaybi is a senior AI principal engineer with the Office of the CTO in the Network & Edge Group at Intel. She leads the architecture team focused on the Federal and Manufacturing market segments and helps drive the delivery of AI edge solutions covering architecture, algorithms and benchmarking using Intel hardware and software assets. Rita is also a time-series data scientist at Intel and chief architect of Intel’s Edge Insights for Industrial. She received her Ph.D. in electrical engineering from Columbia University, has more than 20 years of industry experience, and has filed over 300 patents and published over 20 papers in acclaimed IEEE and ACM conferences and journals.
