Government agencies managing multiple sources of Internet of Things (IoT) data are most likely to launch edge computing projects to save money and improve city services, according to an IDC government analyst.
Shawn P. McCarthy, research director for IDC Government Insights, predicts that local governments and the military will lead the way in adopting artificial intelligence (AI) to reduce fraud, improve services, and increase compliance. McCarthy discussed these predictions as well as the current market during a webinar, “Edge Computing, 5G & AI: Government’s Exponential Perfect Storm.”
SEE: 5 Internet of Things (IoT) innovations (free PDF) (TechRepublic)
“Integrating this data can save money, and that’s where you’re going to see most of the growth happen,” he said.
The benefits of distributed AI are a positive return on investment, lower costs, focused efficiencies, and data-driven decision making, but the required processing power is a significant challenge, he added.
Artificial intelligence is already improving government operations from fine-tuning maintenance schedules for Air Force planes to reducing pedestrian deaths on city streets. Other use cases for edge computing and IoT projects include:
- Monitoring river flow levels
- Optimizing building temperature by adjusting AC and venting
- Improving crowd control and response
- Collecting public health data
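One of the use cases above, optimizing building temperature, illustrates why this work happens at the edge: the control rule can run locally on the device reading the sensor. The sketch below is illustrative only (the thresholds and function names are hypothetical, not from the webinar).

```python
# Illustrative edge rule: adjust HVAC from a local sensor reading without a
# round trip to a data center. Target and band thresholds are hypothetical.

def hvac_action(temp_c: float, target_c: float = 21.0, band_c: float = 1.5) -> str:
    """Return an HVAC adjustment for one temperature reading."""
    if temp_c > target_c + band_c:
        return "increase_cooling"
    if temp_c < target_c - band_c:
        return "increase_heating"
    return "hold"

# Each reading is handled immediately, on the device that collected it.
readings = [19.0, 21.4, 23.2]
print([hvac_action(t) for t in readings])
```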
McCarthy said some data analysis must be done at the edge to get the benefit of real-time analysis.
“When processing is done at the data center or in the cloud, you don’t gain speed, you don’t boost agility, and you have fewer options for fast automated responses,” he said.
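The tradeoff McCarthy describes can be framed as a simple deadline check: if a response is needed faster than the network round trip plus remote compute time allows, the analysis has to run at the edge. This is a hypothetical sketch of that reasoning, not an IDC model; all names and numbers are assumptions.

```python
# Hypothetical sketch: pick a processing tier based on a response deadline.

def choose_tier(response_budget_ms: float, network_rtt_ms: float,
                cloud_compute_ms: float, edge_compute_ms: float) -> str:
    """Return where analysis can run and still meet the response budget."""
    if network_rtt_ms + cloud_compute_ms <= response_budget_ms:
        return "cloud"        # fast enough remotely; centralize it
    if edge_compute_ms <= response_budget_ms:
        return "edge"         # only local processing meets the deadline
    return "infeasible"       # the budget cannot be met either way

# A 50 ms automated response over an 80 ms round trip must run at the edge.
print(choose_tier(50, 80, 5, 10))
```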
He described opportunities across the entire spectrum of IoT platform elements for “small footprint AI”: a smart device that does one task very well, in contrast to Alexa, an AI device with many capabilities.
No one-size-fits-all solution
IDC’s Global IoT Decision Maker Survey 2019 found that federal government agencies are doing the most edge computing, with about 25% of all IoT data collected and processed at the edge. Cities have the most edge-based computation requirements and process about 20% of the data they collect at the edge. States and territories are lagging in this area, but they also don’t need to be out front because they are not the ones collecting the data. Across federal, state, and local governments, most data is processed through a hybrid model, with some processed at the edge and the rest sent to a data center.
SEE: IoT: Major threats and security tips for devices (free PDF) (TechRepublic)
Because of the complexity of the systems that these entities are managing, city managers will need systems integrators to make these projects a success.
McCarthy also shared these major IoT trends across all industries:
- Platforms will become more verticalized with an increased focus on the use case instead of general purpose offerings.
- Edge computing will become more important due to the need for real-time insight, cost reduction, and increased security.
- Security and privacy will continue to be the top data management concerns, along with improving data sharing and governance.
- Government agencies will need a specialized systems integrator to manage complex multi-vendor deployments and infrastructure modernization.
McCarthy said progress in edge computing is moving at different rates across different levels of government, reinforcing the idea that there is no one-size-fits-all solution.
“The best growth opportunities are at agencies with multi-structured data sources that can be used to create new solutions, build better services, and optimize operations,” he said.
To make the case for IoT investments, city leaders should determine whether the project could reduce existing IT costs and whether the solutions could improve service levels to residents.
McCarthy also suggested this list of questions for government agencies trying to figure out how to get started with IoT projects:
- What data lives within current facilities, and what data do you want that exists at other locations?
- How much data is there, and how fast is it growing?
- How much time does it take for those data collections to traverse the network?
- What additional data sources do you plan to add?
- Will the volume of the new data make it necessary to move processing power out into the network?
- Can you do your analysis and make decisions near where the data lives?
- Is distributed processing needed at multiple locations?
- Is the data stored in a way that makes it easily available for AI and machine learning or is secondary data prep needed?
- Is any data going unused that could otherwise save time, effort, and network stress?
Understanding these issues is a major step toward deciding if distributed AI is needed, he added.
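A few of the checklist questions can be encoded as a rough screen for whether distributed processing deserves a closer look. The sketch below is illustrative only: the field names, thresholds, and three-of-four rule are assumptions, not IDC's methodology.

```python
# Illustrative screen based on a few of the checklist questions above.
# All field names and thresholds are hypothetical.

def edge_worth_evaluating(profile: dict) -> bool:
    """Return True when most signals point toward distributed processing."""
    signals = [
        profile["data_growth_pct_per_year"] > 25,   # fast-growing volume
        profile["network_transfer_minutes"] > 10,   # slow to traverse network
        profile["needs_realtime_decisions"],        # decide near the data
        profile["sites_needing_processing"] > 1,    # multiple locations
    ]
    return sum(signals) >= 3

city = {
    "data_growth_pct_per_year": 40,
    "network_transfer_minutes": 30,
    "needs_realtime_decisions": True,
    "sites_needing_processing": 6,
}
print(edge_worth_evaluating(city))
```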