Location, location, location data. Location is still everything, but the old mantra now needs a data-centric addendum if it is to retain relevance in the years ahead.
As we now more formally define location data in the wider realm of edge computing, it becomes difficult to know which label to classify it under. Is edge-related location data an entity, paradigm or source? Is it all three? In truth, it is probably part of the fabric of edge itself — i.e., the data within it describes and depicts the place where edge computation happens, as well as the need for edge sensors, cameras, accelerometers and so on.
Building up the layers of this part of the edge fabric requires a mapping source, and herein lies the challenge. The world is not perfect; we don’t live among uniform blocks and straight lines. As such, our ability to view and map the world will always suffer from some degree of imperfection and inaccuracy, which is less than ideal for exacting digital use cases.
Many maps, one planet
Over the past decade, location data has grown exponentially, and users’ expectations for location-based solutions have grown with it. Keeping up with this demand has become challenging and costly for companies. Today, a company must select from a number of different digital maps of the physical world, each with its own strengths and weaknesses.
What has been missing from the market, however, is a solution in which all companies and devices can collaborate and communicate through a single digital representation of the physical world. This is the opinion of Michael Harrell, VP of software engineering at TomTom, a company known for its location technology, its SatNav-style location devices and — it hopes, soon perhaps — for its progressive approach to location data.
The company is now making the case for bringing private and public location data together, with the aim of creating a new ecosystem in which everyone can share and work on a better world map.
“Until now, technology companies had to purchase their base map, services and added features from a single mapmaker,” Harrell said. “However, each of these mapmakers produces a proprietary map with little in common, making it difficult or impossible to mix and match the best services and features from multiple mapmakers and third parties. Most importantly, innovation is determined by the mapmaker and the resources they are willing to spend on moving their solution forward.”
Many technology companies that rely on richer and smarter maps to foster innovation have considered creating their own. However, doing so successfully costs billions of dollars while creating no core differentiation for their business.
Some organizations have instead turned to open mapping solutions like OpenStreetMap. OSM has grown tremendously in the past few years, producing and maintaining a visually attractive map with a wealth of detail, but Harrell noted that OSM has also presented challenges due to slower quality checks, what he called inferior routing, inconsistent standardization and a limited ability to use automation.
These challenges have led some companies to use OSM only for their secondary and tertiary markets. Given that any approach to building the edge-enabled planet should be equitable and consistent, this is not a positive.
“The ideal solution for mapmaking, therefore, relies on combining the best of proprietary mapping and open mapping,” Harrell proposed. “This requires innovative mapmakers to leverage artificial intelligence and machine learning capabilities to check and standardize open data before merging it with proprietary mapping data such as sensor-derived observations, probe data and thousands of other sources.”
Proprietary map providers have the capabilities and expertise to identify quality issues. If something isn’t quite right, data can be quarantined, cross-referenced against other sources and corrected accordingly.
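The quarantine-and-cross-reference step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (all names and data are invented, not TomTom's actual pipeline): observations of the same road segment from several sources are compared, a majority value is accepted, and conflicting segments are quarantined for correction.

```python
from dataclasses import dataclass

# Hypothetical record: one observation of a road attribute from one source.
@dataclass
class Observation:
    source: str       # e.g. "osm", "probe", "sensor" (illustrative labels)
    road_id: str
    speed_limit: int  # km/h

def reconcile(observations):
    """Accept majority-agreed values; quarantine segments whose sources conflict."""
    by_road = {}
    for ob in observations:
        by_road.setdefault(ob.road_id, []).append(ob.speed_limit)
    accepted, quarantined = {}, []
    for road_id, values in by_road.items():
        best = max(set(values), key=values.count)
        if values.count(best) > len(values) / 2:
            accepted[road_id] = best          # majority of sources agree
        else:
            quarantined.append(road_id)       # held back for manual correction
    return accepted, quarantined

obs = [
    Observation("osm", "A1", 50),
    Observation("probe", "A1", 50),
    Observation("sensor", "A1", 80),  # outlier, outvoted by the other two
    Observation("osm", "B2", 30),
    Observation("probe", "B2", 60),   # no majority, so B2 is quarantined
]
accepted, quarantined = reconcile(obs)
```

In practice a real pipeline would weigh sources by reliability and check many attributes at once, but the quarantine idea is the same: disagreement triggers review rather than silent acceptance.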
At the same time, technology companies will still have the freedom to contribute to and improve open mapping solutions like OSM without being constrained to the priority decisions of a single mapmaker.
“Proposing such an ecosystem that uses an open base map and standardization — while enabling any company to associate, license and commercialize added content through independent map layers — would foster efficient collaboration and data sharing across a huge volume of companies and devices,” Harrell said. “Companies from across the world, big and small, would be able to collaborate and gain capabilities from the base map while licensing additional content and capabilities they need. This in turn will free up company resources to focus on innovation that is specific to their customers.”
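The layered ecosystem Harrell describes can be pictured as an open base map with independently licensed layers stacked on top. Here is a minimal sketch under invented names and data (not any vendor's actual schema), where later layers add or override per-feature attributes:

```python
# Hypothetical open base map: feature IDs mapped to shared attributes.
base_map = {
    "road:A1": {"name": "High Street", "lanes": 2},
    "road:B2": {"name": "Ring Road", "lanes": 4},
}

# Independent, separately licensable layers from different (made-up) providers.
ev_layer = {"road:A1": {"ev_chargers": 3}}
traffic_layer = {"road:B2": {"avg_speed_kmh": 47}}

def compose(base, *layers):
    """Merge layers onto the base map; later layers take precedence."""
    merged = {fid: dict(attrs) for fid, attrs in base.items()}
    for layer in layers:
        for fid, attrs in layer.items():
            merged.setdefault(fid, {}).update(attrs)
    return merged

# A company licenses only the layers it needs on top of the shared base.
world = compose(base_map, ev_layer, traffic_layer)
```

The design point is that each layer stands alone: the base map stays common property, while any provider can ship added content without touching, or waiting on, anyone else's data.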
Digital device democracy
In this theory, everyone and every device could collaborate and communicate through a single digital representation of the physical world, as well as add further detail and proprietary information to support their unique customer needs.
“While the combined resources of the world will significantly accelerate mapmaking, it will also enable technology companies to worry less about making their own maps and solely focus on turning location data into something useful — having the space, time, funds and resources to innovate, spur growth and remain competitive,” Harrell said.
This appears mostly to make sense: use a mix of proprietary mapping data alongside open data to make a better-mapped and safer world for all. A good example is the CoastSnap project, in which signs along an ocean cliff walk encourage walkers to photograph cliff erosion and post the images to social media each time they pass. That open data is combined with governmental data and some proprietary sources for a better planet.
The theory seems to hold water. Perhaps we’ll one day say “location, location, open collaborative data” all the way to the edge of the map.