Google Awareness API now tells Android apps where you are and what you're doing

Google recently launched its Awareness API, which lets developers add more contextual awareness to Android apps. The gains in convenience and efficiency come with obvious trade-offs in privacy.


Smartphone applications for both Android and iOS have been using data from the device to add more context to apps for quite some time, but that data was usually disparate, and each data point had to be accessed separately. Now, Google is bundling these "context" signals into a ready-made API for developers.

On Thursday, at the 2016 Google I/O conference, Google announced the Google Awareness API. The Awareness API provides more context about the app's user, so that the app can respond more intelligently.

SEE: Mobile app development policy template (Tech Pro Research)

Awareness collects and aggregates data from the smartphone's sensors to provide seven distinct data points:

  1. Current local time
  2. Location
  3. Specific place and type of place
  4. Activity, such as walking, running, or biking
  5. Nearby beacons and their content
  6. Whether headphones are plugged in
  7. Current weather conditions

The Awareness API is actually made up of two separate APIs: the Fence API and the Snapshot API. The Fence API lets developers set an app to react when the user is in a specific situation or when certain conditions are met. For example, a fence can be set to alert an app when the user plugs in his or her headphones and begins walking, so that the app can suggest a playlist.
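In code, that headphones-plus-walking scenario maps to two primitive fences combined with an AND condition. Here's a minimal Java sketch against the Awareness client library as documented at launch; it assumes a connected GoogleApiClient and a PendingIntent already wired up to receive fence callbacks (both omitted here), and the fence key and class name are illustrative:

```java
import android.app.PendingIntent;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.DetectedActivityFence;
import com.google.android.gms.awareness.fence.FenceUpdateRequest;
import com.google.android.gms.awareness.fence.HeadphoneFence;
import com.google.android.gms.awareness.state.HeadphoneState;
import com.google.android.gms.common.api.GoogleApiClient;

public class PlaylistFenceRegistrar {

    // Registers a fence that fires when the user is walking
    // AND has headphones plugged in.
    public void registerFence(GoogleApiClient client, PendingIntent callbackIntent) {
        AwarenessFence walking =
                DetectedActivityFence.during(DetectedActivityFence.WALKING);
        AwarenessFence headphonesIn =
                HeadphoneFence.during(HeadphoneState.PLUGGED_IN);
        AwarenessFence walkingWithHeadphones =
                AwarenessFence.and(walking, headphonesIn);

        Awareness.FenceApi.updateFences(
                client,
                new FenceUpdateRequest.Builder()
                        .addFence("walking_with_headphones",
                                walkingWithHeadphones, callbackIntent)
                        .build());
    }
}
```

When the combined condition becomes true, Google Play services delivers the callback through the PendingIntent, so the app can react even when it isn't in the foreground.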

The Snapshot API is the part of Awareness that lets an app request information about the user's current context on demand, such as location and activity.
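Where fences are event-driven, a snapshot is a one-off query. Here's a minimal sketch of asking for the user's most probable current activity, again assuming a connected GoogleApiClient; the class name is illustrative, and error handling is reduced to a status check:

```java
import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.snapshot.DetectedActivityResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class ContextSnapshot {

    // Asks Awareness for the user's most probable current activity.
    public void queryCurrentActivity(GoogleApiClient client) {
        Awareness.SnapshotApi.getDetectedActivity(client)
                .setResultCallback(new ResultCallback<DetectedActivityResult>() {
                    @Override
                    public void onResult(DetectedActivityResult result) {
                        if (!result.getStatus().isSuccess()) {
                            return; // Could not read the activity signal.
                        }
                        ActivityRecognitionResult ar =
                                result.getActivityRecognitionResult();
                        DetectedActivity probable = ar.getMostProbableActivity();
                        // probable.getType() is e.g. DetectedActivity.WALKING;
                        // probable.getConfidence() ranges from 0 to 100.
                    }
                });
    }
}
```

The Snapshot API exposes a similar one-call method for each of the seven signals, such as getHeadphoneState, getWeather, and getLocation.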

Of course, access to the sensors has long been available to developers, but Google is making it easier than ever to collect and use this data with the Awareness API. This ups the ante for developers as their competitors race to make their apps more intelligent.

Awareness also has implications for marketers and advertisers who now have a potentially easier path for delivering more relevant in-app ads, and pushing promotional material to consumers.

Contextually aware, smart services were also a key component of Google's keynote address on Wednesday, when it announced the new Google Assistant (replacing Google Now) and Google Home, its smart home hub.

In fact, Google seems to be betting big on technologies like automation and machine learning, trying to fold AI capabilities further into our everyday lives. The question then becomes: Is that something we really want?

SEE: AI, VR, messaging, and wearables: Everything you need to know from Google I/O 2016 (TechRepublic)

Of course, the major quandary created by Google's new products is how much of a hit user privacy will take with tools like Google Assistant and Awareness. Google has enabled end-user permissions for each of the seven context signals available through Awareness, but that could make for poor app experiences if apps come to rely heavily on this context.
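On Android 6.0 and later, the location-backed signals (location, place, and beacons) also sit behind the platform's standard runtime permissions, so an app has to ask before the corresponding Snapshot or Fence calls will succeed. Here's a minimal sketch of that gate using the v4 support library of the era; the class name and request code are illustrative:

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

public class LocationPermissionGate {

    private static final int REQUEST_FINE_LOCATION = 1; // arbitrary request code

    // Returns true if fine location is already granted; otherwise prompts
    // the user and returns false so the caller can retry after the prompt.
    public static boolean ensureLocationPermission(Activity activity) {
        if (ContextCompat.checkSelfPermission(activity,
                Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            return true;
        }
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.ACCESS_FINE_LOCATION},
                REQUEST_FINE_LOCATION);
        return false;
    }
}
```

If the user declines, the app has to degrade gracefully, which is exactly the design tension between context-rich experiences and user privacy described above.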

The 3 big takeaways for TechRepublic readers

  1. Google announced its Awareness API, which gives developers a bundled set of data points from smartphone sensors and could enable more contextual, intelligent apps.
  2. AI and machine learning are a growing aspect of Google's business, and the company will likely continue its push toward personalization that we've seen with Google Assistant and Google Home.
  3. Awareness raises some interesting questions about the cost of convenience for smartphone users, and whether they will have to sacrifice privacy for usability.

About Conner Forrest

Conner Forrest is a Senior Editor for TechRepublic. He covers enterprise technology and is interested in the convergence of tech and culture.
