Google's Soli gets real-world object recognition capabilities with RadarCat

St. Andrews University researchers have used Google's mini-radar, Soli, to perform advanced object recognition and categorization.

Image: Google

RadarCat, a new object recognition technology from researchers at St. Andrews University in the UK, could improve retail checkout processes, make waste sorting easier, and even help the blind.

The RadarCat tool was built using Google Soli, a miniature radar originally designed for gesture tracking, which first appeared at the 2015 Google I/O developer conference. St. Andrews was among the first wave of organizations to receive the Google Soli AlphaKit that year, and the team added its own recognition software to build RadarCat, according to a press release.

The name RadarCat is short for Radar Categorization for Input & Interaction. The proprietary software that the university built uses machine learning to help "train and classify different materials and objects, in real time, with very high accuracy," the release said.
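The release doesn't spell out how that classification works under the hood, but a system like this generally turns each radar capture into a feature vector and feeds it to a standard supervised classifier. Purely as an illustration, and not RadarCat's actual code, here is a minimal sketch in Python using scikit-learn, where the feature dimensions, labels, and data are all hypothetical placeholders:

```python
# Illustrative sketch of a radar-based material classifier; this is NOT
# RadarCat's actual code. Assumes each sample is a vector of signal
# features captured by the radar while an object sits on the sensor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: 200 radar snapshots x 64 signal features,
# each labeled with the material that was on the sensor during capture.
# Random placeholders like these yield only chance-level accuracy; real
# radar captures would carry the structure the classifier learns from.
X = rng.normal(size=(200, 64))
y = rng.choice(["porcelain", "ceramic", "glass_empty", "glass_full"],
               size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is a common, fast choice for tabular feature data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))

# At runtime, each new radar snapshot would be labeled the same way:
# clf.predict(new_snapshot.reshape(1, -1))
```

With real captures in place of the random placeholders, that same fit-then-predict loop is what would let a system label new objects in real time.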

SEE: How Google's gesture control technology could revolutionize the way we use devices

In practice, that means it can tell a user what object or material is placed in front of it. The system also recognizes transparent materials and different body parts. In one video, for example, it correctly distinguished between a person's bare calf and their pant leg.

Because RadarCat can determine whether a glass is empty, it could potentially be used to signal a waiter that a patron needs a refill. It can also tell the difference between the materials objects are made of, such as distinguishing a porcelain plate from a ceramic one. Since it can achieve this level of detail, RadarCat could eventually be used to help visually impaired users tell similar objects apart.

That level of detail could also help RadarCat upend traditional retail checkout systems. In a demo video, RadarCat is able to tell the difference between an apple and an orange. If the technology matures enough, it could disrupt the labeling industry as well.

Project Soli was originally developed by Google's Advanced Technology and Projects (ATAP) group. As noted, its original purpose was fine-grained gesture control, since radar is highly positional and can read subtle hand movements well.

Of course, gesture control is still on the table, as RadarCat is just one application of the Soli technology. Experts have previously posited that Soli could end up in connected cars and AR or VR applications as well.

The 3 big takeaways for TechRepublic readers

  1. St. Andrews University researchers used Google's Soli and machine learning to create an object recognition system called RadarCat.
  2. The system could be used to help visually impaired users differentiate between objects, or enhance retail checkout systems.
  3. Soli was originally intended for gesture control, and it could still be used for that, as RadarCat is just one application of the technology.
