Facebook recently announced automatic alternative text, a feature that describes the content of a photo as a user moves over it, giving blind users more context for the image.
As it has grown, Facebook has become the de facto digital photo album for many of its more than one billion active users. However, that growth has made it more difficult for visually impaired users to engage with the platform.
On Monday, April 4, Facebook introduced automatic alternative text, a feature that uses object recognition technology to form a description of a given photo as the user passes over it. While using the Facebook app on an iOS device, the feature will tell the user that the image "may contain three people, smiling, outdoors," according to the official Facebook press release.
SEE: Machine automation policy guidelines (Tech Pro Research)
Many blind smartphone users rely on screen reader software to respond to texts, compose emails, and surf Facebook. As the name would imply, the tool reads the text on a given screen aloud to the user. However, previous iterations could only tell the user that a photo was present; they could not describe the photo or give any context.
So, for example, if a user was scrolling through his or her Facebook feed, the screen reader would read out the name of the person who posted the photo and then simply say "Photo." Now, with automatic alternative text, Facebook is hoping it can better describe the content of photos for users who may be blind or visually impaired.
Across the company's properties (Facebook, Instagram, Messenger, and WhatsApp), two billion photos are uploaded daily. Facebook said that many of its blind users may feel left out, or that they aren't able to experience the platform the same way as others, which the company is hoping to rectify with this new feature.
As noted, the automatic alternative text feature, also known as automatic alt text, relies on object recognition technology. According to Facebook, its object recognition technology "is based on a neural network that has billions of parameters and is trained with millions of examples."
The current iteration of automatic alt text took 10 months to build and recognizes a set of 100 objects and scenes, which Facebook refers to as "concepts." These concepts range from objects such as baby, eyeglasses, and beard; to scenes such as mountain, snow, and sky; to foods such as ice cream, pizza, and coffee. The feature can also give users a count of the number of people in the image, and tell them whether or not it's a selfie.
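To make the mechanism concrete, here is a minimal sketch of how a "may contain" description could be assembled from concept predictions. This is a hypothetical illustration, not Facebook's actual code: the model's real output is produced by a large neural network, so the hand-written confidence scores, the 0.8 threshold, and the function name below are all assumptions for the example.

```python
# Hypothetical sketch: turning concept predictions into alt text.
# The scores below are stand-ins for a real model's output.

CONFIDENCE_THRESHOLD = 0.8  # only report concepts the model is confident about


def build_alt_text(concept_scores, people_count=0, is_selfie=False):
    """Compose a 'may contain' description from concept confidence scores."""
    parts = []
    if people_count:
        parts.append(f"{people_count} people" if people_count > 1 else "1 person")
    # Keep only concepts whose confidence clears the threshold
    parts += [c for c, s in concept_scores.items() if s >= CONFIDENCE_THRESHOLD]
    if is_selfie:
        parts.append("selfie")
    if not parts:
        return "Photo"  # fall back to the old screen-reader behavior
    return "Image may contain: " + ", ".join(parts)


print(build_alt_text({"smiling": 0.93, "outdoor": 0.88, "pizza": 0.41},
                     people_count=3))
# prints "Image may contain: 3 people, smiling, outdoor"
```

The threshold matters: a wrong description is worse for a blind user than no description, which is why the feature hedges every sentence with "may contain."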
SEE: Facebook: Our AI will give everyone 'superpowers' (TechRepublic)
The foundation of Facebook's visual recognition efforts is its computer vision (CV) platform, which allows machines to "see" and understand an image. The CV platform is a part of Facebook's grander ambitions in machine learning, which the company detailed at the end of 2015, hinting at some visual recognition capabilities.
At launch, automatic alt text will only be available on iOS screen readers set to English in the US, UK, Canada, Australia, and New Zealand. However, Facebook will be adding support for additional languages and platforms in the future.
The 3 big takeaways for TechRepublic readers
- Facebook has launched automatic alternative text, an AI tool that helps blind Facebook users by working with screen readers to speak aloud the objects within a photo.
- Automatic alternative text will be available on iOS screen readers set to English in a few markets to start, but the company plans on expanding its availability.
- Projects like automatic alternative text are a part of Facebook's larger ambitions in machine learning, which will prove to be a key aspect of the company's business in the future.