IBM Watson, Big Blue’s cognitive computing division, announced Tuesday that it was expanding the availability of Watson APIs with three new tools: Tone Analyzer, Emotion Analysis, and Visual Recognition. IBM Watson is also updating its Text to Speech (TTS) feature and rebranding it as Expressive TTS.

The announcement was made at IBM’s 2016 InterConnect conference in Las Vegas, Nevada. The APIs will extend developers’ ability to leverage the emotional and visual analysis capabilities that are becoming a foundational piece of cognitive computing.

“These latest API enhancements will enable developers to create cognitive solutions that offer expanded capabilities and reflect the different sensory dimensions of the human condition,” said IBM Watson vice president and CTO Rob High.

In a press release, David Kenny, general manager of IBM Watson, said the Watson updates serve to help the “community create dynamic AI infused apps and services.”

SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)

Many modern businesses rely heavily on text-based communication, and the tone of that text can influence how it is received and potentially affect the business. Watson’s new Tone Analyzer, released in beta, gives users a better look at the tone of their text communication by identifying emotions present, social leanings, and writing style.

Tone Analyzer can look at full sentences and identify emotions such as joy, disgust, fear, and sadness. It can also tell whether the writer comes across as open or agreeable, or seems confident, based on his or her style of writing. The results are returned as hierarchical JSON data. This tool could be useful for speech writing, gauging the tone of your emails before sending them, or helping a salesperson best word a presentation.
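As a rough sketch of what working with that hierarchical JSON might look like, here is a small Python snippet that pulls the strongest tone out of a response. The field names (`document_tone`, `tones`, `tone_name`, `score`) and the sample scores are illustrative assumptions, not the documented Watson schema:

```python
# Hypothetical sketch: parsing a Tone Analyzer-style hierarchical JSON
# response. Field names below are assumptions for illustration.

def top_tone(response):
    """Return the (name, score) pair of the strongest tone in a response dict."""
    tones = response["document_tone"]["tones"]
    best = max(tones, key=lambda t: t["score"])
    return best["tone_name"], best["score"]

# A made-up response in the assumed shape
sample = {
    "document_tone": {
        "tones": [
            {"tone_name": "joy", "score": 0.62},
            {"tone_name": "confident", "score": 0.81},
            {"tone_name": "sadness", "score": 0.07},
        ]
    }
}

print(top_tone(sample))  # ('confident', 0.81)
```

A salesperson could run a draft pitch through a check like this before a presentation and rework passages where, say, sadness outranks confidence.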

The second beta tool that IBM Watson announced was Emotion Analysis, part of the AlchemyAPI suite of APIs. Emotion Analysis uses natural language processing (NLP) to analyze outside input from customers and partners. It can analyze text samples up to 50KB in size, identifying emotions like anger, disgust, fear, joy, and sadness.

Users input the text itself, the HTML where it resides, or just a link to a particular web page, and the tool will go to work. The data can be returned in JSON or XML, depending on what your business needs. Organizations can use this tool to determine how customers are responding to a new product or marketing campaign.
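The three input modes and the output choice described above can be sketched as a small request builder. The parameter names (`text`, `html`, `url`, `outputMode`) are assumptions for illustration, not the documented AlchemyAPI interface; only the 50KB text limit comes from the announcement:

```python
# Hypothetical sketch of assembling an Emotion Analysis request:
# exactly one input (raw text, HTML, or a URL) plus an output format.

MAX_TEXT_BYTES = 50 * 1024  # the announced 50KB limit on text samples

def build_emotion_request(text=None, html=None, url=None, output="json"):
    """Assemble request parameters for an emotion-analysis call."""
    inputs = {k: v for k, v in {"text": text, "html": html, "url": url}.items() if v}
    if len(inputs) != 1:
        raise ValueError("provide exactly one of text, html, or url")
    if text and len(text.encode("utf-8")) > MAX_TEXT_BYTES:
        raise ValueError("text exceeds the 50KB limit")
    if output not in ("json", "xml"):
        raise ValueError("output must be 'json' or 'xml'")
    return {**inputs, "outputMode": output}

params = build_emotion_request(text="Customers love the new release!")
print(params)  # {'text': 'Customers love the new release!', 'outputMode': 'json'}
```

A marketing team could, for instance, pass the URL of a campaign landing page's comment section and request XML if its downstream tooling expects it.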

Visual Recognition is just what it sounds like: a way to identify the subject of an image. However, it goes a bit further than that. Typical visual search systems are built with pre-defined, or fixed, image classifiers or search terms. Visual Recognition can actually be trained by developers to identify images based on custom classifiers.

One example offered by IBM in a press release was a retailer creating a tag specific to a new pants style in its latest line, so it can identify when someone posts an image of themselves wearing the pants on social media. The analysis outputs as JSON.
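To make the retailer example concrete, here is a sketch of checking a Visual Recognition-style JSON result for a custom tag. The response shape, the classifier ID, the tag name, and the confidence threshold are all hypothetical, chosen only to illustrate the idea of a custom classifier:

```python
# Hypothetical sketch: scanning a Visual Recognition-style result for a
# custom tag above a confidence threshold. Field names are assumptions.

def matches_tag(result, tag, threshold=0.5):
    """True if any image in the result carries `tag` at or above `threshold`."""
    for image in result.get("images", []):
        for classifier in image.get("classifiers", []):
            for cls in classifier.get("classes", []):
                if cls["class"] == tag and cls["score"] >= threshold:
                    return True
    return False

# A made-up result for the retailer's pants scenario
sample = {
    "images": [{
        "classifiers": [{
            "classifier_id": "spring_pants_2016",  # hypothetical custom classifier
            "classes": [{"class": "slim_fit_pants", "score": 0.87}],
        }]
    }]
}

print(matches_tag(sample, "slim_fit_pants"))  # True
```

Batch-running such a check over a stream of social media images is how a retailer could surface customer photos featuring the new product line.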

SEE: IBM Watson: What are companies using it for? (ZDNet)

Text to Speech (TTS) is the capability of inputting text and having a computer “speak” for you. By adding “emotional IQ” to its existing TTS feature, Watson can understand the emotions behind the words a user inputs and add the proper inflection to its speech. IBM is calling this Expressive TTS, and it is now generally available.
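One way a developer might hint at the desired inflection is by wrapping input text in SSML-style markup before sending it to the service. The `<express-as>` element and the `GoodNews` tone value used here are assumptions for illustration, not confirmed Expressive TTS syntax:

```python
# Hypothetical sketch: wrapping text in SSML-style markup to request an
# expressive speaking tone. Element and attribute names are assumptions.

from xml.sax.saxutils import escape

def expressive_ssml(text, tone="GoodNews"):
    """Wrap text in SSML with an assumed expressiveness hint."""
    return f'<speak><express-as type="{tone}">{escape(text)}</express-as></speak>'

print(expressive_ssml("Your order shipped today!"))
# <speak><express-as type="GoodNews">Your order shipped today!</express-as></speak>
```

The escaping step matters in practice: user-supplied text with `<` or `&` characters would otherwise break the markup.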

IBM also announced minor updates to its SDKs, and the introduction of Application Starter Kits to make it easier to build Watson apps. Interested developers can find the three beta APIs and the Expressive TTS on IBM’s developer marketplace on Bluemix.

High said that the Watson developer community is more than 80,000 strong, and is a big part of advancing the Watson ecosystem.

“The developer community has introduced cognitive computing and artificial intelligence-infused apps to market at a rate that has tripled in less than a year,” High said.

The speed at which IBM is not only updating its Watson platform but also making it more open and available to developers is notable. It’s clear that IBM is trying to make Watson the de facto platform for the future of AI business applications.

The 3 big takeaways for TechRepublic readers

  1. IBM released Tone Analyzer, Emotion Analysis, and Visual Recognition as beta APIs, which will give developers a broader range of AI capabilities to play with as they build Watson applications.
  2. The tools are focused on analyzing both text and image inputs. Businesses can use these capabilities to examine the emotional content of their own messaging, better understand customer responses and comments, and track and identify images based on their own parameters.
  3. IBM’s new APIs are available on Bluemix, and the company seems to be continually making the Watson platform more open. It’s possible that IBM is positioning Watson as the core cognitive computing platform for the enterprise.