Researchers from the MIT-IBM Watson AI Lab, Tulane University and the University of Illinois this week unveiled research that allows a computer to more closely replicate human-based reading comprehension and inference.
The researchers have created what they termed “a breakthrough neuro-symbolic approach” to infusing knowledge into natural language processing. The approach was announced at the AAAI-20 Conference taking place all week in New York City.
Reasoning and inference are central to both humans and artificial intelligence, yet many enterprise AI systems still struggle to comprehend human language and textual entailment, the task of determining whether the meaning of one natural language sentence can be inferred from another, according to IBM.
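To make the task concrete, here is a minimal sketch of what an entailment system decides. The three-way label set is standard for the task; the deliberately naive word-overlap baseline and the example sentences are illustrative assumptions, not the researchers' model.

```python
# Textual entailment: given a premise and a hypothesis, assign one of
# three standard labels. This baseline is intentionally naive: it calls
# "entailment" only when every hypothesis word appears in the premise.

LABELS = ("entailment", "contradiction", "neutral")

def naive_entailment(premise: str, hypothesis: str) -> str:
    premise_words = set(premise.lower().split())
    hypothesis_words = set(hypothesis.lower().split())
    if hypothesis_words <= premise_words:  # subset check on word sets
        return "entailment"
    return "neutral"

print(naive_entailment("a dog is running in the park", "a dog is running"))
# prints "entailment"
print(naive_entailment("a dog is running in the park", "a cat is sleeping"))
# prints "neutral"
```

A shallow matcher like this is exactly the kind of "text-based" model the researchers say needs outside knowledge to become robust.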
In the paper, the researchers wrote that they are presenting an approach that complements text-based entailment models, a fundamental task in natural language processing, with information from external knowledge sources.
The use of external knowledge makes the model more robust and improves prediction accuracy, the researchers wrote. They said they found "an absolute improvement of 5-20% over multiple text-based entailment models."
Sentiment analysis is in use today, Cox said. "A relative understanding of shallow text will give a solution." But to read a science textbook and then pass a quiz, for example, you need a deep understanding of what the data in the textbook actually means.
The team found that combining knowledge graphs, which are representations of things that are known, with neural networks "was more powerful than any methods that have come before that just relied on neural networks without knowledge graphs," he said. "This mining of ideas was more effective."
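The idea of letting a knowledge graph help a text model can be sketched in a few lines: before comparing words directly, expand the premise with terms the graph says are related. The tiny triple store and helper names below are illustrative assumptions for this article, not the researchers' actual system.

```python
# A toy knowledge graph as (head, relation) -> tail triples. In a real
# system this would be a large external resource, not a hand-written dict.
KNOWLEDGE_GRAPH = {
    ("dog", "is_a"): "animal",
    ("park", "is_a"): "outdoor area",
}

def expand_with_graph(words: set) -> set:
    """Add graph-related terms to a set of premise words."""
    expanded = set(words)
    for (head, _relation), tail in KNOWLEDGE_GRAPH.items():
        if head in words:
            expanded.update(tail.split())
    return expanded

def entails_with_knowledge(premise: str, hypothesis: str) -> bool:
    premise_words = expand_with_graph(set(premise.lower().split()))
    return set(hypothesis.lower().split()) <= premise_words

# Plain word overlap fails here, but the graph supplies "dog is_a animal":
print(entails_with_knowledge("an old dog is running", "an animal is running"))
# prints "True"
```

The design point the quote is making: the neural text model alone never sees that a dog is an animal, so injecting that fact from an external graph lets the system make inferences the text by itself cannot support.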
Cox stressed that the researchers are in the very early stages of this work, but said he believes it is a technology "that we think will impact many industries."