Robots may soon be able to emulate human understanding, as researchers from IBM and MIT partner to use models of the human brain to advance machine vision.
On Tuesday, IBM Research and MIT’s Department of Brain and Cognitive Sciences announced a multi-year joint effort to develop cognitive computing systems that can mimic a human’s ability to understand multiple sources of audio and visual information at once.
For example, a person can watch a video, and then describe what happened in it and also make predictions about what might happen next. While this is simple for a human, it remains incredibly difficult for a computer to achieve, researchers said in a press release.
The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C) will open this month in Cambridge, MA, with facilities at both the IBM Research Center and the MIT campus. Researchers will attempt to program machines that can recognize patterns and make predictions based on things they see and hear.
MIT Professor Jim DiCarlo, currently the head of the Department of Brain and Cognitive Sciences, will lead the BM3C. Researchers from both institutions will build on the IBM Watson platform to make further AI advances.
“Our brain and cognitive scientists are excited to team up with cognitive computing scientists and engineers from IBM to achieve next-generation cognitive computing advances as exposed by next-generation models of the mind,” DiCarlo said in a press release. “We believe that our fields are poised to make key advances in the very challenging domain of unassisted real-world audio-visual understanding and we are looking forward to this new collaboration.”
Computers with advanced machine vision capabilities could eventually be used in healthcare, education, and entertainment, according to a press release. “The vision is that this integrated cross-disciplinary research will lead to advances that are likely to change both our personal and professional lives–from helping clinicians improve elderly and disabled care to helping organizations maintain and repair complex machinery as well as a host of cross-industry applications,” the press release stated.
“In a world where humans and machines are working together in increasingly collaborative relationships, breakthroughs in the field of machine vision will potentially help us live healthier, more productive lives,” said Guru Banavar, chief scientist of cognitive computing and vice president at IBM Research, in a press release.
IBM has become increasingly interested in the potential of AI, and in August, presented the White House with a lengthy response to a request for information on preparing for the technology’s future.
While AI developments have already helped researchers in a variety of fields, it’s important to remember that we are still in the early stages of tapping this technology’s full capabilities, said Banavar in a blog post on Tuesday. Banavar also announced new partnerships with five other universities to further developments in optimized systems, cybersecurity, conversational technology, and deep learning, among other subjects.
IBM also provides courses on various cognitive computing topics to more than 250 universities, allowing students to access Watson technology.
The 3 big takeaways for TechRepublic readers
- IBM Research and MIT announced a new partnership to use models of the human brain to develop AI systems that can understand multiple sources of audio and visual information at once.
- The Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C) will open this month, with researchers trying to program machines to recognize patterns and make predictions based on audio and visual information presented.
- IBM is also partnering with five other universities to advance AI research on a variety of topics.