Ten years ago, Jeet Samarth Raut’s mother was told she was cancer-free by her doctor. But it was a false negative.

The mistake, Raut believed, happened partly because they were in a small town in rural Illinois and the radiologist was not a specialist. It was also a rare kind of breast cancer, he said. And medical scans are, in general, difficult to read.

The experience with his mother "highlighted the fact that depending on where you live, the radiologists reading your scan have highly varying levels of knowledge." To help doctors diagnose illnesses more accurately, Raut and fellow Columbia student Peter Wakahiu Njenga founded Behold.ai, an AI program that helps identify abnormalities in medical scans.

Behold.ai is not trying to replace what a radiologist does. Instead, it's meant to make reading scans faster and more accurate.

“It’s like when you write an essay and there’s a spell check to make sure that you catch the errors,” Raut said.

SEE: AI app uses social media to spot public health outbreaks

And the number of images that need to be read is continuing to grow. According to Njenga, the number of scans has been increasing by 14% to 26% per year since 2006. In 2006, there were 180 CT scans and 72 MRIs per 1,000 people worldwide. The numbers, Njenga said, "show the need for some kind of automated system to parse over the images in order to help the radiologists cope with the volume."

Behold.ai's system works by looking at images and giving doctors suggestions, based on learning from similar medical scans. "Computers have become increasingly adept at identifying objects in images," said Raut. "There's the Amazon Fire phone, which can scan a picture and if it's a product on Amazon, it will find it for you." And Facebook, he said, can look at a photo and identify the person in it.
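The idea of suggesting a diagnosis by comparison with previously labeled scans can be sketched as a nearest-neighbor lookup. The toy example below is an illustration of that general technique, not Behold.ai's actual method; the feature vectors and labels are invented, standing in for statistics a real system would extract from scan images.

```python
import math

# Hypothetical training data: each scan reduced to a small feature
# vector (e.g., texture/density statistics), paired with its label.
labeled_scans = [
    ([0.2, 0.1, 0.9], "normal"),
    ([0.8, 0.7, 0.3], "nodule"),
    ([0.3, 0.2, 0.8], "normal"),
    ([0.9, 0.6, 0.2], "nodule"),
]

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest(features, k=3):
    """Suggest the majority label among the k most similar labeled scans."""
    nearest = sorted(labeled_scans, key=lambda s: euclidean(s[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(suggest([0.85, 0.65, 0.25]))  # prints "nodule"
```

A production system would learn features from thousands of real scans rather than a hand-built list, but the principle is the same: a new image is judged by its similarity to cases the system has already seen.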

SEE: Chopra: Health care is about to get a lot more entrepreneurial, data will improve personalized health

“There’s a lot of advances in facial recognition that we wanted to adapt to medicine,” he said, “because it’s about determining where the nodules, aneurysms, and things like that are.”

Through partnerships with hospitals, Behold.ai is using data sets from real patients to ensure that its learning system is trained on quality data (they don't want to make a mistake like Microsoft's chatbot Tay, which learned how to "chat" through conversations with people on Twitter).

AI currently has various applications in medicine. It can be used to scan the text of medical records to help identify what's wrong. There are also predictive approaches, which use demographics to flag possible illnesses, and systems that analyze social media posts to spot health risks.

Behold.ai is, undoubtedly, one of the smaller players in the field. The giant, Njenga said, is IBM Watson.

Still, both big and small players alike face hurdles. Before hospitals can use software like Behold.ai for diagnosis, it must first clear FDA review, which no one, including IBM, has yet done. And while computer-aided diagnostic tools have been used for decades, the technology has changed.

SEE: New IBM Watson app to predict low blood sugar in diabetics

But Njenga and Raut think the software will have the ability to transform the way diagnoses are made.

Many radiologists they spoke to are under pressure, often being paid based on how quickly they read scans, and are looking for tools to help. The software could also save hospitals money. Currently, many protocols call for "double-readings," requiring two radiologists to agree on a scan. This can take hours, or even days, whereas Behold.ai's system can return a result in seconds.

“To be fair,” said Njenga, “many radiologists can determine the result of a scan that fast.”

“In big cities, that can be the case. But there are other places where a radiologist like that does not exist.”
