TechRepublic’s Karen Roby spoke with Ira Cohen, chief data scientist at Anodot, a business analytics platform, about the adoption of artificial intelligence (AI) in healthcare. The following is an edited transcript of their conversation.
SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)
Karen Roby: We’re seeing the healthcare industry evolving in front of our eyes. There are plenty of offices that had never even considered telehealth but have now been thrust into it. AI is certainly playing a big role in that.
Ira Cohen: I think the pandemic was actually a unique point in time. We see it even though we don’t play in that area. When the pandemic started, we started talking about various health use cases with healthcare providers that would never talk to us before, would never even consider it. We said, “Do you have data? Can you give it to us for testing?” “Yeah, sure, here. Let’s get an approval.” Helsinki approval for doing all sorts of testing that before would have taken two years. And this changed in a minute. And I see it with other companies. The healthcare industry is going to embrace AI very quickly and very significantly, and a big part of it is because of this. It was accelerating before, but this gave it a huge boost of acceleration. And I think we’ll see the benefit of it in the next few years. Most definitely.
SEE: 800% surge in VA telehealth visits during COVID-19 pandemic with a boost from T-Mobile (TechRepublic)
Karen Roby: And when you talk to leaders within the healthcare industry, what are their biggest concerns about moving forward and continuing to progress? And what’s it going to take to get there? Or do they even understand enough? A lot of people don’t really realize what AI is capable of.
Ira Cohen: I think the biggest concern is always privacy: making sure that things remain private, and that there’s no leakage of information through these AI initiatives, or even while they’re being used. I would say that’s probably one of the biggest concerns. The other thing that makes them skeptical is adoption. They always ask, will it be adopted by physicians, by providers? Even if it’s accurate, there’s always this tension: do we trust this machine to tell us what’s correct or not? And a lot of the AI out there today doesn’t necessarily know how to explain why it said something. It tells you that you have cancer, but it can’t tell you why. All your data told us that, but why? And that’s where the gap is in making good use of a lot of the AI that exists today.
SEE: Natural language processing: A cheat sheet (TechRepublic)
But there is a lot of work being done constantly on bridging those gaps. So I would say these are the two things: the privacy, which can lead people to not trust the whole system, and the gap in what AI can really do today. It can be very accurate, but it’s not very good at telling you why it thinks something is A or B, why it thinks you have cancer or you don’t have cancer.
Karen Roby: When you talk to these companies, whether it’s a big healthcare organization or a smaller entity, is there such a thing as just dipping your toes in when it comes to AI? Or do you have to go all in with full adoption?
Ira Cohen: No. Definitely dipping your toes. You can take a small problem and try to solve it. For example, I’ve heard of efforts to do better monitoring of ICU patients, which is very relevant to the pandemic, especially in situations where you have many more patients at the same time than normal. Normally, you’d have one nurse per ICU patient. All of a sudden you have one nurse for five, and it changes the game completely. Those are small scenarios, dipping into the water: you show results and then move forward from there. I don’t think anybody’s embracing it wholesale.