On July 27, 2018, news broke that a cancer-treatment collaboration between the Sloan Kettering Research Institute and IBM was producing treatment recommendations that were unsuited to the patients they were prescribed for.

In one case, a 65-year-old man was prescribed a drug that could lead to “severe or fatal hemorrhage” even though he was already suffering from severe bleeding.

When investigators dug deeper, they found that IBM engineers and Sloan Kettering medical doctors had fed hypothetical patient data to IBM’s Watson, which generated the treatment recommendations. The initial thinking was that training on hypotheticals (instead of real patient data) skewed the AI and resulted in multiple unsafe or incorrect treatment recommendations.

At first blush, it is easy to slam the AI–but should we?

The most important lesson to be learned from Watson, or from any other AI technology being trialed in business right now, is that AI isn’t perfect. And as long as AI “training” relies on systems engineers and subject matter experts, AI will have a lot to offer, but it will also take years to perfect.


We see AI’s limitations in other examples besides Watson. These include:

The automated call attendants and online self-help facilities that give you the answer to everything except what you want to know

The online AI surveys that purport to tell you about yourself and then tell you that you are from the Boston area, although you’ve never been to New England

So if you’re an IT practitioner running an AI project, what current best practices can you use for the greatest success in an evolving area of technology? Here are some issues to consider.

1. AI is an iterative process that requires constant human and machine interaction

When an AI system was being trained in China so that data gathered at urban hospitals could support stroke treatment in remote rural areas that might not have trained medical personnel, many patient cases and treatments were entered into data repositories. Analytics and algorithms were continuously re-run and refined until the AI’s diagnoses agreed with those of a highly skilled medical doctor 99.9% of the time.
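That kind of train-evaluate-refine loop can be sketched in a few lines. The example below is a minimal, hypothetical illustration, assuming a tabular set of patient cases labeled with expert diagnoses and scikit-learn as the modeling library; load_patient_cases() and AGREEMENT_TARGET are stand-ins for illustration, not details of the Chinese project.

```python
# Minimal sketch of an iterative train-evaluate-refine loop (assumptions noted above).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

AGREEMENT_TARGET = 0.999  # stop refining once the model matches expert labels this often

def load_patient_cases():
    """Placeholder: return (features, expert_diagnoses) from the case repository."""
    data = load_breast_cancer()  # stand-in clinical dataset for the sketch
    return data.data, data.target

X, y = load_patient_cases()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

agreement, n_trees = 0.0, 10
while agreement < AGREEMENT_TARGET and n_trees <= 640:
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    model.fit(X_train, y_train)
    agreement = accuracy_score(y_test, model.predict(X_test))
    print(f"{n_trees} trees -> {agreement:.1%} agreement with expert labels")
    n_trees *= 2  # refine the model and re-run, as in the iterative process above
```

In practice each pass also involves people: clinicians review the cases the model gets wrong, and engineers adjust the data and the algorithms, not just the model size.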

2. It’s important to eliminate analytical biases

Heart disease is still an area of higher risk for women because most heart disease studies have been run on men, who exhibit different symptoms and require different treatments. It is an example of how standard medical practice can be biased and not equally effective for all patients. If you carry those assumptions forward into an AI system, your system will be biased too.
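A simple place to start is auditing both the training data and the model’s results by group. The sketch below is a hypothetical example using pandas; the column names ("sex", "diagnosis", "prediction") and the toy data are assumptions for illustration only.

```python
# Hypothetical bias audit: check group representation and per-group accuracy.
import pandas as pd

def group_representation(df: pd.DataFrame, group_col: str = "sex") -> pd.Series:
    """Share of training cases in each group; a heavy skew flags sampling bias."""
    return df[group_col].value_counts(normalize=True)

def group_accuracy(df: pd.DataFrame, pred_col: str, label_col: str,
                   group_col: str = "sex") -> pd.Series:
    """Accuracy per group; a large gap flags a model that underserves one group."""
    return (df[pred_col] == df[label_col]).groupby(df[group_col]).mean()

# Toy data: 80% of cases are male, and the model is far less accurate for women.
cases = pd.DataFrame({
    "sex": ["M"] * 80 + ["F"] * 20,
    "diagnosis": [1, 0] * 50,
    "prediction": [1, 0] * 40 + [0] * 20,
})
print(group_representation(cases))                       # M 0.8, F 0.2
print(group_accuracy(cases, "prediction", "diagnosis"))  # F 0.5, M 1.0
```

If either check shows a large skew or gap, the fix is usually in the data collection and the assumptions behind it, not just in the model.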


3. AI is a continuous learning experience

The idea of electricity might have come about as early as 600 BC, but it took centuries to make electricity a part of everyday life. As we learn more about conditions and events, our body of knowledge grows and we come up with new ideas. AI is no different. Its ability to analyze and predict depends on how much it takes in from people and machines, and on how continuously its algorithms are refined.

4. There is no substitute for skilled practitioners

There will never be a substitute for a highly skilled surgeon, engineer, attorney, or mechanic who has first-hand experience with the many cases that don’t go by the book but still need to be resolved. These anomalies are where AI logic often falls short, and they are a reason why you still need human practitioners working alongside AI tools.

5. The sweet spot for human-machine work must be found

What AI brings to business is a way to process large volumes of data in seconds and to apply algorithms that produce hypotheses and predictions from that data. What humans working alongside these machines bring is practical know-how and experience, which goes beyond data crunching and into the domain of the unknown and the unexpected. Finding the right balance between the two is the most important thing an IT project manager can do when AI is inserted into company operations.
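One concrete way to look for that balance is a human-in-the-loop triage rule: let the model act on predictions it is confident about, and route everything else to a person. The sketch below is a hypothetical illustration; the threshold value and the function and field names are assumptions, not a prescribed design.

```python
# Hypothetical triage rule: the model decides when confident, a person decides otherwise.
from dataclasses import dataclass
from typing import Callable

REVIEW_THRESHOLD = 0.90  # below this confidence, a human makes the call

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def triage(case, model_predict: Callable, human_review: Callable) -> Decision:
    label, confidence = model_predict(case)
    if confidence >= REVIEW_THRESHOLD:
        return Decision(label, confidence, "model")
    # The model is unsure: hand the case, plus its suggestion, to a person.
    return Decision(human_review(case, label), confidence, "human")

# Toy usage: stand-ins for a real model and a real reviewer.
fake_model = lambda case: ("treat with drug A", 0.72)
fake_reviewer = lambda case, suggestion: "refer to a specialist"
print(triage({"age": 65, "bleeding": True}, fake_model, fake_reviewer))
```

Logging which cases get routed to humans, and why, also gives the project the data it needs to keep adjusting where that threshold should sit.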


Your take

Have you been tasked with running an AI project? What obstacles or surprises have you encountered? Share your experiences and advice with fellow TechRepublic members.