Is there an exact quality standard that fits all analytics? No, but there are ways to decide what works for each project.
Analytics identifies and defines problems, extracts key information from data and recommends ways to solve the issues. What works in one context doesn't necessarily apply in another, so analytics is nothing like the black-and-white quality testing that is performed on transactional systems, where a specific result is either correct or it isn't.
SEE: Electronic Data Disposal Policy (TechRepublic Premium)
This makes obtaining quality results from analytics all the more challenging, because to some degree you must make a subjective judgment about whether your results are good enough.
How do you really know?
The quality standard for most analytics is that their results must agree at least 95% of the time with what subject matter experts would conclude. For example, if analytics are evaluating a medical image, they must come within 95% of what an expert radiologist would diagnose.
The only way to reach this degree of accuracy is by running the analytics against thousands of radiology results that were correctly analyzed by expert radiologists and measuring how often the analytics arrive at the same conclusions. If the analytics reach 95% accuracy or better, they have been sufficiently refined and tuned and are ready to be deployed in production to interpret X-rays and MRIs. Even then, however, the hospital will want an expert radiologist's ultimate opinion on what the analytics have evaluated.
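The validation loop described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the label values and the 95% threshold are stand-ins for whatever the expert-labeled benchmark actually contains.

```python
# Hypothetical sketch: compare analytics output against expert-labeled
# results and gate deployment on an agreed accuracy threshold.

def accuracy(predictions, expert_labels):
    """Fraction of cases where the analytics agree with the experts."""
    matches = sum(p == e for p, e in zip(predictions, expert_labels))
    return matches / len(expert_labels)

def ready_for_production(predictions, expert_labels, threshold=0.95):
    """True once agreement with expert diagnoses meets the threshold."""
    return accuracy(predictions, expert_labels) >= threshold
```

In practice the comparison would run over thousands of cases, and the threshold itself is a decision to be made with the experts, not a constant baked into code.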
SEE: Snowflake data warehouse platform: A cheat sheet (free PDF) (TechRepublic)
The analytics quality process is no different in logistics, manufacturing, finance or market research. Typically, the outcomes of an analytics application must be within 95% accuracy of what subject matter experts would deduce. Until the application reaches that 95% threshold that the industry seems to accept, it can't be fully deployed (or trusted) in production. Or can it?
"When evaluating the sentiment (positive, negative, neutral) of a given text document, research shows that human analysts tend to agree around 80-85% of the time," said Paul Barba, chief scientist at Lexalytics, which provides sentiment and intent analysis to companies. "This is the baseline we (usually) try to meet or beat when we're training a sentiment scoring system. But this does mean that you'll always find some text documents that even two humans can't agree on, even with their wealth of experience and knowledge."
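Barba's point about human analysts agreeing only 80-85% of the time suggests a simple way to set a baseline: measure percent agreement between annotators before judging the model. A minimal sketch, with illustrative labels:

```python
# Hypothetical sketch: percent agreement between two human analysts on
# sentiment labels, used as the baseline a scoring system must meet or beat.

def percent_agreement(labels_a, labels_b):
    """Share of documents where both analysts chose the same sentiment."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

analyst_1 = ["positive", "negative", "neutral", "positive", "negative"]
analyst_2 = ["positive", "negative", "positive", "positive", "negative"]

baseline = percent_agreement(analyst_1, analyst_2)
```

Raw percent agreement ignores chance agreement; a chance-corrected statistic such as Cohen's kappa is often preferred when labels are imbalanced.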
Use cases make the difference
The message is that the degree of quality you set for evaluating the soundness of your analytics, and for going ahead with deployment, depends on the degree of accuracy that actual subject matter experts achieve themselves. In some cases, the required accuracy will be higher; in others, lower.
What IT must do, then, is evaluate the use case for each analytics application together with the degree of precision it demands. If the use case is analyzing the results of an X-ray or an MRI, accuracy must be extremely high. If the analysis targets a less precise use case, such as gauging human behavior and sentiment, the achievable accuracy is apt to be lower. In all cases, it is paramount that IT/data science and end users agree upfront on the required degree of accuracy before any analytics application is developed and deployed.
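That upfront agreement can be made concrete by recording a threshold per use case and gating deployment on it. The use cases and figures below are illustrative only, echoing the examples in this article:

```python
# Hypothetical sketch: accuracy bars agreed between IT/data science and
# end users before development begins. Values are illustrative.

AGREED_THRESHOLDS = {
    "medical_imaging": 0.95,     # must approach expert-radiologist agreement
    "sentiment_analysis": 0.80,  # humans themselves agree only ~80-85%
}

def may_deploy(use_case, measured_accuracy):
    """Compare a model's measured accuracy to the agreed-upon bar."""
    return measured_accuracy >= AGREED_THRESHOLDS[use_case]
```

Keeping the thresholds in one place makes the agreement auditable: when stakeholders revisit the bar, the change is a single, visible edit rather than a buried constant.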
- Geospatial data is being used to help track pandemics and emergencies (TechRepublic)
- Akamai boosts traffic by 350% but keeps energy use flat thanks to edge computing (TechRepublic)
- How to become a data scientist: A cheat sheet (TechRepublic)
- Top 5 programming languages data admins should know (free PDF) (TechRepublic download)
- Data Encryption Policy (TechRepublic Premium)
- Volume, velocity, and variety: Understanding the three V's of big data (ZDNet)
- Big data: More must-read coverage (TechRepublic on Flipboard)