The annual planning cycle resets many things, including your performance targets. If your company is like most, your targets are either too aggressive to be believable or too conservative to be meaningful.
For instance, you might look at your gross revenues for last year and decide you'll do 10% or 15% more this year, but how do you know it shouldn't be 20% or even 100%? If you're not sure, you might ask your data scientists for help.
Year after year, strategists leave potential revenue, cost savings, and operational efficiencies on the table because they don't have good methods for establishing better targets. Given the right direction, however, your data scientists do. If you have data scientists currently working on big data analytics solutions, task them with figuring out your theoretical best performance targets.
SEE: Job description: Data scientist (Tech Pro Research)
Understanding your theoretical best
Once you've decided on the metrics that will gauge your success—your key performance indicators (KPIs)—it's best to establish targets based on the concept of a theoretical best. As its name implies, a theoretical best performance target is not anchored to past performance, industry benchmarks, or unsubstantiated aspirations; it is a carefully calculated target based on what should be possible.
There have always been good techniques for calculating a theoretical best target, but the age of big data analytics brings with it the opportunity for a lot more accuracy.
Calculating theoretical best targets for your KPIs and other measures should be one of your top priorities. Without the rigor of data science, you run a real risk of getting them wrong. For instance, if you set your sales goals too low, you'll limit your revenue potential; but if you set them too high, you'll frustrate and demotivate your sales team. The same applies to any measure that's important to your business: cost, safety, quality, reliability, and so on. Getting your targets right involves data science with both big data and analytics.
Using big data to calculate your theoretical best
The fact that your data is big plays a significant role in calculating a theoretical best. Big data implies some combination of high volume, high throughput, and a variety of data forms, and all three contribute to the accuracy of your theoretical best calculation.
When siloed data is consolidated and aggregated, it becomes a valuable resource for understanding what's possible. Operational intelligence is a good example. When you combine operational data from a single processing facility with data from similar facilities around the world, it broadens your perspective: a plant manager in Argentina might feel her plant's performance is good, but a quick review of how a comparable plant in Germany is doing might change her mind.
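As a minimal sketch, that kind of cross-plant comparison might start with a simple consolidation and gap-to-best benchmark. The plant names and yield figures below are invented for illustration:

```python
# Hypothetical sketch: consolidating siloed plant metrics to see what
# "good" looks like across the fleet. All records are invented.
from collections import defaultdict
from statistics import mean

# Daily yield (%) as reported by each facility's local system.
records = [
    ("Buenos Aires", 91.2), ("Buenos Aires", 92.0), ("Buenos Aires", 90.8),
    ("Hamburg",      96.1), ("Hamburg",      95.7), ("Hamburg",      96.4),
    ("Houston",      93.5), ("Houston",      94.0), ("Houston",      92.9),
]

# Aggregate the formerly siloed records by plant.
by_plant = defaultdict(list)
for plant, yield_pct in records:
    by_plant[plant].append(yield_pct)

# Each plant's average vs. the best average in the fleet; the gap is a
# first, crude estimate of untapped potential.
averages = {plant: mean(vals) for plant, vals in by_plant.items()}
benchmark = max(averages.values())
for plant, avg in sorted(averages.items()):
    print(f"{plant:12s} avg={avg:5.2f}%  gap to best={benchmark - avg:.2f}pp")
```

Even this crude gap-to-best view turns local numbers that each look fine in isolation into a first estimate of what's possible.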
High throughput adds value as well. Before big data, we sampled data that was moving too fast to process. Sampling is better than nothing, but it comes with error and confidence issues. Big data techniques for processing fast-moving data let us include every data point in the analysis, and when every data point is at your disposal, it's common to uncover gems that demonstrate possibilities you never imagined.
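A toy illustration (with simulated numbers) of why sampling can hide what's possible: the best outcome ever achieved may simply never land in a small sample.

```python
# Simulated example: a 1% sample of fulfillment times can miss the
# fastest times ever recorded. The distribution here is invented.
import random

random.seed(7)
# Full population: 100,000 order-fulfillment times (hours), skewed right.
population = [random.gammavariate(9.0, 2.0) for _ in range(100_000)]

# A 1% sample, as a pre-big-data pipeline might have captured.
sample = random.sample(population, 1_000)

full_best = min(population)   # the best ever actually achieved
sample_best = min(sample)     # the best the sampled view can see

print(f"Best in full data:  {full_best:.2f} h")
print(f"Best in 1% sample:  {sample_best:.2f} h")
```

By construction the sample can never show a better time than the full data, so a sampled view systematically understates the theoretical best.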
Finally, big data provides the opportunity to process data in unconventional forms. Unstructured data in documents, videos, audio, and social media streams contains a wealth of information that was never available before. Like high volume and high throughput, the ability to process unstructured data can provide valuable insights into what your theoretical best looks like.
SEE: Free ebook—Executive's guide to IoT and big data (TechRepublic)
Using analytics to calculate your theoretical best
Once this wealth of data is available, sophisticated analytic techniques can be used to calculate your theoretical best. There are a variety of algorithms at your data scientists' disposal, both traditional and non-traditional: large populations of past data give statistical methods a very high degree of confidence, and neural networks, genetic algorithms, and other novel techniques can be applied as well. With a little guidance from you, your data scientists won't have any problem analyzing the data and calculating theoretical best performance targets.
The most important piece of direction is to forgo the notion of using past performance to predict future performance; that is exactly what a theoretical best target is meant to avoid. Instead, focus on leading indicators. KPIs are typically lagging indicators: revenue, for instance, is an effect of causes like good sales practices and customer service. If your data scientists focus on the metrics that drive revenue rather than on revenue itself, they can calculate a more accurate projection of your theoretical best revenue.
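To make the leading-indicator idea concrete, here is a hedged sketch that fits revenue against one hypothetical driver (qualified leads per month) and projects revenue at the best driver value ever observed, rather than extrapolating last year's revenue forward. All figures are invented, and a real analysis would use many drivers:

```python
# Hypothetical sketch: project a theoretical best revenue target from a
# leading indicator instead of from past revenue alone. Numbers invented.
from statistics import mean

leads = [120, 135, 150, 160, 175, 180, 190, 210]            # leading indicator
revenue = [1.10, 1.22, 1.35, 1.44, 1.58, 1.61, 1.70, 1.88]  # $M, lagging KPI

# Ordinary least-squares fit of revenue on leads (one driver for clarity).
x_bar, y_bar = mean(leads), mean(revenue)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(leads, revenue))
         / sum((x - x_bar) ** 2 for x in leads))
intercept = y_bar - slope * x_bar

# Project revenue at the best lead volume the team has ever sustained.
best_leads = max(leads)
theoretical_best = intercept + slope * best_leads
print(f"Projected theoretical best: ${theoretical_best:.2f}M")
```

With several drivers, projecting each at its best observed value typically yields a target above anything in the historical record, which is the point: the target reflects what the causes make possible, not what the effect happened to be.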
Smart companies don't rely on the past to set future targets; they calculate theoretical best performance targets. The technology and disciplines surrounding big data analytics bring possibilities to this calculation that never existed before: big data greatly increases the availability of relevant data, and advanced analytics provides higher accuracy.
Even if you've already established performance targets for the year, it's not too late to make adjustments—talk to your data scientists today to see if they can help. Until then, you won't know what's possible.
- 2017: The year data science will live up to its potential (TechRepublic)
- How to make yourself a DIY data scientist (TechRepublic)
- Data scientist: Your mileage may vary (TechRepublic)
- Is machine learning icing on the cake for data scientists? (ZDNet)
John Weathington is President and CEO of Excellent Management Systems, Inc., a management consultancy that helps executives turn chaotic information into profitable wisdom.