A major driver of big data projects is trend prediction — whether for retail consumer sales, healthcare wellness trends, climate change outcomes, or energy consumption patterns. Some companies take their big data trends seriously; they seek answers from long-term big data analyses that evaluate not five or ten years, but 50 to 100 years. A lot can happen in a century — or even in a year! Groundwater evaluation is a good example.

Depending on what is happening underground, the character of water can change from year to year. Water can suddenly begin to carry sulfur or varying levels of iron, magnesium, or other minerals. This can affect the quality of drinking water, as every rural landowner knows.

It is no different with long-term business trends.

For instance, it might be easy to predict that certain areas of the world will be exposed to more natural disasters because of global warming, but what if an unexpected climate shift occurs that no one imagined? Or, it might seem straightforward to predict from projections of today’s data trends that childhood obesity will create more stress on the healthcare system, but what if new treatments are developed, and this problem substantially diminishes?

The emerging challenge for businesses as they manage their big data trends information is determining each trend’s “shelf life.” For instance, if a long-term climate trend is actionable in project planning today, at what point do you revisit the trend analysis to verify that the projects you have forecasted for your pipeline are still worthwhile?

In the “old days” of batch reporting based on transactional data, business decision makers decided how often they needed to revisit short-, mid-range, and long-term trends. They did this by organizing their batch reports into daily, weekly, monthly, quarterly, and annual reporting cycles, depending on how long the underlying trend could be considered current. The job of classifying trend “life cycles” in batch reports was considerably easier than what it is shaping up to be today with big data analytics.
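The old batch-reporting discipline can be sketched as a simple rule: assign each trend the longest standard reporting cycle that still fits within its estimated shelf life. This is a minimal illustrative sketch — the cycle names mirror the article’s daily-through-annual cadences, but the thresholds and function are hypothetical, not an established method.

```python
from datetime import timedelta

# Illustrative mapping from cycle length to the classic batch-reporting
# cadences; the threshold values are assumptions for the sketch.
REVIEW_CYCLES = [
    (timedelta(days=1), "daily"),
    (timedelta(weeks=1), "weekly"),
    (timedelta(days=31), "monthly"),
    (timedelta(days=92), "quarterly"),
    (timedelta(days=366), "annual"),
]

def review_cycle(shelf_life: timedelta) -> str:
    """Pick the longest standard cycle that still fits within the
    trend's estimated shelf life."""
    chosen = "daily"  # the shortest cycle is the safe default
    for cycle_length, name in REVIEW_CYCLES:
        if cycle_length <= shelf_life:
            chosen = name
    return chosen

print(review_cycle(timedelta(days=45)))   # a six-week trend gets a monthly review
print(review_cycle(timedelta(days=400)))  # a multi-year trend gets an annual review
```

The point of the sketch is the simplicity: with one-year horizons, classification was a lookup. The long-horizon trends discussed below resist this kind of fixed schedule.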

For one thing, the outermost trend horizon for most batch reporting life cycles was one year. But if you’re a city planner or a large-scale home and apartment developer, you’d likely want to use your big data and analytics to look out over the next 50 to 100 years. You might want to know: What happens if an unexpected influx of a new, young population occurs that your analytics haven’t accounted for? What kinds of homes will future populations want? What needs will your population demographics — young, middle-aged, and elderly — likely present 50 years from now?

These are the trend adjustments and corrections that statisticians and data engineers in academia have been grappling with for decades. As industry takes on the same validation of its long-term big data analytics, it has the additional need to ensure that the data scientists working on its analytics also have practical business experience that can factor into decisions about which trend information to discard and which to retain.

Accordingly, a vital step for most companies using long-term trends information garnered from analytics will be to determine how long they can expect the analytics for each trend to remain accurate. Most organizations have not yet moved forward into long-term trends usage and decision-making, but as more do, they will also need to apply their own analytics to the longevity expectations for the trends data they capture. It is determinations like this that sit on the doorstep of the next frontier of big data analytics.
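One way to make a trend’s longevity expectation operational is to keep testing its projections against newly observed data and flag the trend for re-analysis once the error drifts past a tolerance. The sketch below is a minimal, hypothetical illustration of that idea — the function, the data, and the tolerance are assumptions, not a prescribed technique.

```python
def trend_still_valid(predict, observations, tolerance):
    """predict: callable mapping a time step to the trend's forecast.
    observations: list of (time_step, observed_value) pairs gathered
    after the trend analysis was done.
    Returns True while the mean absolute error stays within tolerance,
    i.e., while the stored trend can still be considered current."""
    if not observations:
        return True  # no new data has contradicted the trend yet
    total_error = sum(abs(predict(t) - value) for t, value in observations)
    return total_error / len(observations) <= tolerance

# Example: suppose last year's analysis fit a linear trend, value = 2*t + 10.
trend = lambda t: 2 * t + 10
fresh_data = [(1, 12.5), (2, 14.2), (3, 15.9)]
print(trend_still_valid(trend, fresh_data, tolerance=0.5))  # True: errors 0.5, 0.2, 0.1
```

A failed check would not say what the new trend is — only that the old one has outlived its shelf life and the analysis should be revisited, which is exactly the determination described above.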