April 23, 1985 is a day that “will live in marketing infamy,” according to the Coca-Cola Company. At the time, Coca-Cola’s marketing research indicated that consumer tastes in cola were changing, so the company decided to change its 99-year-old “classic” Coke formula. According to Coca-Cola, what research didn’t show was “the bond consumers felt with their Coca-Cola — something they didn’t want anyone, including The Coca-Cola Company, tampering with.” The end result was a return to classic Coke by July 1985.

Big data initiatives are not too far removed from the New Coke vs. Classic Coke story. In fact, it is worth taking another look at some classic IT methodologies and repurposing them for today's big data situations.

1: Data retention

Dating back to the 1970s, data retention was an IT codeword for sitting down with different end user areas throughout the company and determining when data could be trashed. In some cases, IT monitored data for last access and set a time limit on how long it would be retained in active data repositories before it would be purged or archived.

Today’s big data is also in serious need of a viable retention strategy within every company that uses it. Until one is in place, companies are at a loss as to how best to manage this burgeoning mountain of data.
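A last-access retention review like the one described above can be sketched in a few lines. This is a minimal illustration, not a production archiver: the retention window, the directory layout, and the decision to key off file access time are all assumptions for the example.

```python
import os
import time

RETENTION_DAYS = 365  # assumed policy window; in practice, set per data class


def review_for_retention(active_dir, now=None):
    """Split files in active_dir into (keep, archive) lists by last access.

    Files whose last-access time falls outside the retention window are
    candidates for archiving or purging, mirroring the classic IT practice
    of monitoring data for last access.
    """
    now = now or time.time()
    cutoff = now - RETENTION_DAYS * 86400
    keep, archive = [], []
    for name in os.listdir(active_dir):
        path = os.path.join(active_dir, name)
        last_access = os.stat(path).st_atime
        (keep if last_access >= cutoff else archive).append(path)
    return keep, archive
```

A real implementation would also consult the retention rules agreed on with each end-user area, since "never accessed recently" and "safe to purge" are not always the same thing.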

2: Batch reports

The majority of big data is run through Hadoop and other big data engines that rely on parallel batch processing. When the analytics on this data are finally complete, they appear in the same form they did 30 years ago: batch reports.

In the classic IT days, IT periodically went through its batch report catalogue and eliminated or archived the reports that were seldom used. Companies don’t have that kind of history yet in most big data batch reporting, but they will reach a point where big data jobs and reporting should be periodically reviewed — with decisions made on which analytics are germane to company business, and which should be retired.
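The periodic review described above amounts to scanning a job history and flagging reports that haven't run in a while. A rough sketch, assuming a hypothetical mapping of report names to last-run dates (harvested, say, from a scheduler's job history) and an arbitrary 180-day review threshold:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=180)  # assumed review threshold


def retirement_candidates(last_run_by_report, today=None):
    """Flag batch reports not run within the review window.

    last_run_by_report: dict mapping report name -> date of last run.
    Returns a sorted list of reports to review for retirement.
    """
    today = today or date.today()
    return sorted(
        name for name, last_run in last_run_by_report.items()
        if today - last_run > STALE_AFTER
    )
```

The flagged list is a starting point for the human decision the section calls for: which analytics are still germane to the business, and which should be retired or archived.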

3: Data validation

Some of us remember the data validation routines that were written into applications to verify that data was “clean” and in the correct format before it was committed to a transactional database. Big data is no different.

If personnel in a call center are being rewarded based upon how many callers they can handle in an hour, there will be little incentive for them to accurately or completely fill out customer records in a CRM system. Big data harvests from these types of systems should use automated cleaning processes to help correct the broken data.
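A cleaning pass over records like these typically checks required fields and normalizes formats before the data enters the analytics store. The sketch below assumes a simplified CRM record with three required fields and a North American 10-digit phone format; both are illustrative, not a real schema.

```python
import re

REQUIRED_FIELDS = ("customer_id", "phone", "issue")  # assumed CRM schema


def validate_record(record):
    """Return (clean_record, problems) for one raw CRM record.

    Mirrors classic pre-commit validation: flag incomplete records and
    normalize fields into a consistent format.
    """
    problems = []
    clean = dict(record)
    for field in REQUIRED_FIELDS:
        if not str(record.get(field, "")).strip():
            problems.append(f"missing {field}")
    # Normalize phone to digits only; flag anything that isn't 10 digits
    # (assumed North American format for this example).
    digits = re.sub(r"\D", "", str(record.get("phone", "")))
    if len(digits) == 10:
        clean["phone"] = digits
    else:
        problems.append("bad phone")
    return clean, problems
```

In a big data pipeline, records with a non-empty problem list might be routed to a repair queue rather than silently dropped, so the harvested data stays both clean and complete.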

4: Usability

Dashboards, reports, and drill-down analytics tools should be designed around the people who use them, not around the engineers and programmers who build them. This human factors challenge is the same today as it was when online screens and batch reports were designed years ago. The difference is that today there are many more tools that business analysts and end users can use to put the data into the formats most useful to them.

5: User engagement

The beauty of most big data projects is that business and IT are working more closely together than ever before to get the data and the analytics right. In past IT eras, there wasn’t always the same cooperation, which caused costly communication breakdowns and waste; it was easy to miss the finer points of user comfort zones and usability, just as brand (and taste) loyalty was missed in the New Coke example.

Engaging users is still an area IT must work on with big data, but greater collaboration and a deeper understanding of user wants and needs are creating opportunities to make the “new” ways work and to leave behind the classic ways that didn’t.