Most organizations have a set of big data best practices they have formulated from their successful project work. An equally important list is the pitfalls that organizations should stay away from when it comes to big data and analytics. Here are six don’ts to keep in mind during your big data projects.

1: Swing for the fences

The most successful big data initiatives build a strong foundation for big data and analytics and then build on it. The best way to do this is by creating a steady stream of new big data deliverables that incrementally and continuously improve the organization's ability to tackle strategic and operational issues with richer and better data.

2: Make things unnecessarily complicated

Dashboard and spreadsheet-style data delivery that also gives end business users the ability to drill down into data and ask more questions works exceptionally well. A big reason is that users are already familiar with these types of data capture and manipulation tools.

The more at ease that users are with the tools they use to access and manipulate data, the more they will believe in and adopt big data and analytics.

SEE: Cloud Security Alliance releases top 100 big data best practices report

3: Bypass security as a project consideration

Security is one of the largest missing pieces in big data projects. These are some security questions to consider.

  • What types of security risks are in play when big data begins to be captured and moved off machines at the edge of the enterprise, or even from outside of it?
  • How secure is your internal data preparation environment, and do only authorized users have access to it?
  • For the many types of unstructured data, how do you vet this data to ensure that it is tamper-proof?
  • If you are aggregating data from third-party vendors, what levels of security and governance do they use for their data?

Because there are so many diverse sources and types of big data, securing this data is still an area that most enterprises struggle with.
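One concrete way to address the tamper-proofing question is to verify checksums of incoming data files against a manifest supplied by the data provider. The sketch below is a minimal illustration of that idea, not a complete security solution; the manifest format and function names are assumptions for the example.

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest: dict, data_dir: Path) -> list:
    """Return the names of files whose digest does not match the manifest."""
    mismatches = []
    for name, expected in manifest.items():
        if file_sha256(data_dir / name) != expected:
            mismatches.append(name)
    return mismatches
```

Any file that appears in the mismatch list has been altered (or corrupted) since the provider published its digest, and should be quarantined rather than loaded into the analytics environment.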

4: Pursue projects without end user engagement

If you don’t know the critical questions the business wants to answer with big data, you can’t deliver the solutions. Engage heavily with end users about the nagging questions in the business, and collaborate with them as you strategize how to obtain and extract information from big data.

SEE: Big data without the big headaches: How to get your strategy right (ZDNet)

5: Work with low-confidence data

If data isn’t properly cleaned and vetted for accuracy, the conclusions drawn from it can be erroneous, and acting on them can be catastrophic for your company.

Remember the New Coke initiative of the 1980s? A market research team conducted over 200,000 taste tests that showed participants preferred New Coke over both Classic Coke and Pepsi. Coke soon learned that taste preference wasn’t the only factor in consumer purchasing decisions; tradition was also a major factor.

In the end, the $4 million that had been sunk into New Coke development was wasted, as was another $30 million of New Coke syrup that sat on the shelf while Coke brought back Classic Coke.
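The "cleaned and vetted" step can start as simple rule-based checks run before data feeds any analysis. The sketch below is illustrative only; the field names and quality rules are hypothetical stand-ins for whatever applies to your data.

```python
def vet_records(records):
    """Split records into (clean, rejected) using basic quality rules.

    Rules here are illustrative: a required, non-duplicate customer ID
    and an age within a plausible range.
    """
    seen_ids = set()
    clean, rejected = [], []
    for rec in records:
        problems = []
        cid = rec.get("customer_id")
        if not cid:
            problems.append("missing customer_id")
        elif cid in seen_ids:
            problems.append("duplicate customer_id")
        age = rec.get("age")
        if age is None or not (0 <= age <= 120):
            problems.append("implausible age")
        if problems:
            rejected.append((rec, problems))
        else:
            seen_ids.add(cid)
            clean.append(rec)
    return clean, rejected
```

Keeping the rejected records alongside the reasons they failed makes it easy to report data-quality problems back to the source rather than silently dropping rows.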

SEE: Skip the New Coke approach: Recycle classic IT methods for big data

6: Limit your innovation

Big data projects, like all projects, must return value and show results. Consequently, those leading big data projects tend to focus on areas of low-hanging fruit where they know they can produce results quickly.

It’s also important to keep some experimental big data work going. Why? Because experimental big data work, free of the immediate pressure of timelines or results, has the potential to produce breakthroughs.

To pursue big data innovations, you need buy-in from the CEO and other corporate managers because these projects also have high failure rates that everyone must accept. You also need a mechanism within these R&D projects to pull the plug as soon as you see they aren’t going to produce results.