For all the hoopla surrounding Hadoop, the reality is that many organizations are still in the early stages of big data adoption.

In September 2014, Frank Buytendijk, research vice president at Gartner, said of Hadoop: “Adoption is still at the early stages with less than eight percent of all respondents (to a Gartner survey at that time) indicating their organization has deployed big data solutions.”

Survey data from Gartner in 2015 continued to support this picture of early adoption, specifically as it related to Hadoop. In the 2015 survey, only 26% of respondents said they were deploying, piloting, or experimenting with Hadoop, and another 11% said that they planned to invest in Hadoop in the next 12 months.

The slower than expected adoption of Hadoop for analytics can be attributed to several factors:

  • It requires a data scientist’s skillset to get Hadoop analytics running;
  • Hadoop access is limited to a small set of users;
  • Corporate executives have a hard time seeing the business benefits of Hadoop analytics, which remain a highly abstract concept; and
  • Businesses are having a difficult time translating outputs from Hadoop analytics to tangible business results.

“Hadoop was not originally designed for speed or a high level of security,” said Bruno Aziza, chief marketing officer of AtScale, which serves the business intelligence space. The platform also wasn’t originally designed with ease of use for end users in mind.

Aziza and AtScale promote the use of an online analytical processing (OLAP) cube that sits on top of Hadoop and acts as a logical and more user-friendly way of providing access to data. The OLAP cube does this by storing Hadoop data in a multidimensional form that can serve a variety of reporting needs, and that is much more amenable to analytics reporting than data access structures like data marts or data warehouses.
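As a rough illustration of the multidimensional idea (a minimal sketch, not AtScale’s actual implementation), a cube can be thought of as pre-aggregated totals over every combination of dimension values, so that a report query becomes a simple lookup rather than a scan of raw Hadoop data. The sample records and dimension names below are hypothetical:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical flat records, standing in for rows extracted from Hadoop.
records = [
    {"region": "East", "product": "A", "quarter": "Q1", "sales": 100},
    {"region": "East", "product": "B", "quarter": "Q1", "sales": 150},
    {"region": "West", "product": "A", "quarter": "Q1", "sales": 200},
    {"region": "West", "product": "B", "quarter": "Q2", "sales": 250},
]

DIMENSIONS = ("region", "product", "quarter")

def build_cube(rows, dimensions):
    """Pre-aggregate the 'sales' measure over every subset of the
    dimensions -- the same idea as SQL's GROUP BY CUBE."""
    cube = defaultdict(int)
    for row in rows:
        for r in range(len(dimensions) + 1):
            for dims in combinations(dimensions, r):
                key = tuple((d, row[d]) for d in dims)
                cube[key] += row["sales"]
    return cube

cube = build_cube(records, DIMENSIONS)

# Point queries against the pre-aggregated cube are simple lookups:
print(cube[(("region", "East"),)])                   # total East sales -> 250
print(cube[(("region", "West"), ("product", "A"))])  # -> 200
print(cube[()])                                      # grand total -> 700
```

The trade-off this sketch makes visible is the classic OLAP one: aggregation work is paid once up front, after which slicing by any dimension combination is fast enough for interactive reporting tools.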

“The OLAP cube places a semantic layer on top of Hadoop that provides additional security as well as speed of access,” said Aziza. “It’s a tool that can work well with established analytics reporting software like Tableau and Excel, and it renders access to data easier for end users.”

The hope is that more end users will discover that they no longer have to be data scientists, which in turn will facilitate more active use of big data reporting capabilities to solve business problems. As ease of use improves, business users will be able to use analytics reporting in an on-demand, self-service mode.

“Self-service is an important differentiator,” noted Aziza. “Sixty percent of the enterprise respondents that we recently surveyed for a big data study told us that their organizations had not yet achieved self-service access to big data. For those who do have self-service, there is a 50 percent greater likelihood that they will get business value out of big data.”

Aziza also said that the chances of an organization reaping business benefits from its big data and analytics increased by another 10% when a major C-level executive, such as the CEO, CMO, CIO, or CDO (chief data officer), stepped up to champion the cause of big data and analytics. From AtScale’s research, over half (52%) of this championing of big data applications was likely to come from marketing/sales and revenue-generating goals, with another 35% from operations, and 13% from HR and other company areas. Building participation and enthusiasm around big data (and big data investments) was largely driven by ease of access to data.

“Over the last ten years, companies have spent their money on dashboards, scorecards, and other types of big data and analytics reporting, and on Hadoop itself,” said Aziza. “It’s now time that they look at a semantic data infrastructure that can bring the data from Hadoop into this assortment of end reporting software cleanly and rapidly.”