Big Data

Big Data challenges and how the cloud can help

Thoran Rodrigues explains the evolution of Big Data processing, highlights the challenges, and considers how the cloud can contribute to the Big Data boom.

For the past few years there has been a lot of buzz around the concept of Big Data, largely tied to the rise of cloud computing. Big Data is an all-encompassing term coined to describe the huge volumes of unstructured data that many businesses, and most large Internet companies, have to handle today. The key word in this definition is unstructured. Traditional databases are perfectly capable of handling large amounts of data, as long as that data first goes through a structuring process. Big Data, by contrast, has no real underlying structure: it can be tweets, Facebook status updates, web pages, free text, or any similar collection of data.

Traditional tools, such as conventional relational databases, are not well suited to handling this kind of information. The pioneers in this area were the Internet companies, which eschewed the relational database model and ended up building their own frameworks for working with Big Data, consisting of specialized data structures, NoSQL databases, distributed processing systems, and other elements. Over time these frameworks were consolidated, and many of them are freely available today. They rely heavily on parallel processing, and this is where virtualization, and cloud computing in particular, comes in to help.

In the beginning...

Google was perhaps the first company (and certainly the first web company) to make effective use of Big Data. By collecting and analyzing massive collections of web pages and of the relationships between them (the links), it was able to create the first truly universal search engine, capable of querying and indexing billions of pages without human intervention. Instead of relying on traditional technology - relational databases - Google's engineers built a massively distributed system, and they kept it low-cost by running on off-the-shelf hardware. In doing so, they shaped several of today's large-scale computing building blocks and popularized the MapReduce programming model, of which Hadoop is the best-known open-source implementation (interestingly enough, Yahoo, not Google, was one of the biggest contributors to that project).
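To make the idea concrete, here is a minimal, single-process sketch of the MapReduce programming model, counting word occurrences across a small set of pages. The names (map_phase, reduce_phase, run_job) are invented for this illustration; a real framework such as Hadoop runs the map and reduce phases in parallel across many machines and also handles data distribution, the shuffle, and fault tolerance.

    # A single-process illustration of the MapReduce model: word counts over
    # a handful of "documents". Real frameworks parallelize these steps
    # across a cluster instead of running them in one loop.
    from collections import defaultdict

    def map_phase(doc_id, text):
        # Map: emit a (key, value) pair for every word in a document.
        for word in text.lower().split():
            yield word, 1

    def reduce_phase(word, counts):
        # Reduce: combine all values emitted for the same key.
        return word, sum(counts)

    def run_job(documents):
        grouped = defaultdict(list)          # the "shuffle": group values by key
        for doc_id, text in documents.items():
            for word, count in map_phase(doc_id, text):
                grouped[word].append(count)
        return dict(reduce_phase(w, c) for w, c in grouped.items())

    if __name__ == "__main__":
        docs = {"page1": "big data in the cloud", "page2": "the cloud and big data"}
        print(run_job(docs))   # e.g. {'big': 2, 'data': 2, 'the': 2, ...}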

Hadoop, in turn, has been largely responsible for the Big Data boom. It is an open-source implementation of the MapReduce framework that lets anyone perform large-scale distributed computing tasks on any cluster of computers, or even on a single machine. Before the rise of cloud IaaS providers, having a cluster meant a large hardware investment, even if it was built from commodity parts. Today, anyone can quickly spin up a cluster with as many virtual nodes as needed at any provider, keep it alive for as long as there are tasks to run, and spin the nodes down when they are no longer needed. In fact, several of the largest IaaS providers now offer pre-configured Hadoop clusters, so users don't even have to worry about configuration.
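As a rough sketch of how user code plugs into such a cluster, Hadoop's Streaming interface runs ordinary scripts that read records from standard input and write tab-separated key/value lines to standard output, so the same word-count job can run unchanged on a laptop or on a fleet of virtual nodes. The two scripts below are illustrative only; file names, input paths, and the exact way the job is submitted vary with the Hadoop version and the provider's setup.

    #!/usr/bin/env python3
    # mapper.py (illustrative): Hadoop Streaming pipes each input split to this
    # script on stdin and collects the (word, 1) pairs it prints to stdout.
    import sys

    for line in sys.stdin:
        for word in line.strip().lower().split():
            print(f"{word}\t1")

    #!/usr/bin/env python3
    # reducer.py (illustrative): Hadoop sorts the mapper output by key, so all
    # counts for a given word arrive on consecutive lines and can be summed.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

Outside a cluster, the same pair of scripts can be tested with a plain shell pipeline (cat pages.txt | ./mapper.py | sort | ./reducer.py), which is a convenient way to debug the logic before paying for any virtual nodes.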

As massively parallel architectures become more and more commonplace - both in individual computers, with multiple cores and processors, and in distributed systems that run across many servers - it becomes possible to tackle much more complex tasks. Companies of all sizes now have access to the computing power needed to process far more data. This creates a positive feedback loop: as end users see that they can process more data, they demand that more data be collected, which in turn creates a greater need for data processing, and so on.

Big challenges

Big Data, however, comes with challenges of its own. First, the algorithms needed to process the data can be far more complex than those we are used to, which in turn makes it harder for end users to customize these systems to fit their needs. Big Data analysis is also a difficult task in itself. A key issue with having so much information is that, as volumes grow, the signal-to-noise ratio worsens: if you could listen to every conversation going on in the world at any given moment, how would you choose which ones were meaningful?

Another issue that Big Data proponents often overlook is that correlation does not mean causation. In fact, the more information is processed, the greater the chance of finding spurious correlations that have no real meaning, or at least no causal relationship. If we looked at every possible variable, we would probably "discover" that most people who get sick have worn clothes at some point in their lives, which does not mean their sickness is caused by wearing clothes.
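The effect is easy to reproduce. The sketch below, with entirely made-up variable counts and thresholds, generates a few hundred completely independent random "metrics" and then checks every pair: even though no real relationship exists, a fair number of pairs cross a "strong correlation" threshold purely by chance, and the count only grows as more variables are added.

    # Illustration only: with enough unrelated variables, some pairs will look
    # correlated by pure chance (the multiple-comparisons problem).
    import random

    def pearson(xs, ys):
        # Plain Pearson correlation coefficient for two equal-length samples.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(42)
    n_rows, n_vars = 50, 200   # 200 independent random "metrics", 50 observations each
    data = [[random.gauss(0, 1) for _ in range(n_rows)] for _ in range(n_vars)]

    spurious, pairs = 0, 0
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            pairs += 1
            if abs(pearson(data[i], data[j])) > 0.4:
                spurious += 1

    print(f"{spurious} of {pairs} pairs of unrelated variables look 'strongly' correlated")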

Finally, the traditional cloud computing challenges must also be considered. If you are running a massively distributed process, how secure are your computing nodes? What would happen if one of them were compromised? In one sense you would be better off than with a centralized system, because only part of the data would be exposed; at the same time, there are that many more servers to worry about.

In spite of these challenges, Big Data has the potential to change the way many businesses work, and the cloud offers an excellent low-cost way for anyone to try out the processing frameworks and algorithms. The first thing to consider is whether you actually have a Big Data problem: can your business benefit from Big Data, and do your data volumes really require these distributed mechanisms? If they do, the cloud is your best bet as a processing platform.

About

After working for a database company for 8 years, Thoran Rodrigues took the opportunity to open a cloud services company. For two years his company has been providing services for several of the largest e-commerce companies in Brazil.

2 comments
ittechexec

Don't laugh or snort derisively. I know, telecom and call centers are the "low end" of the tech field. But telecoms and call centers have been operating in the cloud for more than a decade, and the issue of Big Data, or unstructured data, is nothing new. There are a ton of call center technology providers that have been delivering "actionable intelligence" from unstructured voice and other data for years. Why not look to a segment that has faced a lot of these problems already? I'm not saying they have handled them all perfectly, but there is definitely a road map that can be followed.

thoran.rodrigues

The Big Data issue is indeed nothing new, and we can certainly look to people who have already worked on it for lessons learned and ways of dealing with it. However, if we look at how such companies have worked in the past, they have mostly focused on taking the unstructured data and structuring it in order to work with it. The "new" idea with Big Data is that companies should work directly with the unstructured information, and this is something conventional technology is not well suited to. Instead of taking the raw data, structuring it, and then working with it, we would store only the unstructured information and process it as needed.