How to monitor the apocalypse using big data DevOps

Logz.io CEO explains the advantages and potential pitfalls of DevOps and how the company used big data and AI to create the Apocalypse Index, a real-time chart forecasting the end of the world.

Image: Logz.io

Last month the Bulletin of the Atomic Scientists ticked the Doomsday Clock thirty seconds closer to the apocalypse. The clock, an index of potential global threats maintained since 1947 by a respected consortium of scientists, enumerates a number of existential threats like climate change, nuclear war, artificial intelligence, and of course, Donald Trump.

Fortunately, if the end is indeed nigh we can monitor the apocalypse using big data DevOps.

With tongue firmly planted in cheek, data management and log analysis platform Logz.io transformed its massive trove of data into an Apocalypse Index, a real-time chart that shows the volume of tweets mentioning President Trump's Twitter handle combined with phrases including "end of the world," "armageddon," "apocalypse," and "world war iii." The higher the line, said the company, the closer we are to DEFCON.
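The counting logic behind such an index can be sketched roughly as follows. This is a hypothetical illustration, not Logz.io's actual pipeline; the function name, handle, and sample tweets are all made up for the example.

```python
# Hypothetical sketch of an Apocalypse Index counter.
# Not Logz.io's real implementation; data and names are illustrative.

DOOM_PHRASES = ("end of the world", "armageddon", "apocalypse", "world war iii")

def apocalypse_count(tweets, handle="@realdonaldtrump"):
    """Count tweets mentioning the handle AND at least one doomsday phrase."""
    count = 0
    for text in tweets:
        lowered = text.lower()
        if handle in lowered and any(p in lowered for p in DOOM_PHRASES):
            count += 1
    return count

sample = [
    "@realDonaldTrump this feels like the end of the world",
    "Just watched a great movie about the apocalypse",
    "@realDonaldTrump tariffs announcement today",
]
print(apocalypse_count(sample))  # only the first tweet matches both conditions
```

Run over a live stream and bucketed by time, counts like these become the points on the chart.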

SEE: Hiring kit: Data architect (Tech Pro Research)

Though Logz.io's Apocalypse Index is a humorous application of the company's technology, DevOps big data has rapidly emerged as a powerful business technology process, said CEO Tomer Levy. "Startups specializing in DevOps are experiencing a tremendous boom in VC interest," Levy said. "Today's venture capitalists see the DevOps movement as a more mature, innovative way to develop and manufacture software that can help startups and enterprises alike become more efficient and agile."

At the heart of DevOps big data are logs. Often associated with corporate compliance and security, logs are forensic metadata of digital activity like social media posts, code commits, server messages, and network activity. A single company, Levy said, could generate millions or billions of log records each year. "When your machines produce millions of logs every day, distinguishing which messages are critical is like trying to find a needle in a haystack."

Image: Logz.io

With DevOps, metadata can be transformed into a useful product. Logz.io ingests, stores, and uses artificial intelligence to mine piles of logs from social media, surveys, voter records, census data, and other public caches, as well as private data sets. "Once you analyze this data correctly," explained a company spokesperson in a 2016 interview with TechRepublic, "you can learn so much meaningful information about how people talk about and interact with your brand, or the things you care about."

SEE: Big data policy (Tech Pro Research)

Logz.io relies on ELK, an amalgamation of tools developed in part by Elastic.co. Elasticsearch is a JSON-based search and analytics engine, Logstash collects all log data in one place, and Kibana visualizes data in context with a dashboard loaded with graphs, charts, and other visualizations. In tandem with DevOps, Logz.io addresses two major problems IT departments and developers experience in log analysis, "collecting data in a manner that is both manageable and scalable," Levy said, "and then being able to discover data that is actually relevant."
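A minimal Logstash pipeline illustrates how the three pieces connect. The file paths, index name, and host below are placeholders for illustration, not Logz.io's configuration:

```
# Hypothetical Logstash pipeline: collect app logs, parse them,
# and ship them to Elasticsearch, where Kibana can visualize them.
input {
  file { path => "/var/log/myapp/*.log" }                      # collect raw log lines
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }    # add structure
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]                                # searchable via Kibana
    index => "myapp-logs"
  }
}
```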

Levy spoke with TechRepublic about the future of DevOps and big data, his favorite open source tools, and how Logz.io uses structured data to extract important insights from metadata.

Can you explain your core technology?

[We are] a log analytics platform that is based on the ELK stack. We take ELK, a set of libraries used by approximately 500,000 companies around the world, and enable our customers to use it in a scalable, secure, and more robust offering in the cloud. We have developed an artificial intelligence engine that can scan through the log data and automatically highlight critical events.
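Logz.io has not published its engine's internals. As a hedged illustration only, one simple way to "highlight critical events" in a log stream is to collapse similar lines into templates and flag the rare ones; everything below (function names, thresholds, sample logs) is an assumption for the sketch:

```python
# Illustrative sketch only: flag "critical" log lines by template rarity.
# This is NOT Logz.io's actual AI engine, whose internals are not public.
import re
from collections import Counter

def template_of(line):
    """Collapse numbers and hex tokens so similar lines share one template."""
    return re.sub(r"\b(?:0x[0-9a-f]+|\d+)\b", "<N>", line.lower())

def rare_lines(lines, max_share=0.05):
    """Return lines whose template covers at most max_share of the stream."""
    counts = Counter(template_of(l) for l in lines)
    total = len(lines)
    return [l for l in lines if counts[template_of(l)] / total <= max_share]

logs = ["GET /health 200"] * 99 + ["OutOfMemoryError in worker 7"]
print(rare_lines(logs))  # the one-off error surfaces; the 99 health checks do not
```

Real systems layer far more on top (sequence models, correlation with deploys, crowd-sourced signals), but rarity scoring conveys the basic needle-in-a-haystack idea.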

Is DevOps sustainable?

Overall, developers can burn out when DevOps as a work process has been implemented without changes in culture to keep it consistent. By its nature, DevOps is fast-paced and requires a large amount of accountability on the part of developers, since they are responsible for their code from development to testing and sometimes even up to user support. This puts a lot of pressure on developers, especially if they come from a more traditional, waterfall background.

What are your favorite open source DevOps tools?

It's a difficult choice because there are so many great open source DevOps tools, but if I had to pick I'd say:

  • Docker: Over the past couple of years, Docker has been almost synonymous with DevOps. The container system allows you to ship your software to the cloud with everything you need for configuration, including code, runtime, system tools, and system libraries. This way, you can run everything the same, despite differences in environment.
  • ELK: The world's most popular open source log analysis platform.
  • Kubernetes: This orchestration system for Docker containers schedules containers onto nodes in a compute cluster and maintains workloads to ensure they're consistent with their declared purpose.
  • GitHub: This hugely popular source control management system is ideal for supporting distributed development. But what really sets it apart are its forking and pull request features, as well as its ability to integrate with Jenkins.
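The Docker point above, shipping code together with its runtime, tools, and libraries, can be made concrete with a minimal Dockerfile. The file names and base image here are hypothetical, shown only to illustrate the idea:

```
# Hypothetical Dockerfile: everything the app needs travels with it.
FROM python:3.11-slim                  # runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # system tools and libraries
COPY . .                               # application code
CMD ["python", "app.py"]               # runs the same in every environment
```

Building and running this image (`docker build`, `docker run`) produces the same behavior on a laptop, a CI server, or the cloud, which is the portability the list item describes.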

How can companies make the most of DevOps?

To make the most of DevOps, companies need to start small. This way, you go into the process better prepared and run less risk of burning out developers. First, you should have a sponsor who has experience transitioning to a DevOps methodology. You also need to make sure your team and management are on the same page with regard to transitioning to a DevOps process, as well as the strategies you will take to make the move. The change should first occur within new applications and new teams, so you do not break old habits on legacy apps. Lastly, make sure that you do your research and invest in the right tools. The right tools can make moving to DevOps almost seamless.

WATCH: Documentary shows information revolution of big data (CBS News)

Can you forecast the short-term future of DevOps?

Over the next 18 to 36 months I expect not only that containers will become mainstream, but that container orchestration will also become standard. We'll see Kubernetes, Mesos, and other tools gaining more production time not only within startup companies but also within more traditional organizations.

More and more, we are seeing various cloud vendors, both public and private, rise in popularity. Oracle recently moved into the picture, giving even more choice to enterprises looking for cloud solutions. While AWS will continue to reign supreme, I predict that slowly, enterprises will move to a multi-cloud environment.

Serverless is another way to run code that is expected to grow and can significantly impact DevOps processes, tools, and cost structures. In a short period of time, services including AWS Lambda, Azure Functions, and others have gained significant traction in a variety of use cases, making a large impact on the simplicity and cost of running such code.

Lastly, while DevOps popularity has steadily grown among startups, I predict it will become more mainstream for enterprises as well, as it has risen from what some thought of as a fad to a mature, proven process.

About Dan Patterson

Dan is a Senior Writer for TechRepublic. He covers cybersecurity and the intersection of technology, politics and government.
