Maintaining system logs is a key responsibility for system administrators tasked with keeping devices up and running with as little downtime as possible. Properly tuned logs can monitor for impending problems (for instance hard drives anticipated to fail or quickly filling up) and alert appropriate staff so they can address minor issues before they become major emergencies.
I’ve used Splunk for several years because it offers the features I consider essential to appropriate and efficient log management. It offers a centralized web-based interface on a server to which all systems send their respective log files and can store data indefinitely, providing useful graphs to outline system trends and issues.
SEE: Gartner’s top tech predictions for 2021 (free PDF) (TechRepublic)
Centralization is a key component of log management success, since configuring individual systems to send alerts is tedious and time-consuming, not to mention prone to failure. Splunk provides a powerful array of alerts… sometimes a little too powerful. If alert throttling is not configured, system administrators can be bombarded with repetitive notifications, which creates the temptation to disable alerts altogether, an ill-advised move.
It’s also important to quantify alerts based on urgency and direct them where appropriate; for instance, to an on-call staffer for critical-level notifications and to an email distribution list for minor-level notifications.
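In Splunk, both the throttling and the severity-based routing described above can be configured per alert in savedsearches.conf. The stanza below is an illustrative sketch only; the search string, index, schedule, and email addresses are placeholders, not settings from any real deployment:

```ini
# Illustrative savedsearches.conf stanza -- the search, index, and
# recipient values are placeholders for demonstration purposes.
[Disk Nearly Full]
search = index=os sourcetype=df storage_used_pct > 90
cron_schedule = */15 * * * *
alert.severity = 4

# Throttle: suppress repeat firings for the same host for 4 hours
alert.suppress = 1
alert.suppress.period = 4h
alert.suppress.fields = host

# Route this critical-level alert to the on-call address; a minor-level
# alert would point action.email.to at a distribution list instead.
action.email = 1
action.email.to = oncall@example.com
```

Defining a separate stanza per urgency level keeps the routing explicit, so critical notifications page the on-call staffer while minor ones land in a mailbox.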
The skills required for effective responses to log entries are complex yet pay vast dividends. I spoke to Ariel Assaraf, CEO of Coralogix, about the topic.
Scott Matteson: What are some of the challenges involved with log management and analytics today?
Ariel Assaraf: One of the biggest is that even though there are many log analytics tools available in the marketplace, most customers are not satisfied with their current solutions. Logs are an integral part of any software, and logging challenges change and evolve quickly.
When it comes to achieving a high level of observability with their log analytics solutions, most companies struggle with balancing between cost and coverage. Because traditional solutions essentially charge a flat rate for data ingestion and storage, teams are forced to choose between paying unreasonably high costs for data indexing and storage or simply not collecting certain parts of their data.
In the end, every company using these traditional solutions faces coverage gaps and suffers from a lack of observability around how their systems are operating.
Scott Matteson: Why are current solutions not addressing these gaps? Why are emerging platforms like yours entering the market alongside established players like Sumo Logic and Splunk?
SEE: The future of work: Tools and strategies for the digital workplace (free PDF) (TechRepublic)
Ariel Assaraf: It’s important to take a step back and look at how things started. Originally, log analytics tools were launched simply to help companies centralize all of their logs. The response from the marketplace was overwhelmingly positive.
Fast forward 15 years, and companies are now working with cloud-native applications, distributed systems and accelerated workflows. All of these advancements have created additional challenges when it comes to managing log data. Again, a main issue is that these traditional tools are known for being expensive and difficult to analyze data with.
This has all driven a growing movement of companies seeking machine learning-powered solutions to enable them to manage and derive value from their massive pools of log data without breaking the bank. We’ve carved out a leadership position in the marketplace by offering a comprehensive solution to this challenge.
Scott Matteson: Why is DevOps turning to hybrid solutions that incorporate open source? What are the advantages?
Ariel Assaraf: Mainly because for companies that want to grow and scale quickly, proprietary tools aren't an option. Those same companies want the ability to adapt to new technologies easily and avoid being locked into one vendor. We decided to build our capabilities around open source tools because it gives our customers full visibility into what they are getting from us. It also allows them to compare what we offer against the free alternatives available. We've also found that this approach has shortened Coralogix's learning curve and allowed our customers to share data pulled from Coralogix across their complete stack.
Scott Matteson: What investments should companies be making in log management and analytics?
SEE: Kubernetes: A cheat sheet (free PDF) (TechRepublic)
Ariel Assaraf: One of the biggest concerns we are seeing is the lack of standardization and the loss of control of data sources and volumes. Adopting a platform that can receive data from multiple components and standardize it based on each company’s policy can really help companies scale their logging and monitoring.
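The standardization Assaraf describes can be sketched in a few lines: each data source keeps its own field names, and a mapping layer normalizes records onto one shared schema before they are shipped downstream. This is a minimal illustration, assuming hypothetical source formats ("nginx" and "app") and an invented target schema; it is not Coralogix's actual pipeline:

```python
# Minimal sketch of schema standardization across log sources.
# The source formats and target field names here are hypothetical.
SOURCE_MAPPINGS = {
    "nginx": {"ts": "time_local", "level": "severity", "msg": "request"},
    "app":   {"ts": "timestamp",  "level": "level",    "msg": "message"},
}

def normalize(record: dict, source: str) -> dict:
    """Map a source-specific log record onto a shared schema."""
    m = SOURCE_MAPPINGS[source]
    return {
        "timestamp": record.get(m["ts"]),
        "severity": str(record.get(m["level"], "INFO")).upper(),
        "message": record.get(m["msg"], ""),
        "source": source,
    }

# Two differently shaped records come out in one consistent shape.
normalize({"time_local": "01/Jan/2021", "severity": "error",
           "request": "GET /health"}, "nginx")
normalize({"timestamp": "2021-01-01T00:00:00Z", "level": "warn",
           "message": "disk 91% full"}, "app")
```

With every component emitting the same fields, per-company policy (retention, routing, alert thresholds) can be applied in one place rather than per source.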
In many, if not most, cases, companies are already spending huge amounts of money to ingest and index their log data. So, it’s not about additional investments, it’s about shifting those investments to get more value from them.
Our technology is the only one currently available that analyzes data before it’s indexed and stored to provide the most extensive monitoring and alerting solution in the market while simultaneously saving customers up to 70%.
Scott Matteson: How do you see the market evolving? What’s next for the space?
SEE: From start to finish: How to deploy an LDAP server (TechRepublic Premium)
Ariel Assaraf: I see the market moving in a more customer-centric direction. It’s not about offering set packages or tiered solutions anymore, it’s about going above and beyond to provide the customers with the value that they need—even if they don’t see it yet themselves.
All of the other tools in the market predate microservices and CI/CD. We built Coralogix after these developments emerged, and we designed the platform specifically to support them. One fundamental difference in our platform is the way that we enable our customers to analyze and prioritize their data before indexing, so they get insights faster and save a ton of money on licensing and storage costs.