Logs are often the foundation of metrics and observability infrastructure because they contain business-level statistics that help you and your team make decisions. Without them, it’s impossible to know how often users are hitting errors or how the latencies in your services are varying over time.
Yet not all logs are used in real time. Some are only needed during investigations and troubleshooting. Like a rainy-day fund, you don’t know when you’re going to need them, but you’ll be glad they’re there when you do. When the time comes to dig into your log data, you need to be confident all of it was captured, you know where it lives, and you can efficiently search through what are often large volumes of log messages. Nothing supports these needs better than a cloud-based log aggregator.
1. Trace Events Through Multiple Services
In the past, developers and admins had to SSH into individual servers to inspect log files. As the size and complexity of infrastructure have grown, this approach has become infeasible. Instead, log files need to be pooled in a central location so the data can be viewed holistically.
Aggregating logs from all your services in a central place is crucial for analyzing the behavior of your systems at scale. For example, when a user visits your website, their request for a single webpage may touch multiple services, such as authentication and database back ends. Tracing the request means you need to connect multiple events to understand how your software generated a response, and cloud-based log aggregators are a great way to do this.
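As a hypothetical illustration (real field names and formats will differ), a single page load might leave entries like these in two different services, tied together by a shared request ID:
auth-service: request_id=7f3a user=alice action=login status=ok
billing-db: request_id=7f3a query=get_invoices duration_ms=412
With both streams flowing into the same aggregator, a single search for request_id=7f3a returns the whole story of that request in one view.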
2. Powerful Search
Whether you’re displaying real-time statistics pulled from your logs or performing a deep dive to better understand your software stack, fast searching and filtering are essential in any log management tool. Cloud-based log aggregators provide familiar query syntaxes to cut through large volumes of log data and find the log messages you need.
Unlike traditional tools such as awk and grep, which require complex regular expressions, many log aggregators support Boolean operators such as AND and OR, a minus sign (-) for negation, and parentheses for grouping, allowing you to build complicated expressions from smaller pieces. These syntaxes let you run a single search across all your log data at once. And because the service indexes your logs as they arrive, those searches stay fast even as the volume grows.
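Exact syntax varies from one aggregator to the next, but a query built from those pieces might look like the following hypothetical example, which matches messages mentioning an error or a timeout in a checkout service while filtering out routine health checks:
(error OR timeout) AND checkout -"health check"
The parentheses group the alternatives, AND narrows the results to messages that also mention checkout, and the minus sign drops any message containing the quoted phrase.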
3. New Formats and Parsers Added Automatically
Software in the cloud can be updated automatically: you keep using it without interruption while it improves underneath your feet. New features and performance improvements arrive without you having to do anything, and in the specific case of log aggregators, new file formats are supported as they gain popularity. Developers work with only a small handful of formats day in and day out, but many more appear across all the tools you might touch in a month.
The number of file formats a log aggregator supports determines how easy it is to search through those logs, because each format needs its own parser. To let you query specific fields in a format, the aggregator needs a parser that knows how to map your query string onto the fields of each log message.
For example, some cloud-based log aggregators allow you to search using log-format-specific fields such as IPv4 address, program name, and hostname.
Here’s an example of this feature in practice. You can search through the logs generated by a specific program, or list of programs, using a program: attribute. To find all logs from the httpd and sshd programs, you can use this query:
program:(httpd sshd)
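Attribute searches can also be combined with the operators described earlier. In this hypothetical example (the host: attribute and the hostname are assumptions, and attribute names vary between aggregators), the query finds sshd messages from a specific host while excluding routine disconnect notices:
host:appserver program:sshd -"Connection closed"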
4. Sudden Spikes in Volumes of Log Data
It’s easy for the size of your log files to quickly grow out of control when you’re running internet-scale apps and services, especially if there’s an outage or service event. Storing your log data in the cloud allows the storage capacity to seamlessly scale as your demands increase. Many organizations and teams need to keep log data around for years to comply with audit requirements for their industry. Cloud-based log aggregators make this easy to do because you can keep all your data in the same place and accessing it won’t slow down even when you keep adding more.
Of course, storing your data in the cloud isn’t without risks, and you need to know your data is safe. Many cloud-based log aggregators provide encryption at rest, protecting your data with standards such as AES-256, the cipher used by Amazon S3. For added security, you can also transmit your logs over the Transport Layer Security (TLS) cryptographic protocol, protecting them in transit from eavesdroppers.
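Here is a minimal sketch of what that might look like with rsyslog, a common way to ship syslog data to an aggregator over TLS. The destination hostname, port, and CA bundle path are placeholders you would replace with the values your provider gives you:
# /etc/rsyslog.conf (sketch): forward all messages to the aggregator over TLS
$DefaultNetstreamDriverCAFile /etc/ssl/certs/aggregator-ca.pem
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer logs.example.com
*.* @@logs.example.com:6514
The double @@ tells rsyslog to send over TCP rather than UDP, which TLS requires.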
5. Lock Down Log File Access
Once your log data is conveniently aggregated in one place, it’s time to decide who should have access to which log files. Access control can be configured centrally on the log aggregation server, so you can give the dev teams access to crash reports while keeping the ops teams’ logs separate. Permission to purge old log files is also something you can lock down, so only trusted members of the ops team can delete them.
Some log aggregators also enable you to create log groups, which allow you to group related log files, making access control easier for large teams. For example, you may tag all log files coming from a sender with the hostname “appserver” as “production” and all logs from “testserver” as “staging,” then grant access to individual teams based on whether they’re responsible for keeping the staging or production servers running smoothly.
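As a rough sketch (the exact configuration depends on your aggregator, and the group names and patterns here are hypothetical), the grouping rules might look like this:
group "production": senders with hostnames matching appserver*
group "staging": senders with hostnames matching testserver*
Teams are then granted read access per group rather than per individual log file, which scales much better as the number of senders grows.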
Conclusion
Logs are the lifeblood of modern software organizations and are used for real-time analysis and reporting, investigations, and troubleshooting. Maintaining easy access to those logs is vital for every team, and cloud-based log aggregators provide a cost-effective way to keep your log data around while tightly controlling which team members get to access individual log files.
SolarWinds® Papertrail™, a popular cloud-based log aggregator, offers a powerful yet simple search syntax that allows you to search all your logs from a central interface, see events in context, and pinpoint issues. Papertrail also offers a live tail feature, which works much like running tail -f on a log file: it lets you view new events as soon as they’re received, which is particularly useful for real-time troubleshooting. Log aggregators make it easy to find the information you need no matter what format your logs use. Whether you’re watching them in real time with live tail or digging through them during troubleshooting, the five reasons in this article make it a no-brainer to aggregate your logs in the cloud.
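For comparison, tailing a single local file might look like this (the path is just an example):
tail -f /var/log/nginx/access.log
Live tail gives you the same continuously updating stream, but aggregated across every host and service sending logs, and filterable with the search syntax described above.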