Make Your Logs Work for You

The days of logging in to servers and manually viewing log files are over. SolarWinds® Papertrail™ aggregates logs from applications, devices, and platforms to a central location.


Tips from the Team

A Guide to Log Filtering: Tips for IT Pros


Last updated: September 2024

As an IT professional, you'll find log messages are one way to catch errors and solve problems. But as helpful as log messages are, they can also be confusing: a server generates a lot of messages, including many you don't need to see. Instead of making your life easier, unnecessary logged messages can make things harder for you.

In this post, I'll show how you can filter log messages and make them useful for solving your application and infrastructure problems. Errors happen to every developer, newbie and senior alike, and in any development environment. If you want to speed up the way you use log messages to solve errors, stick around.

What Is a Log Message?

Almost every developer or IT professional can answer the question “What is a log message?” Here’s a brief definition of a log from Wikipedia:

In computing, a log file is a file that records either events that occur in an operating system or other software runs, or messages between different users of communication software. Logging is the act of keeping a log. In the simplest case, messages are written to a single log file.

A log file doesn't contain a single message but many. Because of this, you're likely to see unwanted log messages in a log file. To avoid this, you'll need a way to filter the logs so you see only what's useful for your need, which is, of course, to solve a problem or keep a record of something.


What Is Log Filtering?

Log filtering is the process of selecting relevant log data from a log file based on specific criteria. You can define your filtering criteria based on various log attributes:

  • Log level: filter logs based on severity and importance.
  • Context: include metadata like user IDs, session IDs, or request IDs to help categorize log information.
  • Message content: use regular expressions or keyword matching to filter logs based on the content of the log messages.
  • Timestamp: when you want log information from a specific period, filter the logs based on their timestamps or a time range.
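As a rough sketch, these criteria can be combined in a few lines of Python. The log format and the sample lines below are made up for illustration, not taken from any real system:

```python
from datetime import datetime

# Hypothetical log lines in a "date time LEVEL message" format
lines = [
    "2024-09-01 10:00:01 INFO session=abc42 user logged in",
    "2024-09-01 10:00:02 DEBUG session=abc42 cache warm",
    "2024-09-01 10:05:30 ERROR session=abc42 payment failed",
    "2024-09-01 11:00:00 ERROR session=zzz99 disk full",
]

def matches(line, level=None, keyword=None, after=None):
    """Return True if the line passes every criterion that was given."""
    date, time, lvl = line.split()[:3]
    ts = datetime.fromisoformat(f"{date} {time}")
    if level is not None and lvl != level:
        return False
    if keyword is not None and keyword not in line:
        return False
    if after is not None and ts < after:
        return False
    return True

# ERROR-level messages logged after 10:30
hits = [l for l in lines if matches(l, level="ERROR",
                                    after=datetime(2024, 9, 1, 10, 30))]
print(hits)
```

In a real pipeline you'd read the lines from a file and probably parse the timestamp format your logger actually emits, but the shape of the filter stays the same.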

Log filtering can happen at different stages. You can filter at the application level using built-in log filtering. You can also filter logs in transport or when they're ingested by a centralized logging tool. Once a tool or service ingests the logs, you can leverage the filtering capabilities it offers.

Why Do We Need Log Filtering?

We need log filtering to manage logs, which helps reduce log volume. Here’s a list of other reasons why we need log filtering:

Noise Reduction

The sheer volume of log messages can become overwhelming without proper log filtering, making it difficult to identify and analyze the log data you need. Log filtering allows you to reduce the noise and focus on the logs relevant to your needs.

Readability and Clarity

Logs may include irrelevant or redundant information. By filtering out unnecessary logs or focusing on specific log levels, components, or contexts, you can improve the readability and clarity of your log output. As a result, you can easily identify and analyze relevant events.

Compliance

Log filtering helps ensure that you retain only the necessary logs. This makes compliance easier and audits more manageable. For example, to comply with GDPR, you might need to filter and store logs that contain user consent records or data access logs.

Optimized Performance

Processing and storing large log volumes can be resource-intensive. However, filtering logs reduces the amount of data to be processed, stored, and analyzed, leading to improved performance and reduced costs.

Efficient Debugging and Troubleshooting

Log filtering can help you quickly isolate and focus on the log messages related to a specific context, user, request, or component. This targeted filtering can significantly accelerate debugging and troubleshooting by reducing the noise and highlighting the relevant information.
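To make that concrete, here's a minimal sketch using Python's standard logging module: a Filter that passes only records tagged with the request you're debugging. The request_id attribute and its values are hypothetical, not a built-in logging field:

```python
import logging

class RequestFilter(logging.Filter):
    """Pass only records tagged with the request we're debugging."""
    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id

    def filter(self, record):
        # The attribute is set via the `extra` argument on logging calls
        return getattr(record, "request_id", None) == self.request_id

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RequestFilter("req-1234"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Only the first message passes the filter and reaches the handler.
logger.info("payment accepted", extra={"request_id": "req-1234"})
logger.info("cache miss", extra={"request_id": "req-9999"})
```

Attaching the filter to a handler rather than the logger means other handlers can still receive the full stream.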

How Can You Filter a Log File and Find Relevant Log Messages?

Most of us spend too much time searching through logs, hunting for information we can use to troubleshoot a problem. Log messages can be long and hard to understand, and messages that are unhelpful for troubleshooting still get logged. Here's a list of ways you can filter a log file to focus on the messages that matter.

Using a Grep Command

Running a grep command lets you search for a keyword that leads you to the message you want to see in the log file. Grep is a command-line tool you can use to find lines matching a string or regular expression pattern in one or more files. Here's an example of using grep in a Linux terminal:

$ grep 'FAILURE' logfile.txt > new_logfile.txt

Let's say you have a big file and you want to find the lines containing the keyword FAILURE. Running the above command in the terminal will generate a new log file with the name specified in the command, in this case new_logfile.txt. This helps you quickly identify the error messages and act on them. You can also negate the match with $ grep -v 'failure' logfile.txt > new_logfile.txt. Since "failure" is the search string, the -v option keeps only the lines without the word "failure."

Using the AWK Command

AWK is a scripting language and one of the most useful text-processing commands on Linux. From the AWK documentation, AWK is “a program that you can use to select particular records in a file and perform operations upon them.” AWK is built for manipulating data, and you can use it to search through and process a log file.

Depending on your needs, you can write commands to search for a particular selection. If you want to dig deeper into AWK, see its documentation. Here's a basic example of using AWK in the terminal.

$ awk '/error/' input-file.txt > output-file.txt

AWK will search the file and print any line matching the pattern error. You can also apply a regular expression here to refine the search criteria. For example, if you want to match only lines starting with the error keyword, you can write the AWK command like this: $ awk '/^error/' input-file.txt > output-file.txt. Just like the above command, AWK writes the matching lines to the file output-file.txt.

Using a Programming Language

Many programming languages give you a way to log messages from the server and save them in a text file. I've worked with both plain Python and the Django framework, and I had a better experience logging messages with Django. Whichever you use, the language gives you the flexibility to write scripts for your specific needs, which means more control. Here's an example you can use to filter logs in Python:

import logging

logger = logging.getLogger(__name__)

class LogFilter(logging.Filter):
    def filter(self, record):
        # Pass only records whose message starts with the given keyword
        return record.getMessage().startswith('keyword')

logger.addFilter(LogFilter())

The filter above will log only event messages starting with the specified keyword. You can also filter by the level of the log message with just a few lines of Python. The code can look like this:

import logging

logger = logging.getLogger(__name__)

class LoggingErrors(logging.Filter):
    def filter(self, record):
        # Pass only records logged at the ERROR level
        return record.levelno == logging.ERROR

logger.addFilter(LoggingErrors())

These few lines of code will log error messages only.

Using Regular Expressions (Regex)

Sometimes, you may want to write your own filter without reaching for a programming language. You can use regular expressions, also known as regex, to filter logs. Regex can help you get exactly what you want to see from a log message, and it gives you the flexibility to write an expression of your choice. This comes in handy when a log file calls for more complicated filtering. In a previous section, I talked about the power of the Linux tool grep for searching text files. Combining it with regex makes it even more powerful.

For example, if you want to filter IP addresses from a log file with text and digits, you can write a regular expression to get only digits and dots. As you know, an IP address is made of digits connected by dots. A simple but helpful regex combined with grep can look like this:

grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' logfile.txt

The result will be as awesome as expected: a well-formatted IPv4 address in four dot-separated groups of digits, like 64.242.88.10.
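If you'd rather do the same extraction in a script, Python's re module works too. The sample log line below is made up for illustration:

```python
import re

# A made-up access-log line mixing text and digits
line = "64.242.88.10 - - [07/Mar/2004] GET /twiki/bin/edit HTTP/1.1"

# Four dot-separated groups of 1-3 digits
ips = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", line)
print(ips)  # ['64.242.88.10']
```

Note that neither this pattern nor the grep one validates that each group is 255 or less; for log filtering, that's usually close enough.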

Using Tail Command

Tail, a command for Unix-like operating systems, is another important and exciting tool to work with. It outputs the last lines of a file: by default, the last 10. When used with -n, tail lets you specify how many of the last lines you want to output. Most of the time, you only want to check the last lines of a log file thousands of lines long. Using the tail command is as simple as this:

tail -n 1 /usr/share/dict/file.log

If you want the last five lines from a log file, it's simple: just increase the number after -n. In this case, -n 5 will give you the last five lines. The tail command also lets you watch a file for changes. Just pass -f, like this: tail -f /usr/share/dict/file.log.
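If you ever need tail-like behavior inside a script, the last-N case is easy to approximate in Python. This sketch covers only tail -n, not the follow mode of -f:

```python
from collections import deque

def tail(path, n=10):
    """Return the last n lines of a text file, like `tail -n`."""
    with open(path) as f:
        # A deque with maxlen keeps only the final n lines as it reads
        return list(deque(f, maxlen=n))
```

The deque discards older lines as it goes, so memory use stays bounded by n even for very large files.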

Using PowerShell to Filter Logs

You can use PowerShell to log messages from the server. From the Microsoft documentation, “PowerShell is a cross-platform task automation and configuration management framework, consisting of a command-line shell and scripting language. Unlike most shells, which accept and return text, PowerShell is built on top of the .NET Common Language Runtime (CLR) and accepts and returns .NET objects.”

As described in the documentation, PowerShell can be used to filter logs. Here's an example that returns the last 50 error events from the system log:

Get-WinEvent -FilterHashtable @{logname='system'; level=2} -MaxEvents 50

-FilterHashtable maps the filter's keys to values. If you look at the command, the event level is set to 2, and this level corresponds to error messages.

Conclusion

It can be difficult to get the actual line of an error message or just about anything you’re looking for in a log file. Applying some filtering makes your work easier and less tedious.

Many servers run Linux or a Unix-like operating system, which makes using commands such as grep, tail, and AWK even easier.

In some cases, you don’t want to write any command or a line of code to filter the log file. In such cases, there are systems designed to help you with filtering logs and give you the precise results you want in real time.

If writing commands and regular expressions is not your thing, you can try SolarWinds® Papertrail, a cloud-hosted log management system designed to consolidate your log files and allow you to search them from a single interface.

You can use Papertrail to filter, search, and analyze your log messages. Papertrail provides a powerful yet simple search syntax similar to a Google search. You can use Boolean operators such as AND and OR to filter results, and you can find log messages that don't contain a string by using the negation operator (-).

Papertrail also offers a live tail feature, which works just like running a tail -f on a log file. It allows you to view new events as soon as they’re received. Using the live tail feature along with the log velocity graph, which visually presents the changes in event messages over time, allows you to zero in on hotspots and focus your troubleshooting. Whether you spend a lot of time searching logs or just look at the logs when something breaks, SolarWinds Papertrail simplifies filtering and searching logs. Start a free trial today and see how easy searching logs can be.


This post was written by Mathews Musukuma. Mathews is a software engineer with experience in web and application development. Some of his skills include Python/Django, JavaScript, and Ionic Framework. Over time, Mathews has also developed interest in technical content writing.

Aggregate, organize, and manage your logs

  • Collect real-time log data from your applications, servers, cloud services, and more
  • Search log messages to analyze and troubleshoot incidents, identify trends, and set alerts
  • Create comprehensive per-user access control policies, automated backups, and archives of up to a year of historical data
Start Free Trial

Fully Functional for 30 Days
