There are better ways to read log files than scrolling hundreds of lines in a terminal window.
With standard Linux tools, reading logs becomes simple and effective. In addition, the tools are versatile and worthwhile to master.
I considered structuring this post based on the tools, but a lot of good resources with that approach already exist. Instead, I have outlined the article based on three scenarios when reading logs: runtime parsing, post-parsing and post-processing of log files.
If you want to look at the logs of a running application, use tail.
tail -F logfile
tail with the -F option “follows” the log file, printing any new data appended to it. Unlike -f, the -F option also keeps following across log rotation, reopening the file if it is replaced.
Parsing multiple log files at the same time
Newer versions of tail allow you to follow several files at the same time. Simply list the files on the command line, for example:
tail -F logfile1 logfile2
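As a quick non-interactive illustration (the file names and contents below are made up), tail prints a header before each file’s output when given several files:

```shell
# Create two small example files.
printf 'a\nb\n' > logfile1
printf 'c\nd\n' > logfile2

# -n 1 prints the last line of each file; with several files,
# tail separates them with "==> name <==" headers.
# (-F is omitted here so the command returns instead of following.)
tail -n 1 logfile1 logfile2
```

The same headers appear when following multiple files with -F, which makes it easy to tell which file a new line came from.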
I usually have several terminals open and use grep to capture different aspects of log files. Here are some examples:
- Displaying all warning and error messages
tail -F logfile | egrep "(WRN|ERR)"
- Looking at the logs from a specific file (provided, of course, that the log output includes this information)
tail -F logfile | grep -i "main.cpp"
- Looking for a specific context
tail -F logfile | grep -A 10 -i "specific message"
Depending on your needs, you can opt for egrep or fgrep instead of grep. The former uses extended regular expressions and the latter does fixed-string matches only.
Some useful grep options
-i performs a case insensitive search
-A nn also prints the nn lines after the matched pattern. The -B and -C options do the same for the lines before, and around, the matched lines.
-v prints lines that do not match the pattern.
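A quick way to see these options in action is on a small throwaway log (the file name and messages below are made up for illustration):

```shell
# Create a tiny sample log.
printf 'INF: starting up\nWRN: low disk space\nERR: write failed\nDBG: retrying write\n' > sample.log

# -i: a case-insensitive search for "wrn" still finds "WRN".
grep -i "wrn" sample.log

# -A 1: print the matching line plus the 1 line after it.
grep -A 1 "WRN" sample.log

# -v: print every line that does not match the pattern.
grep -v "DBG" sample.log
```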
When looking at the logs after the application has run, grep is still very useful to filter the output. Instead of tail, cat or a file viewer such as less is more appropriate.
Sometimes I filter the log with grep and then pipe it through less. This can be useful if the log file is really, really big and you don’t need certain severity levels, or only want output from certain components:
grep -v "VRB" logfile | less -R
less is a great tool, and being familiar with some of the options and commands makes it really powerful for parsing logs.
Some useful less options
-R is useful when the log uses ANSI color codes.
-M shows a more verbose prompt, including line numbers and your position in the file.
-i makes searches case insensitive.
Post-processing of logs can be useful, too. For example, stripping logs to facilitate comparisons using diff, or saving subsets to include in bug reports.
Post-processing for these purposes can be done with sed. For example, a log line that includes the system time:
DBG: Writing object to DB: id=234 [234.273412]
can be stripped of the time information with this command:
sed -e "s/\[[0-9]*\.[0-9]*.*\]//" logfile > log.stripped.txt
log.stripped.txt will contain lines like this:
DBG: Writing object to DB: id=234
Two other examples where sed is used to extract subsets of a log file:
sed -n "100,200 p" logfile
sed -n "/\[500\.0/,/\[600\.0/ p" logfile
The first line above extracts lines 100 through 200. The second line extracts the lines between 500 and 600 seconds for a log containing timestamps like the example above.
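Both sed recipes can be tried end-to-end on a synthetic log (the contents below are made up; the timestamp format matches the example above):

```shell
# Build a small log with bracketed second.microsecond timestamps.
printf 'DBG: before [499.000000]\nDBG: first [500.000000]\nDBG: middle [550.500000]\nDBG: last [600.000000]\nDBG: after [601.000000]\n' > logfile

# Strip the timestamps, as shown earlier.
sed -e "s/\[[0-9]*\.[0-9]*.*\]//" logfile > log.stripped.txt

# Extract only the lines from 500 to 600 seconds:
# prints the "first", "middle" and "last" lines, but not
# "before" or "after".
sed -n "/\[500\.0/,/\[600\.0/ p" logfile
```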