Keeping Track of What Goes On — Part II

Last month, we discussed setting up and configuring the syslog facility. This month, we will look at two additional considerations that come into play where syslog is concerned. First, we need a way to manage all the log files we are creating and ensure that they do not consume too much disk space. Second, we need a strategy for processing all of that information and discerning what is most important within it. All of the log files in the world are of little use if no one looks at them. This column will explore both of these issues.

The traditional solution for managing Unix log files involves periodically saving their contents to another file, and then truncating the active log to zero length. Typically, several old log files would be saved on the system and given names consisting of the original file name plus a numeric extension: messages.0, messages.1 and so on for the messages log file, with higher numbers indicating older saved files.

Generally, some specific number of old files would be present on the system. Each day, the oldest one would be deleted, existing saved files would have their numbers increased by one and the current contents of the active log file would be saved to the .0 file. That is, each day’s messages would be rotated through this fixed set of saved log files (finally falling off the end after a preset period of time). Note that in some cases, the rotation interval is a week, or even a month.
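In script form, this traditional scheme amounts to something like the following (a minimal sketch, assuming a daily rotation of /var/log/messages with seven saved copies; the paths and count are illustrative):

#!/bin/sh
# Rotate /var/log/messages by hand, keeping 7 saved copies (.0 to .6).
cd /var/log
rm -f messages.6                        # oldest copy falls off the end
for i in 5 4 3 2 1 0; do                # shift each saved file up by one
    if [ -f messages.$i ]; then
        mv messages.$i messages.`expr $i + 1`
    fi
done
cp messages messages.0                  # save the current contents
cat /dev/null > messages                # truncate the active log in place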

In The Rotation

The logrotate command serves to automate this common practice and is installed and enabled by default on Red Hat Linux systems. The command itself rotates one or more log files by copying the current contents to a numbered backup version, and then truncating or recreating the original file. It is actually executed periodically via the cron facility. logrotate is controlled by its configuration file, /etc/logrotate.conf and the various files in /etc/logrotate.d.
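On Red Hat systems, the cron hookup takes the form of a short script in /etc/cron.daily (the exact contents vary a bit from release to release, but its heart is a single command):

#!/bin/sh
# /etc/cron.daily/logrotate: run logrotate once a day against
# the main configuration file.
/usr/sbin/logrotate /etc/logrotate.conf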

Figure One contains some sample entries from the /etc/logrotate.conf file, which will give you an idea both of how the facility operates and of how the configuration file syntax looks.

Figure One: Sample Entries From /etc/logrotate.conf

# default settings: compress old logs, rotate daily,
# keep 7 old files for 7 days, truncate after rotating
compress
daily
rotate 7
create

# cron: ignore if missing, don't rotate if size=0, keep 1 old file
/var/log/cron {
    missingok
    notifempty
    rotate 1
}

# syslog: use defaults, restart daemon after rotation
/var/log/messages {
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}

# secure log: default rotation, don't compress old logs,
# recreate log with perm and owners correct
/var/log/secure {
    nocompress
    create 600 root root
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}

# process additional configuration files in the logrotate subdirectory
include /etc/logrotate.d

The first section of the configuration file specifies defaults for logrotate to use when processing log files. The following three sections specify how to handle three specific log files (/var/log/cron, /var/log/messages, and /var/log/secure). A final section directs logrotate to read additional configuration information from all files within the subdirectory /etc/logrotate.d (on Red Hat Linux systems, many packages install a configuration file in this location containing instructions relevant to their functioning). For example, on my system, the cron, UUCP, syslog and linuxconf facilities have installed logrotate configuration files there.
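The files in /etc/logrotate.d use exactly the same syntax as entries in the main configuration file. Here is a hypothetical example of the sort of entry such a file might contain (the log file path and settings are illustrative):

# Hypothetical /etc/logrotate.d/uucp: rotate the UUCP log only when
# it exists and is non-empty, keeping 4 saved copies.
/var/log/uucp/Log {
    missingok
    notifempty
    rotate 4
}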

When logrotate actually processes a log file, there are several phases to its activity. First, any commands in a prerotate section of the relevant configuration file entry are executed. Next, a numbered backup file is created. After that, any commands in the configuration entry's postrotate section are executed (there are two examples above). Finally, the copied (old) log file is compressed (if requested). You can use the pre- and post-rotation hooks to perform whatever actions make sense within the context of your systems. For example, you could copy the oldest saved log file to a different location for backup to tape before it is overwritten by logrotate.
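Such a hook might look like the following (a sketch only; the archive destination is illustrative, and the oldest-file name assumes the .0 through .6 numbering scheme described earlier):

/var/log/messages {
    prerotate
        # Hypothetical: preserve the oldest saved copy before rotation
        # deletes it; /archive/logs is an illustrative destination.
        if [ -f /var/log/messages.6 ]; then
            cp -p /var/log/messages.6 /archive/logs/messages.oldest
        fi
    endscript
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}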

The sample logrotate configuration file in Figure One rotates logs on a daily basis. The facility can also rotate files on a weekly or monthly basis, or based upon their size (i.e., saving and truncating them only when they exceed some preset limit). Saving old files is also optional; logrotate can be configured to simply truncate a given log file on a periodic basis.
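For example, an entry like this one (the path, size and count are illustrative) rotates a log only once it grows past 100 KB, regardless of how much time has passed:

# Rotate only when the file exceeds 100 KB, keeping 4 saved copies.
/var/log/example.log {
    size 100k
    rotate 4
}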

When it comes to processing information stored in the various system logs, there are a variety of utilities and packages that have been written for just this purpose. We will consider two of them in this column: swatch and logwatch.

Swatch On the Watch

swatch was one of the very first utilities designed to process information stored in system log files. It is widely available in Linux archives, and you can also obtain it from its official location, ftp.stanford.edu/general/security-tools/swatch. swatch can run in a variety of modes: examining new entries as they are added to a system log file, monitoring an output stream in real time, checking through a file on a one-time basis and so on. swatch works by looking for predefined patterns in whatever input it is examining, and then taking predetermined actions when it finds matching patterns. All of this is set up, as usual, via a configuration file, which is typically ~/.swatchrc.
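The mode is selected on the command line when swatch is started. For example (a sketch using the single-letter options of the swatch releases of this era; consult the man page for your version):

# Continuously watch a growing log file ("tail" mode):
swatch -c ~/.swatchrc -t /var/log/messages

# Make a single pass over an existing file:
swatch -c ~/.swatchrc -f /var/log/messages.0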

Figure Two contains some sample entries from a swatch configuration file. The various patterns in this file are regular expressions designed to match certain entries within the system messages file; they are enclosed in slashes (similar to sed syntax). The first entry in the file looks for lines containing the strings FAILED LOGIN 3 and FOR root, which indicate a third consecutive failed login attempt by user root. Whenever such a line is found, swatch will run the indicated xmessage command to notify the logged-in user of this fact (presumably, this is run on the workstation of the system administrator, which is also the location of the central syslog files).

Figure Two: Sample Entries From a Swatch Configuration

#                                                                 repeat     timestamp
# pattern                           action                        interval   format
# =======                           ======                        ========   =========
/FAILED LOGIN 3 .* FOR root/        exec="xmessage root login failure over 3"
/useradd/                           echo
/file system full/                  mail=chavez                   00:30:00   0:16
/repeated [1-9][0-9][0-9] times/    exec="/usr/local/bin/ether.csh"  02:00:00   0:16

The second entry searches for the word useradd within the log file, echoing matching lines to the swatch process' standard output. The third entry looks for the phrase file system full and, if it finds it, mails the entry to user chavez. This entry uses the optional third and fourth fields to specify a repeat time interval during which additional messages of the same type are ignored; in this case, additional file system full messages produced within 30 minutes of the first matching entry will be ignored. The fourth field is needed in order to use this feature. It specifies the starting character and length of the timestamp portion of each log entry (here, the first 16 characters). The final entry looks for the word repeated, followed by a three-digit number, followed by the word times: for example, repeated 127 times. These messages typically occur when there is a network hardware problem (and they are preceded in a log file by a message indicating an Ethernet problem). If such a line is found, then the shell script /usr/local/bin/ether.csh will be executed by swatch. Additional messages of the same type produced within two hours of the first one are ignored.

All in all, swatch makes it possible for you to process log file data based upon criteria that you define and set up. As such, it is very useful. However, it takes a fair bit of time to become familiar with message formats, design a log file processing strategy, and set up the swatch configuration file along with the associated scripts and cron jobs required to automate the entire process. For this reason, some administrators prefer to use a utility that has already done a lot of this work for them. There are many such programs available, and logwatch is one of the best.

Watching Logs

logwatch is available from http://www.kaybee.org/~kirk/. Like swatch, it is written in Perl. When installed, it creates the subdirectory /etc/log.d, which holds its configuration information and executable scripts. This directory tree, sketched below, has several important components:

  • Definitions of log file formats that logwatch knows about.
  • Definitions of log file entries and locations for various system services (facilities).
  • Perl scripts to process log files.
  • Perl scripts to report on log file entries for specific services.
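
On my system, the tree looks roughly like this (a sketch; the exact subdirectory names can vary between logwatch versions):

/etc/log.d/
    logwatch.conf        # main configuration file
    conf/
        logfiles/        # log file format and location definitions
        services/        # per-service entry definitions
    scripts/
        logfiles/        # Perl filters that preprocess raw log files
        services/        # Perl scripts that report on each service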

These files define the sorts of log files that logwatch can handle and the actual reports that it will produce about them. Examining them, while not necessary to use this utility, does enable you to understand the assumptions that logwatch makes. For example, it assumes that syslog will have been configured to send authentication messages to a separate file named secure.
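In other words, it expects a syslog.conf entry along the following lines (on Linux, the authpriv facility is the usual choice for authentication-related messages):

# Send authentication messages to their own file, where logwatch
# expects to find them:
authpriv.*                                        /var/log/secure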

You can specify which log files to examine, which services to consider, and what date range of entries should be included whenever logwatch produces a report. This may be done via command-line options or by settings within the logwatch configuration file, /etc/log.d/logwatch.conf. You can also indicate what level of detail you want included within the report. The logwatch facility defines three such levels.
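For example, commands along these lines produce reports from the command line (the option names follow the logwatch documentation; the service name in the second command is illustrative):

# Detailed report on all services for yesterday's entries, printed
# to standard output:
logwatch --detail high --service all --range yesterday --print

# Low-detail report on a single service over all available entries:
logwatch --detail low --service pam_pwdb --range all --print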

Figure Three presents a detailed logwatch report for the previous day, covering all major facilities and log files on the system where it was run. It contains sections for the cron, init, PAM authentication, NFS mountd and inetd daemon subsystems, and summarizes recent activity within them. (Note that we have included only a few illustrative items in each section from the actual report.)

Figure Three: Sample logwatch Report

########### LogWatch 1.6.6 Begin ##########

------------- Cron Begin -----------------
Commands Run:
   User aefrisch:
      personal crontab listed: 1 Time(s)
   User root:
      run-parts /etc/cron.daily: 5 Time(s)
      run-parts /etc/cron.hourly: 138 Time(s)
      run-parts /etc/cron.weekly: 1 Time(s)
------------- Cron End -------------------

------------- Init Begin -----------------
Switched to runlevel 0 - 1 Time(s)
Switched to runlevel 6 - 9 Time(s)
------------- Init End -------------------

------------- PAM_pwdb Begin -------------
SU Sessions:
   aefrisch(uid=371) -> root - 6 Time(s)

Opened Sessions:
   Service: login
      User aefrisch - 5 Time(s)
   Service: rsh
      User aefrisch - 2 Time(s)

Failed logins:
   ada: 5 Time(s)
   User root:
      bajor: 1 Time(s)
      /dev/tty1: 2 Time(s)
------------- PAM_pwdb End ---------------

------------- Mountd Begin ---------------
Successful NFS mounts:
   ada (
      /pix: 9 Time(s)
      /cdrom: 5 Time(s)
      /seti: 3 Time(s)
      /: 1 Time(s)
------------- Mountd End -----------------

----- Connections (secure-log) Begin -----
Service in.rlogind:
   bella ( 5 Time(s)
Service in.rshd:
   bella ( 3 Time(s)
----- Connections (secure-log) End -------
############# LogWatch End ################

As you can see, the logwatch report contains very useful summary information about significant system activity over the time period under consideration. logwatch generally works very well as installed, but you will need to modify the configuration file for your local environment, and may in some cases also need to tweak the definition files a bit.
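A few of the settings you might adjust in /etc/log.d/logwatch.conf look like this (the parameter names follow the shipped default file; the values are illustrative):

# Where the log files live and who receives the report:
LogDir = /var/log
MailTo = root

# Default report scope and verbosity:
Range = yesterday
Detail = Low
Service = All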

Over the past two months, we hope you have gained an appreciation of the importance of system log files, as well as some useful techniques for administering them and processing their contents. Whatever you do, don't neglect this important aspect of the system administrator's job. See you next month.

Æleen Frisch is the author of Essential System Administration. She can be reached at aefrisch@lorentzian.com.
