Whether you're managing a major network installation or a single workstation, if your systems are connected to the Internet, they're at risk. Understanding the basics of Linux security is your best defense.
Illustration © Tony Klaussen
Back in the old days, security was a pretty straightforward affair. If you wanted to secure something, you just kept buying locks and alarm systems until you felt secure. It took a good deal of planning and physical effort for someone to break into your business, your home, or anything else that had been “secured.”
Welcome to the Internet and the Year 2000
Today, networked computing is a business imperative and a fact of everyday life for home computer users. The downside to this is that the more we allow networked systems into our everyday lives, the less secure our businesses and homes become.
Worse yet, the bad guys don’t even have to exert much effort to attempt a break-in. There are lots of scanning and cracking tools available that know how to find and exploit known weaknesses on most computer systems.
Your best defense against this kind of threat is to understand the basics of system security. That understanding will enable you to implement the necessary defenses against potential crackers and will give you the tools you need to monitor your system’s security on an ongoing basis.
Luckily, a vast number of tools exist to aid you in this pursuit. There are host and network system scanners that can probe your systems and ensure that a basic level of security is being maintained. There are also system monitoring and auditing tools that allow you to determine when an intrusion may have taken place and enable you to put a stop to it.
The goal of this article is to familiarize you with the most important security tools and techniques you will need to make your Linux system as secure as possible.
If a system has been compromised, there is nothing more useful than a complete log of how the attack occurred. This information is recorded in the system logs. The real challenge for a system administrator lies in determining what information needs to be logged and what can be discarded. A system administrator also needs to make sure that the log files themselves have not been tampered with to hide the intruder’s tracks. The art of system monitoring largely involves analyzing system logs and being able to tell when something has gone wrong.
An audit trail must be kept that supports a later reconstruction of who did what. This information provides accountability and can impede an intruder’s attempt to remain anonymous and untraceable. This information must be securely maintained. Security and system logs are prime targets for a cracker. Steps should be taken to make your logs as secure as possible.
The program that determines which information gets recorded in the system logs is named syslog. It has a configuration file named /etc/syslog.conf that should be configured to tell syslog to log the various types of system messages to separate files. For example, there is a “kernel” log file for messages relating to kernel-level events, and there is an “authentication” log file where all login attempts are recorded. (This month’s and last month’s Guru Guidance columns contain a two-part article covering how to use syslog, so we won’t go into great detail about it here.)
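As an illustration, a fragment of /etc/syslog.conf that splits kernel and authentication messages into their own files might look like the following (the file names here are examples and vary by distribution):

```
# Send kernel messages and authentication messages to separate log files
kern.*                                  /var/log/kern.log
authpriv.*                              /var/log/secure
```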
Configuring syslog to send copies of the logs to a remote server is an excellent step toward making your logs as secure as possible. To configure syslog to log to a remote host, you would set up your /etc/syslog.conf file to look something like this:

*.debug                                 @loghost

This would tell syslog to send all messages at priority debug and above to the remote log server named loghost. (Again, please see last month’s Guru Guidance for more details.)
When configuring syslogd, or any other application that produces log messages, be sure that it is logging at least the following information:
- Authentication attempts — user logon and logoff attempts, as well as external interface authentication attempts
- Access attempts to security-related or critical files and devices
- Communications failures
- Administrator (root) and system security administrator actions
- Security override function activation and deactivation
- System integrity anomalies
There are some very good log analysis programs out there that can alert you to events that could potentially indicate an unauthorized access. One of the best of these tools is Swatch, “The Simple WATCHer and filter.” Swatch is written in Perl and monitors your logs in real-time. Swatch sends messages to the system administrator whenever specific alerts are triggered. (For more on Swatch, see page 74 in this month’s Guru Guidance.)
Developing a baseline (or a sense of what constitutes “standard activity”) on your system can help you to recognize anomalous events that may point to suspicious activity. By carefully examining your logs over an extended period, you will gain a feel for what the baseline on your system really is. (See the section on System Auditing below for more on developing a baseline.)
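To make the baseline idea concrete, the sketch below counts failed-authentication messages per hour from syslog-style lines and flags hours that exceed an assumed baseline. The match patterns and the threshold are illustrative assumptions, and this is a learning aid rather than a substitute for a real log watcher such as Swatch:

```python
import re
from collections import Counter

# Illustrative patterns; real auth messages vary by daemon and distribution.
FAILED = re.compile(r"Failed password|authentication failure")

def failures_per_hour(lines):
    """Count failed-authentication messages per hour in syslog-style lines."""
    counts = Counter()
    for line in lines:
        if FAILED.search(line):
            # A syslog timestamp such as "Aug  9 07:02:11" yields "Aug  9 07".
            counts[line[:9]] += 1
    return counts

def anomalies(counts, baseline=5):
    """Flag hours whose failure count exceeds the assumed baseline."""
    return {hour: n for hour, n in counts.items() if n > baseline}
```

Once you know your real baseline, the threshold can be tuned so that ordinary typos don’t trigger alerts but a sustained guessing attack does.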
If there is reason to believe that your system has been compromised, process accounting provides an additional audit trail by allowing you to track every command that is executed on the server. The process accounting tools record their data in files like /var/log/wtmp and /var/log/lastlog (see Table One for a more complete list of audit trail log files). These files contain information such as the terminal a given user logged in to, the IP address the login came from, and the date, time, and duration of the login. Keep in mind that attackers are well aware of these files and will go to extra efforts to remove or modify them.
TABLE ONE: Common audit trail files.
/var/log           standard location for all log files; most should be readable only by root or admins
/var/log/btmp      stores failed login attempts; use lastb to view; file must be created manually
/var/log/pacct     stores processes run by users on the system after accounting is configured
/var/log/lastlog   tracks user logins
/var/log/wtmp      stores user login information; use last to view
/var/log/messages  contains standard syslog information
/var/run/utmp      used by programs such as who to determine the list of logged-in users
In order for process accounting to work, the CONFIG_BSD_PROCESS_ACCT option must be enabled when your kernel is compiled, and the psacct accounting package must be installed on your system. Chances are that your distribution came this way by default. It may also be necessary to configure your system startup scripts to enable the accounting process. If it was not enabled at boot time, process accounting can be enabled from a shell prompt by issuing the command:

root# /sbin/accton /var/log/pacct

(The location of the accounting file may vary by distribution.)
The accounting file will accumulate data about process activity for as long as the system is running. You should make sure that this file is readable only by root.
There are a number of programs that can help you sort through the data stored in the process-accounting log files. For example, the lastlog command can be used to track user logins. Another simple tool, the last command, can tell you the last time a given user logged in to the system.
One very useful tool is the lastcomm command. It reports the command name, username, terminal, number of CPU seconds consumed, and the date and time of execution. Table Two contains some sample output from a typical lastcomm command.
TABLE TWO: Typical lastcomm output.
root# lastcomm dave
procmail S dave ttyp1 0.01 secs Sun Aug 9 07:02
procmail S dave ttyp1 0.01 secs Sun Aug 9 05:02
bash S dave ttyp1 0.02 secs Sun Aug 9 02:10
pine dave stdin 1.08 secs Sun Aug 9 02:03
ls dave stdin 0.02 secs Sun Aug 9 01:00
The process accounting programs provide another source of information that can be used to help spot suspicious behavior. However, if the system has, in fact, been compromised, there is no way to ensure the integrity of the accounting database or of the programs that manipulate it.
System Auditing

The best way to ensure that you will notice any change in the integrity of your system is to conduct system audits on a regular basis. To conduct an effective audit, you must have an established security policy and a well-understood “baseline” that allows you to distinguish an intrusion attempt from a regular daily occurrence.
The audit itself basically consists of checking your system against your security policy on a periodic basis to ensure that your security policy is being implemented correctly. A good way to do this is to develop a checklist of security items that allows you to quickly compare your “baseline” activity against the actual running system. Doing this will allow you to more easily detect changes to the system.
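Part of such a checklist can be automated by snapshotting the attributes of key files and comparing snapshots on each audit. The following sketch is one possible approach, not a prescribed standard; which files to watch and which attributes to record are up to your security policy:

```python
import hashlib
import os
import stat

def snapshot(paths):
    """Record permissions, owner, size, and an MD5 fingerprint for each file."""
    baseline = {}
    for path in paths:
        st = os.stat(path)
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        baseline[path] = (stat.S_IMODE(st.st_mode), st.st_uid, st.st_size, digest)
    return baseline

def changed(baseline, current):
    """Return the paths whose recorded attributes differ between two snapshots."""
    return sorted(p for p in baseline if baseline[p] != current.get(p))
```

Store the baseline snapshot on read-only or removable media; a snapshot kept on the audited system is itself a target for tampering.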
When developing the activity “baseline” you will use as a reference, consider the following guidelines:

- Normal daily activities of the users on the system — is it unusual to see a user log in after normal work hours?
- System activity levels at various times during the day
- Status and attributes of key system files, including permissions and ownership of files and directories
- Configuration of important system files
- Integrity of important system binaries, such as /bin/login, /bin/ps, etc.
System auditing requires regularity, consistency and thorough attention to detail. Complacency is just what an intruder is hoping for. No self-respecting intruder is going to leave a telltale sign that your box has been compromised. Monitor the daily activity on your host and learn what the daily log patterns are so that you are later able to recognize potentially threatening occurrences.
One tool that is very useful in conducting system audits (and in establishing a “baseline”) is the nmap port scanner. Nmap is used to scan for open ports across an entire network or on a single host. It allows you to see what network services your Linux box is offering. This allows you to ensure that you are only running the services that you expect to be running. Running nmap on a periodic basis can help to determine if a host is offering an unauthorized network service, which could indicate that an intruder has installed a Trojan horse on your system.
Table Three contains an example of an nmap scan of a single host on a local network. The host is running a Web server. We can use this information to reconcile what is actually running on this particular host against what is supposed to be running, and then make any necessary corrections.
TABLE THREE: Typical nmap output.
[root@juggernaut /root]# nmap -O -n -sX 192.168.1.100
Starting nmap V. 2.54BETA1 by email@example.com ( www.insecure.org/nmap/ )
Interesting ports on (192.168.1.100):
(The 1515 ports scanned but not shown below are in state: closed)
Port      State     Service
22/tcp    open      ssh
25/tcp    open      smtp
80/tcp    open      http
110/tcp   open      pop-3
143/tcp   open      imap2
TCP Sequence Prediction: Class=random positive increments
Difficulty=5762970 (Good luck!)
Remote operating system guess: Linux 2.1.122 – 2.2.14
Nmap run completed — 1 IP address (1 host up) scanned in 5 seconds
Nmap should be run on a regular basis on all hosts for which you’re responsible. If you don’t run it, you can certainly expect a cracker to do it for you at some point. In fact, Nmap has many features that are valuable to a potential intruder. It has the ability to conduct “OS Fingerprinting” on a given host. Since every operating system has known vulnerabilities, that information can be used to determine what type of exploits will run on that particular system. Stealth scans that attempt to subvert firewalls and intrusion detection systems can also be performed.
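A crude complement to periodic nmap runs is a scripted TCP connect check that compares the ports a host answers on against the set you expect it to offer. The expected-port set below is a hypothetical example; substitute the services your own policy allows:

```python
import socket

EXPECTED = {22, 25, 80, 110, 143}  # hypothetical list of authorized services

def open_ports(host, ports, timeout=0.5):
    """Attempt a TCP connect to each port; return the ones that accept."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

def unexpected_services(host, ports, expected=EXPECTED):
    """Ports that answered but are not on the expected list deserve a closer look."""
    return open_ports(host, ports) - expected
```

A plain connect check like this will not find everything nmap can (it misses UDP and can be fooled by filtering), but it is cheap enough to run from cron every night.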
Another very useful tool for developing a baseline and conducting audits is IPTraf. IPTraf is a console-based network statistics utility that gathers information about network connections on your Linux box. It breaks down the connections by protocol, byte count, and interface. With this information, you can monitor the network traffic on your system and watch for specific patterns. If something varies from the regular pattern, you may have a problem.
So all this process accounting and system auditing is fine, but how do you know that there’s not something really sneaky going on? For example, how do you know that the new programs you are installing are not secretly e-mailing sensitive information to some system cracker somewhere? Or, how can you tell if someone has broken into your system and quietly replaced a key system binary, such as the login program, with a “special” version of that program that records your username and password and forwards the information along to someone else?
The answer to these questions is that you need to be able to verify the integrity of your key system files. There are a number of tools available that can do this. When it comes to installing new software, many Linux distributions use the RPM (Red Hat Package Manager) system for their package management. RPM comes with integrity-checking built in. Every package has a unique “checksum” that the package manager can verify in order to determine whether a package has been modified or not. See the man page on rpm for more details.
Another utility that is very useful for verifying the integrity of key system files is the md5sum command. md5sum can be used to create a “fingerprint” of a file. The fingerprint is strongly dependent upon the contents of the file to which md5sum is applied; any change to the file will result in a completely different fingerprint. md5sum is commonly used to verify the integrity of package updates that are distributed by a vendor. For example, the announcement of a typical RPM package update could look something like the following:

f380646e78a1f463c2d2cc855d3ccb2b  package-2.2.1-1.i386.rpm

The md5sum-generated fingerprint belonging to the file package-2.2.1-1.i386.rpm is the 128-bit number printed above (f380646e78a1f463c2d2cc855d3ccb2b). The fingerprint can be used to verify the integrity of the updated package before it is installed. To verify this file’s integrity, you would type:
[dave@magneto ~]$ md5sum package-2.2.1-1.i386.rpm
f380646e78a1f463c2d2cc855d3ccb2b  package-2.2.1-1.i386.rpm
Running the md5sum command on the file package-2.2.1-1.i386.rpm produces output indicating that the fingerprints match, which means that the file has not been tampered with. Keep in mind that this method doesn’t account for the possibility that the same person who modified a particular package might also have modified the published fingerprint.
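The same check is easy to script. This sketch computes a file’s MD5 fingerprint in chunks (so large packages don’t have to fit in memory) and compares it against whatever value the vendor published:

```python
import hashlib

def md5_fingerprint(path, chunk_size=65536):
    """Compute the MD5 fingerprint of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_fingerprint):
    """True if the file's fingerprint matches the vendor's published value."""
    return md5_fingerprint(path) == published_fingerprint.lower()
```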
Another very useful program for verifying the integrity of important system files is named tripwire. Tripwire computes checksums of your important system binaries and configuration files and compares them against a database of previously recorded, known-to-be-valid values. Changes to any of the files will be flagged.
Setting up and configuring tripwire is not terribly difficult. However, managing tripwire requires daily monitoring of the checksum database and coordination with other users on the system, so that authorized changes to configuration files are properly accounted for while unauthorized changes are flagged.
It is also a good idea to make a copy of critical system files and store them on removable media to be used as a form of integrity checking. Programs such as /bin/ps and /sbin/ifconfig should always be readily available.
A Network Intrusion Detection System (NIDS) is a program that is responsible for detecting anomalous, inappropriate or unauthorized data that may be flowing across a network. Unlike a firewall, which either allows or denies access to a particular service or host based on a set of rules, a NIDS captures and inspects all traffic regardless of whether or not it’s permitted. The NIDS inspects all packets and can generate alerts based on the contents of the packets.
Until recently, intrusion detection systems were either dedicated-use commercial products or tools that weren’t real-time and were difficult to install. Enter snort.
Snort is a “lightweight” NIDS that is non-intrusive and easily configured, uses familiar methods for rule development, and takes only a few minutes to install. It has the ability to detect more than 1,100 potential vulnerabilities. Snort is a great solution for monitoring small TCP/IP networks where deploying commercial products would not be cost-effective.
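To make the signature idea concrete, here is a toy sketch of payload inspection. The signatures are invented for illustration, and this is not how snort is actually implemented; a real NIDS also reassembles streams, decodes protocols, and ships with a large, maintained rule set:

```python
# Invented example signatures (byte pattern, alert message).
RULES = [
    (b"/cgi-bin/cgiwrap", "WEB-CGI cgiwrap access attempt"),
    (b"/etc/passwd", "possible password-file retrieval attempt"),
]

def inspect(payload):
    """Return an alert message for every rule whose signature appears in the payload."""
    return [message for signature, message in RULES if signature in payload]
```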
Snort is excellent at capturing information about the type of exploit that may have been performed. Table Four contains some examples.
TABLE FOUR: Sample Snort output.
[**] IDS234 – WEB-CGI-Cgiwrap CGI access attempt [**]
07/07-09:40:34.806312 192.168.113.35:34144 -> 184.108.40.206:80
TCP TTL:118 TOS:0x0 ID:17063 DF
*****PA* Seq: 0x683DF6EA Ack: 0x6002ABBB Win: 0x4470
[**] MISC-Attempted Sun RPC high port access [**]
06/18-01:08:37.009813 192.168.200.189:60198 -> 220.127.116.11:32771
TCP TTL:38 TOS:0x0 ID:33986
***F*P*U Seq: 0x0 Ack: 0x0 Win: 0x400 00 00 00 00 00 00
It is important to remember that intrusion detection devices work in conjunction with other security measures and are not a replacement for other good security practices.
When You’ve Discovered A Problem
If you believe that your system has been compromised, deciding what to do next can be difficult. How quickly you can respond to a possible security event, and what you do in the meantime, depends on the level of risk you’re willing to accept. For example, if tripwire reports an unauthorized change to /bin/login, you can’t trust any file on the system, which may affect how you choose to respond.
The steps involved in recovering from an incident can be quite severe. Start with the following:
- Reboot into single user mode and disconnect from the network
- Backup any files that are not part of the core OS installation
- Backup all log and accounting files. If possible, make a complete copy of the system onto another disk before starting the investigation
- Reinstall the system from the original install media
Once you are able to recognize normal system activity, you will be better equipped to determine when something isn’t right. Awareness is the key here. Intruders are counting on their presence going undetected. Security is one place where an ounce of prevention really is worth a pound of cure.
Dave Wreski is an Internet security engineer and co-author of Linux Security HOWTO. He can be reached at firstname.lastname@example.org.