The world of Linux storage tools for both monitoring and management just stinks. But that doesn't mean that there is absolutely nothing to help with monitoring. One such tool, iostat, can be used to watch what your storage devices are up to.
In general you get two types of reports with iostat (you can use options to get one report or the other, but the default is both). The first report covers CPU usage and the second report covers device utilization. The CPU report contains the following information:
%user: Shows the percentage of CPU utilization that occurred while executing at the user level (this is the application usage).
%nice: Shows the percentage of CPU utilization that occurred while executing at the user level with “nice” priority.
%system: Shows the percentage of CPU utilization that occurred while executing at the system level (kernel).
%iowait: Shows the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.
%steal: Shows the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.
%idle: Shows the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.
Some of these values should be fairly familiar to you. The values are computed as system-wide averages over all processors when your system has more than one core (which is pretty much everything today).
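If you only want the CPU report, the “-c” option restricts iostat to just that portion. As a quick sketch (the one-second interval and count of five are arbitrary choices):

[laytonjb@test64 IOSTAT]$ iostat -c 1 5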
The second report is all about device utilization. It prints all kinds of details about how each device (a physical device or a partition) is being used. If you don’t specify a device on the command line, iostat prints values for all devices (alternatively, you can use “ALL” as the device). Typically the report output includes the following:
Device: Device name.
rrqm/s: The number of read requests merged per second that were queued to the device.
wrqm/s: The number of write requests merged per second that were queued to the device.
r/s: The number of read requests that were issued to the device per second.
w/s: The number of write requests that were issued to the device per second.
rMB/s: The number of megabytes read from the device per second.
wMB/s: The number of megabytes written to the device per second.
avgrq-sz: The average size (in sectors) of the requests that were issued to the device.
avgqu-sz: The average queue length of the requests that were issued to the device.
await: The average time (milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
svctm: The average service time (milliseconds) for I/O requests that were issued to the device.
%util: Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%.
Exactly which of these fields appear depends on the set of options used as well as on the device.
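Conversely, the “-d” option restricts iostat to just the device report. For example, this sketch prints the extended device fields above, in megabytes, for every device on the system once every two seconds (the interval is an arbitrary choice):

[laytonjb@test64 IOSTAT]$ iostat -d -x -m ALL 2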
It is also important to remember that the first report generated by iostat provides statistics covering the time since the system was booted. All subsequent reports cover the time interval that you specify.
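By the way, newer versions of sysstat include a “-y” option that skips that first since-boot report entirely. Assuming your iostat supports it, something like the following prints only the interval-based reports:

[laytonjb@test64 IOSTAT]$ iostat -y 1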
If you go back to the example you will see that the first report (the combination of the first CPU report and first device report) has non-zero values. Remember that this first report covers the interval since the system booted. The reports after the first one show little if any activity on /dev/md0, but you do notice some differences in the CPU reporting.
Let’s take a look at a more interesting example – running iostat while the system is running iozone.
IOstat Example with IOzone
The first example was pretty boring with nothing happening. Let’s try using iostat while running iozone to show something more interesting (it also helps with understanding iostat).
Before starting iozone I started iostat with the following command:
[laytonjb@test64 IOSTAT]$ iostat -m -x /dev/sdd 1
The options I used are:
I’m using extended output (“-x”) to get more output
I want the output to be in megabytes (“-m”) rather than blocks
I’m examining the device /dev/sdd, which is an Intel X25-E SSD (it rocks)
Finally, the last input, “1”, tells iostat that I want a report every second and I want it to go on indefinitely until I interrupt it (the “indefinitely” comes from the fact that I didn’t give it a count after the “1”).
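As an aside, if I had wanted iostat to stop on its own, I could have added a count after the interval. For example, this variation produces 60 one-second reports and then exits:

[laytonjb@test64 IOSTAT]$ iostat -m -x /dev/sdd 1 60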
At first the output is pretty boring since I haven’t started iozone.
Notice that the read throughput (“rMB/s”) is zero, as is the write throughput (“wMB/s”). There are also no read requests (“r/s”) nor write requests (“w/s”). Finally, the CPU load is very low (less than 2%) and there are no iowaits happening.
The output below shows what happens when iozone is started.
At first the CPU usage is very low and there are no iowaits. Then the second report shows some iowaits (16.46%), a number that keeps increasing. You can also see the system CPU utilization increasing steadily (this is a 4-core AMD Opteron system). Part way through the output you see the write throughput (“wMB/s”) start at 0.10 MB/s, kick up to 144.06 MB/s, and continue increasing. During this time, the read throughput is zero. This marks the beginning of the first iozone test, which is a write throughput test. You also see the number of write requests (“w/s”) increase greatly when the write throughput increases. The same is true for the average size (in sectors) of the requests to the device (“avgrq-sz”) and the average queue length for /dev/sdd (“avgqu-sz”).
One interesting statistic that I like to examine is “%util”, which tells me if the underlying device is approaching saturation (100% utilization). In the case of the write throughput test, the %util stays low at first and then quickly reaches 100%, indicating, you guessed it, that we have saturated the device.
We can go forward through the iostat output to find the same kind of output for reading.
Notice the transition from write throughput (“wMB/s”) to read throughput (“rMB/s”). What is also interesting is that the percentage of iowaits goes down once the read throughput test has started. I believe this is because SSDs have amazing read performance. Also notice that the average queue length for /dev/sdd (“avgqu-sz”) goes down when switching over to read testing. Again, I believe this is because of the amazing read performance of SSDs. However, the %util for the device is 100% during the read testing.
There are many ways you can use iostat to monitor your storage server. For example, if the load on your storage server starts going up, you can use iostat to look for the offending device. Use the keyword “ALL” in place of a specific device so you can find the offending device or devices: have iostat generate a report every second for a few minutes and see which device (or devices) has the problem. During this time you should examine the throughput as well as the queue information. Plus, I like to examine the “%iowait” output to see if the system has a large backlog of I/O operations waiting. I can couple this with the information in %util to determine if the device is saturated and how much data is in the queue.
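As a rough sketch of that workflow, the snippet below watches the extended device report and flags any device whose %util crosses a threshold. It assumes %util is the last column of the extended output (true for the sysstat versions I’ve used), and the 90% threshold is an arbitrary choice:

# Flag devices whose %util exceeds an arbitrary 90% threshold; assumes
# %util is the last column of the extended device report.
iostat -d -x ALL 1 | awk '$NF+0 > 90 { print $1 " is at " $NF "% utilization" }'

The “$NF+0” forces the last field to a number, so header lines (where the last field is the literal text “%util”) are skipped automatically.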
I think it is fairly obvious that there is a severe lack of good, integrated tools for managing and monitoring storage in Linux. As misery loves company, this is true of storage in general. Given that data is growing extremely fast, one would think it would be logical for there to be a good set of tools for storage, but that obviously isn’t the case.
While I am ranting about the horrible state of storage tools, I also recognize that there are some tools out there that are pretty much the “fingers in the dike” for storage management/monitoring and we must at least understand how to use them.
IOstat is one of the tools that comes in the sysstat package. It can provide some useful information about the state of storage servers, both from a CPU perspective and from an underlying storage device perspective. It can give you a reasonable amount of data, including the number of I/O operations and the throughput. It can also give you information on the state of the I/O queues, which helps you understand whether there is a great deal of data remaining to be serviced or whether the queues are lightly loaded, indicating that the storage devices are keeping up with the workload.
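If iostat isn’t already on your system, installing the sysstat package is all it takes; which package manager you use depends on your distribution, for example:

# Red Hat / CentOS / Fedora
yum install sysstat
# Debian / Ubuntu
apt-get install sysstat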
So when your storage servers are greatly loaded and/or storage performance is suffering, iostat can help you begin to uncover what is happening with your system.