SandForce 1222 SSD Testing, Part 1: Initial Throughput Results

SandForce has developed a very interesting and unique SSD controller that uses real-time data compression. This can improve performance (throughput) and extend the life of the SSD, but it hinges on the compressibility of your data. This article is the first in a series that examines the performance of a SandForce 1222-based SSD and the impact of data compressibility.

SandForce is quickly becoming one of the dominant companies in the SSD controller world. Their controllers appear in a range of drives, from consumer SSDs such as the one discussed in this article to enterprise-level SSDs. The really cool thing about their controllers is that they use real-time data compression to improve performance and increase the longevity of the SSD. The key to this working is the compressibility of your data: if your data is compressible, then you get a performance boost as well as a longevity boost. The question is how much performance is affected by the compressibility of the data.

In this article, I took a consumer SSD based on the SandForce 1222 controller and ran throughput tests against it using IOzone with data at different compressibility levels. I've used IOzone before for throughput testing, but for this particular testing I used IOzone's ability to vary the level of dedupability or compressibility of the data.

I ran 13 different I/O tests using IOzone: write, re-write, random write, record rewrite, fwrite, frewrite, read, re-read, random read, backwards read, strided read, fread, and freread. Each test was run 10 times with four different record sizes: 1MB, 4MB, 8MB, and 16MB. Three data compression levels were tested: 98% compressible, 50% compressible, and 2% compressible. The average and standard deviation of each test are reported in the tables below.
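
To make "98% compressible" concrete, here is a small Python sketch (my own illustration, not IOzone code) that builds a buffer in which a chosen percentage of the bytes is a repeated pattern and the rest is pseudo-random, then checks the result with zlib:

```python
import random
import zlib

def make_buffer(size, pct_compressible, seed=0):
    """Build a test buffer: pct_compressible percent of the bytes are
    zeros (highly compressible); the rest are pseudo-random."""
    rng = random.Random(seed)
    n_comp = size * pct_compressible // 100
    random_part = bytes(rng.getrandbits(8) for _ in range(size - n_comp))
    return b"\x00" * n_comp + random_part

def compression_ratio(buf):
    """Compressed size as a fraction of original size (lower = more compressible)."""
    return len(zlib.compress(buf)) / len(buf)

for pct in (98, 50, 2):
    buf = make_buffer(1 << 20, pct)  # 1MB buffer, matching the smallest record size
    print(f"{pct}% compressible -> zlib ratio {compression_ratio(buf):.2f}")
```

IOzone generates this kind of data internally when you vary the dedupability/compressibility setting; the sketch just shows why a 98% buffer flies through a compressing controller while a 2% buffer gains almost nothing.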

All of the write tests, with the exception of record rewrite, exhibited the same general behavior. More specifically:

  • As the level of compressibility decreases, performance drops off fairly quickly.
  • As the level of compressibility decreases, there is little variation in performance as the record size changes (over the record sizes tested).

The absolute performance varied for each test, but for the general write test, throughput went from about 260 MB/s (close to the drive's rated performance) with 98% compressible data to about 97 MB/s with 2% compressible data for a record size of 1MB.

In the case of record rewrite, the average performance actually improved as the compressibility decreased, but the standard deviations for a record size of 1MB were quite large. In addition, the throughput was smaller than for the write or re-write tests. Finally, performance drops off fairly significantly when going from a record size of 1MB to 4MB.

All of the read tests exhibited the same general behavior. Specifically:

  • Performance drops only slightly with decreasing compressibility (dedupability).
  • As the level of compressibility decreases, performance for larger record sizes actually increases.
  • As the level of compressibility decreases, there is little performance variation between record sizes.

Again, the absolute performance varies for each test, but the trends are the same. Basically, the real-time data compression does not affect read performance as much as it affects write performance.
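
To put rough numbers on that asymmetry, compare the 98% and 2% results for the plain write and read tests at the 1MB record size (values taken from Tables 1 and 2 below):

```python
# Throughput at 98% vs 2% compressible data, 1MB record size (Tables 1 and 2).
write_98, write_2 = 259_867.10, 97_113.50  # write test, KB/s
read_98, read_2 = 225_287.40, 191_991.90   # read test, KB/s

write_drop = (write_98 - write_2) / write_98
read_drop = (read_98 - read_2) / read_98
print(f"write throughput drop: {write_drop:.0%}")  # write throughput drop: 63%
print(f"read throughput drop: {read_drop:.0%}")    # read throughput drop: 15%
```

A roughly 63% write penalty versus a 15% read penalty when going from easily compressed data to nearly incompressible data.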

The key observation to take from these results is that performance varies with the level of data compression. If your data is compressible, then you can get pretty amazing performance from an inexpensive SSD; if your data is about as compressible as granite, then you may not get the best performance from a SandForce-controlled SSD. The key question for you is: how compressible is your data? That question can be difficult to answer, but I think you would be surprised by how compressible data can be. Remember that you aren't concerned with the overall compressibility of a file, but rather with how compressible a chunk of the data is.
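
If you want a rough feel for your own data, one approach is to compress it in fixed-size chunks rather than as a whole. The sketch below uses zlib and a 64KB chunk size purely as stand-ins (the SandForce compressor is proprietary and will certainly behave differently), but the chunk-by-chunk shape of the answer is what matters:

```python
import sys
import zlib

def chunk_compressibility(path, chunk_size=64 * 1024):
    """Yield (offset, compressed_fraction) for each fixed-size chunk of a file.
    compressed_fraction is compressed size / original size (lower = more compressible)."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield offset, len(zlib.compress(chunk)) / len(chunk)
            offset += len(chunk)

if __name__ == "__main__" and len(sys.argv) > 1:
    for off, frac in chunk_compressibility(sys.argv[1]):
        print(f"offset {off:>12}: compresses to {frac:.0%} of original")
```

Run it against a few representative files; a file that looks incompressible overall often contains chunks that compress very well.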

SandForce has added another "knob" for tuning storage solutions – real-time data compression in the storage device (SSD). While we can't directly turn this knob to improve performance or change the behavior of the overall storage solution, it can be used in an overall design to improve the performance and longevity of SSD storage solutions. I think of this as an opportunity: we can study our data to look for places where it compresses efficiently, or we can change our applications to take advantage of SandForce controllers.

In the next article I’ll present some IOPS testing of the same SSD. Stay tuned.

Table of Results

The first two tables present results for the 1MB record size. Table 1 below presents the write throughput in KB/s for the six write tests at all three data compression levels.

Table 1 – IOzone Write Performance Results with a Record Length of 1MB, a File Size of 16GB, for all three data compression levels

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Write | Re-write | Random write | Record re-write | fwrite | frewrite
98% | 259,867.10 (2,288.98) | 262,817.50 (1,837.59) | 240,748.80 (6,154.25) | 948,460.10 (11,794.89) | 260,159.60 (3,750.41) | 262,224.60 (1,912.10)
50% | 128,515.40 (565.47) | 127,350.30 (439.92) | 122,132.30 (602.71) | 1,239,838.00 (175,767.76) | 127,065.50 (594.64) | 127,008.10 (320.65)
2% | 97,113.50 (753.56) | 95,722.80 (596.64) | 92,497.20 (419.70) | 1,145,650.00 (167,156.10) | 97,433.20 (496.51) | 95,591.70 (640.73)

Table 2 below presents the read throughput in KB/s for the seven read tests with a record length of 1MB at all three data compression levels.

Table 2 – IOzone Read Performance Results with a Record Length of 1MB, a File Size of 16GB, for all three data compression levels

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Read | Re-read | Random read | Backwards read | Strided read | fread | freread
98% | 225,287.40 (295.71) | 225,667.40 (246.14) | 223,668.40 (229.79) | 232,015.00 (6,527.68) | 264,430.40 (6,234.18) | 228,108.20 (331.11) | 228,383.50 (400.09)
50% | 201,998.30 (447.60) | 206,083.00 (2,686.91) | 210,945.20 (410.51) | 228,791.10 (8,177.77) | 248,220.90 (1,031.81) | 203,992.90 (414.03) | 210,024.70 (2,425.19)
2% | 191,991.90 (362.27) | 194,613.60 (1,517.10) | 202,468.70 (331.52) | 218,469.80 (9,711.30) | 236,463.90 (6,828.49) | 193,270.60 (69.28) | 197,003.20 (2,113.24)

Table 3 below presents the write throughput in KB/s for the six write tests at all three data compression levels for a record size of 4MB.

Table 3 – IOzone Write Performance Results with a Record Length of 4MB, a File Size of 16GB, for all three data compression levels

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Write | Re-write | Random write | Record re-write | fwrite | frewrite
98% | 259,636.00 (2,876.35) | 264,365.50 (284.30) | 261,707.50 (2,231.57) | 489,360.30 (8,543.12) | 260,444.30 (1,031.41) | 263,937.70 (268.36)
50% | 128,393.70 (632.91) | 126,700.10 (391.06) | 123,144.50 (340.23) | 712,719.60 (38,566.16) | 127,828.80 (286.79) | 126,818.40 (312.61)
2% | 97,854.40 (546.40) | 96,097.30 (562.56) | 94,784.10 (423.01) | 749,340.20 (49,796.75) | 96,312.00 (3,096.89) | 95,834.60 (679.52)

Table 4 below presents the read throughput in KB/s for the seven read tests with a record length of 4MB at all three data compression levels.

Table 4 – IOzone Read Performance Results with a Record Length of 4MB, a File Size of 16GB, for all three levels of data compression

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Read | Re-read | Random read | Backwards read | Strided read | fread | freread
98% | 185,799.20 (446.84) | 186,003.20 (380.94) | 196,870.50 (452.72) | 212,707.60 (580.82) | 220,705.40 (367.90) | 187,464.20 (466.91) | 187,539.90 (422.39)
50% | 200,175.80 (484.83) | 206,240.60 (2,083.58) | 216,113.40 (415.51) | 237,721.20 (376.93) | 250,362.10 (393.04) | 200,846.10 (403.95) | 206,420.10 (1,688.87)
2% | 191,830.20 (1,853.31) | 194,658.60 (1,879.87) | 210,661.20 (322.21) | 234,742.40 (370.98) | 240,858.80 (4,520.73) | 192,249.70 (180.95) | 196,327.50 (2,310.93)

Table 5 below presents the write throughput in KB/s for the six write tests at all three data compression levels for a record size of 8MB.

Table 5 – IOzone Write Performance Results with a Record Length of 8MB, a File Size of 16GB, for all three levels of data compression

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Write | Re-write | Random write | Record re-write | fwrite | frewrite
98% | 201,435.20 (12,910.26) | 197,160.70 (4,216.48) | 196,540.40 (3,818.25) | 409,745.50 (4,134.56) | 183,740.30 (2,396.45) | 174,528.60 (3,947.52)
50% | 127,852.10 (1,010.09) | 124,798.20 (309.14) | 124,091.80 (301.98) | 628,919.00 (10,397.27) | 127,133.10 (229.97) | 125,422.00 (285.08)
2% | 98,201.40 (388.01) | 96,797.60 (277.94) | 95,664.20 (526.36) | 643,606.90 (14,595.09) | 97,800.80 (628.66) | 95,982.70 (715.15)

Table 6 below presents the read throughput in KB/s for the seven read tests with a record length of 8MB at all three data compression levels.

Table 6 – IOzone Read Performance Results with a Record Length of 8MB, a File Size of 16GB, for all three levels of data compression

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Read | Re-read | Random read | Backwards read | Strided read | fread | freread
98% | 160,866.10 (548.27) | 161,000.70 (440.51) | 170,620.80 (3,620.09) | 184,403.80 (3,036.95) | 188,468.50 (435.83) | 161,329.20 (398.43) | 161,882.50 (843.89)
50% | 192,996.90 (412.83) | 196,348.20 (1,631.00) | 213,035.20 (414.83) | 231,573.20 (732.80) | 239,764.00 (343.25) | 193,825.90 (221.58) | 196,757.30 (1,563.09)
2% | 191,435.30 (531.57) | 196,460.60 (3,050.22) | 213,130.30 (734.09) | 234,906.50 (1,173.67) | 242,374.50 (3,024.55) | 192,102.20 (318.33) | 195,679.20 (1,954.18)

Table 7 below presents the write throughput in KB/s for the six write tests at all three data compression levels for a record size of 16MB.

Table 7 – IOzone Write Performance Results with a Record Length of 16MB, a File Size of 16GB, for all three data compression levels

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Write | Re-write | Random write | Record re-write | fwrite | frewrite
98% | 187,376.40 (7,335.08) | 194,595.00 (8,631.12) | 194,275.40 (8,376.24) | 407,065.80 (2,294.84) | 194,750.80 (8,682.39) | 197,457.20 (942.71)
50% | 124,560.80 (975.37) | 122,228.00 (900.37) | 122,249.00 (528.17) | 633,716.50 (10,578.84) | 124,419.70 (1,007.73) | 122,483.20 (1,215.24)
2% | 98,061.10 (599.59) | 96,510.70 (529.34) | 95,636.40 (307.13) | 647,510.20 (6,295.92) | 97,745.10 (789.20) | 95,864.40 (852.17)

Table 8 below presents the read throughput in KB/s for the seven read tests with a record length of 16MB at all three data compression levels.

Table 8 – IOzone Read Performance Results with a Record Length of 16MB, a File Size of 16GB, for all three levels of data compression

All values in KB/s, reported as average (standard deviation) over 10 runs.

Compression Level | Read | Re-read | Random read | Backwards read | Strided read | fread | freread
98% | 159,715.40 (426.48) | 160,472.20 (1,121.55) | 173,767.70 (1,067.26) | 183,805.10 (1,015.83) | 187,475.80 (1,190.98) | 160,328.00 (421.81) | 161,214.00 (667.26)
50% | 191,687.90 (668.67) | 193,903.30 (1,657.78) | 215,117.90 (1,050.15) | 231,211.20 (670.64) | 239,107.40 (1,195.25) | 192,657.80 (459.52) | 195,210.80 (1,744.49)
2% | 191,272.80 (324.34) | 195,085.40 (1,876.13) | 216,352.30 (938.97) | 234,983.00 (1,228.71) | 241,631.10 (958.78) | 192,134.80 (223.96) | 195,617.60 (1,600.22)

Jeff Layton is an Enterprise Technologist for HPC at Dell. He can be found lounging around at a nearby Fry's enjoying the coffee and waiting for sales (but never during working hours).

Comments on "SandForce 1222 SSD Testing, Part 1: Initial Throughput Results"

solanum

Can you rerun the performance tests with a 2.6.37.x kernel? They made a number of changes in the block device layer, and I am wondering how much that impacts the performance. :)

    laytonjb

    Stay tuned! That is my plan for the last part in this series.

    The next part will cover initial IOPS performance. Part 3 will cover a more in-depth throughput study (and comparison to an Intel SSD). Part 4 will do the same more in-depth study and comparison but for IOPS. Then Part 5 will compare the 2.6.32 kernel to the latest kernel (probably 2.6.37 but maybe 2.6.38 if it comes out).

    Jeff

pjwelsh

Ahh, forget 2.6.37! Add the mainline kernel tracker repo with version 2.6.38 (currently) from the GREAT folks at ElRepo.

sdanpo

Excellent article!

I liked the thoroughness of the test and the great data derived from it.
Looking forward to the coming parts.

Disclaimer : The comment is written by an Anobit Employee.
Anobit is an Enterprise SSD vendor with Data-pattern-Agnostic behavior.

storm

I’ve been reading about and testing SSDs for years and am finally leaving my first comment. I’m doing so because none of the benchmarks I’ve read test the Achilles heel of SSDs, which happens to be our production workload.

I would suggest doing a mixed random read/write workload with a 64GB file (the full extent of the drive) with a 4k write size that runs for a long time, e.g., a day, to arrive at steady-state behavior. When I was working with FusionIO’s engineers while beta’ing the IOdrive, they said this was the most torturous workload they had ever seen. They had to make a number of changes to the driver for us as a result. Caches get quickly overwhelmed; wear leveling/grooming quickly gets pinned shuffling blocks around and can lead to huge periodic drops in performance unless they are amortized over time (SSDs are over-provisioned under the hood to help with this); block aligning/elevator algorithms don’t help due to the randomness; the small I/O size kills throughput; and the mixed nature of the r/w IOPS (especially when done in parallel) can cause havoc with the rewrite algorithm. The dirty little secret in the industry is to quote inflated random IOPS performance using a file that is 1/4-1/3 the size of the drive.

Another surprise that we’ve found during testing is how drives perform as you increase the number of parallel read/write threads. With Fusion, for instance, it doesn’t make much of a difference positively or negatively. Virident’s tachIOn drive, however, tripled in performance! We were blown away. FYI this is the best SSD we’ve tested to date.

Ok, that was cathartic :) Thanks for letting me rant.

Thanks for the great article and I look forward to the rest.

detroitgeek

I have been looking at an SSD to put my OS on, and plan on having my home directory on a standard drive. I worry about the lifetime of the SSD under these conditions because of all of the writing the OS does. My /var directory would also be on a standard drive. Is my concern realistic?

eoverton

I had issues with my drive going off-line randomly. It seems it was a BIOS issue, but which BIOS? See http://ssdtechnologyforum.com/threads/835-Sandforce-SSD-Firmware-Version-Confusion. So I upgraded my BIOS from the Adata site, and the drive does not have the issue anymore.

    laytonjb

    Was this the same MicroCenter drive that I tested? I was told it was an Adata drive but I haven’t been able to confirm that.

    What were the symptoms of the drive going off-line? What distro/kernel were you using?

    Thanks!

    Jeff

      eoverton

I was using windose at that time :(. The drive would go off-line and I would get a BSOD, or sometimes it would reboot and halt at “Could not find bootable drive”. I would power off, wait, then power on, and everything was then OK. I confirmed mine was Adata by the small manual the drive came with and by googling. The issue did not look like it was OS related.

venugopalan

This article has given a very nice heads-up on IOPS and SSD controllers.


