The SuperComputing Conference is THE international conference and expo for all things HPC (High Performance Computing). The astute attendee could see that storage was a big part of this year's show. Two major storage trends stood out: really fast storage and really dense storage.
Arguably one of the fastest tier-0 storage devices at SC10 was from Kove. In the Mellanox booth they showed their xpress disk (abbreviated XPD), a DRAM-based storage unit in a 4U chassis. It has a capacity of up to 1 TB and can use either Fibre Channel cards (2, 4, or 8 Gbps) or InfiniBand cards (up to QDR) for connectivity. Kove says that it can hit a throughput of 21.5 GB/s, and at SC10 they announced that they achieved 20 GB/s using six Mellanox ConnectX-2 IB cards to an IB fabric with 11 compute nodes. Kove also says that they can reach 600,000 read IOPS and 500,000 write IOPS continuously (i.e. not burst rates).
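Those numbers can be sanity-checked with a little back-of-the-envelope arithmetic. A quick sketch, assuming QDR InfiniBand's 40 Gb/s signaling rate leaves roughly 32 Gb/s (4 GB/s) of usable data per link after 8b/10b encoding:

```python
# Back-of-the-envelope check of the Kove SC10 demo numbers.
# Assumption: QDR InfiniBand signals at 40 Gb/s, which after 8b/10b
# encoding leaves about 32 Gb/s, i.e. 4 GB/s of usable data per link.
GB_PER_S_PER_QDR_LINK = 32 / 8   # 4 GB/s usable per QDR link
num_links = 6                    # six ConnectX-2 cards in the demo

aggregate_peak = GB_PER_S_PER_QDR_LINK * num_links  # theoretical ceiling
achieved = 20.0                  # GB/s reported at SC10
efficiency = achieved / aggregate_peak

print(f"theoretical peak: {aggregate_peak:.0f} GB/s")   # 24 GB/s
print(f"link efficiency:  {efficiency:.0%}")            # 83%
```

In other words, the demo drove the six links at roughly 83% of their usable bandwidth, which is a very respectable number for a real fabric.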
From the variety of vendors and technologies for tier-0 storage, one can easily see that market demand for this technology is growing. However, no solution is perfect, and these new technologies bring problems of their own.
There are Problems Right Here in River City
With both storage trends, high-density storage and tier-0 storage, not everything is ideal. There are some issues with deploying these technologies, and there are also some potential issues that these technologies cause. Dense storage devices have several issues that affect customers now, but they also face an upcoming issue due to regulatory changes originating in the European Union (EU).
The first issue facing dense storage units is the physical size and weight of the units. To increase the storage density, manufacturers have had to increase the length of the chassis. As you can see in Table 1, some of the units are almost 100 cm (1,000 mm) long, but typical data center racks are only 900 mm deep. Consequently, you have to use much deeper racks, typically 1,200 mm, to house these storage units. This may not be a problem when the storage units are bought with racks, but where existing racks are to be reused, many of these dense storage units cannot be used.
The second issue is the weight of these units when loaded. Imagine trying to remove a storage unit that weighs close to 240 lbs. from the top of the rack. The only way to do it is to pull all of the drives and then use a lift of some sort to extract the unit from the rack. You also have to worry about sliding out a storage chassis at the top of the rack to service a drive: without a well-anchored rack, you could easily tip over the entire rack.
The third issue that dense storage technologies are facing is changing power and cooling requirements. Recently, the European Union (EU) Code of Conduct added requirements for chillerless operation of data center equipment. These requirements are based on the ETSI EN 300 019-1-3 Class 3.1 thermal envelope (see Item 4.1.3 on page 13), with the following ranges:
Table 3 – EU Chillerless requirements
  Baseline range: 10° to 35° C
  Middle range:    5° to 40° C
  Top range:      -5° to 45° C
As part of the standard, data center equipment should be able to operate in the middle range (5° to 40° C) all of the time and in the top range (-5° to 45° C) approximately 1% of the time. Currently, these requirements are not part of individual EU countries' regulations. However, it has been recommended that EU government contracts for data center hardware use these new chillerless standards in 2011, and it is expected that by 2012 or 2013 all data center hardware contracts in the EU will require adherence to them. Moreover, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) is looking to adopt similar requirements and is likely to simply adopt the EU Code of Conduct.
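To put that "approximately 1% of the time" figure in perspective, a quick calculation shows how much of a year the top range actually covers:

```python
# How much time "approximately 1% of the time" is over a year of
# continuous data center operation.
HOURS_PER_YEAR = 365 * 24            # 8,760 hours
top_range_hours = 0.01 * HOURS_PER_YEAR

print(f"{top_range_hours:.1f} hours/year")        # 87.6 hours
print(f"{top_range_hours / 24:.1f} days/year")    # about 3.7 days
```

So equipment must tolerate roughly 88 hours per year, a few hot days' worth, at up to 45° C.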
Currently, most, if not all, of the dense storage devices cannot meet these new requirements. Consequently, it is anticipated that these units may have to be redesigned, possibly making them longer (allowing for more cooling air), which impacts the size of the rack, or they may have to have their density decreased. Keep an eye out for changes to high density storage devices in 2011.
The second technology discussed in this article, tier-0 storage devices, has been around for some time, but only now is it starting to see widespread usage, driven by the massive increases in data generation in HPC. However, it is not as easy as slapping a device into a rack and calling it a day. There are other factors that must be considered when deploying these devices.
Your first consideration should be how you get the storage performance from the device to the client. In other words, how is the tier-0 device connected to the network? A simple GigE connection is definitely out of the picture, and even a single InfiniBand connection won't cut it. As demonstrated by the Kove storage device, you may need multiple InfiniBand connections to fully utilize the throughput of the underlying storage device.
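As a rough sizing sketch, here is how many links it would take to carry the XPD's quoted 21.5 GB/s. The per-link data rates are assumptions (~125 MB/s usable for GigE, ~800 MB/s for 8 Gbps Fibre Channel, ~4 GB/s for a QDR InfiniBand link):

```python
import math

# Hypothetical sizing exercise: how many network links does it take
# to carry a tier-0 device's full throughput onto the fabric?
device_throughput = 21.5  # GB/s, Kove's quoted XPD throughput

# Assumed usable data rates per link, in GB/s (approximations).
link_rates = {
    "GigE":   0.125,  # 1 Gb/s line rate
    "8G FC":  0.8,    # 8 Gb/s Fibre Channel, ~800 MB/s usable
    "QDR IB": 4.0,    # 40 Gb/s signaling, ~32 Gb/s after 8b/10b
}

for name, rate in link_rates.items():
    links = math.ceil(device_throughput / rate)
    print(f"{name:7s}: {links} link(s)")
# GigE   : 172 link(s)
# 8G FC  : 27 link(s)
# QDR IB : 6 link(s)
```

The QDR result (six links) lines up neatly with the six ConnectX-2 cards Kove used in its SC10 demo, and it shows why a single GigE port is a non-starter.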
Because tier-0 storage devices are so fast, you need a correspondingly fast file system (no more ext3). Compounding the selection of a file system is the fact that tier-0 storage is typically used for solving very difficult I/O problems with potentially terrible (i.e. non-file-system-“friendly”) I/O patterns. For example, you might have an application that reads and writes millions of small files in a single directory and runs an “ls” command as part of the application to determine when the files have fully reached the storage media. Even with fast storage devices, a bad choice of file system can turn that expensive ramdisk-based storage device into a 5,400 rpm USB SATA drive. So you need to understand your application’s I/O pattern and the resulting requirements, and select a file system that best meets them.
The SuperComputing conference/expo is a great place to learn about the cutting edge of storage technology. You get to see some of the fastest, most scalable, and most dense storage solutions on the market today. Understanding the underlying technologies is important since, in some cases, these technologies can become mainstream in just a few years.
At SC10, two storage technologies, very dense storage devices and very fast storage devices, were the biggest trends, at least in this author’s mind. Hopefully this article has pointed out some solutions based on these technologies, as well as some of the limitations of solutions built on them.