
Digging Into The Top500

Who cares about the FLOPS? The real story is about who and how.

The Top500 is the favorite punching bag of many people in HPC. My beef is not with the Top500; it is with all those who make it out to be something it is not. In my opinion, it is a good historical record of those machines that can run a single benchmark. There are, of course, certain bragging rights for landing on the Top500, and it may help people justify an expensive pile of hardware. I suppose getting your system listed on a web page is a good thing™, but there are those who have real work to get done and can brag about minor things like application throughput, optimization, and utilization.

I have followed the list for the past several years. I think I can sum it up as follows: more nodes, more cores, more InfiniBand, Blue Gene, bigger HPL number. Yawn. This year, however, I find the changes in the Top500 quite interesting. I’m not going to focus on top performance, but rather on some trends in the list that seem to be changing.

There is a new champ. In and of itself this is often not very interesting; however, this time it was different in several ways. First, it was from China. The Tianhe-1A system at the National Supercomputer Center in Tianjin achieved a performance level of 2.57 PFLOPS (i.e., it crunched more numbers than anyone else). China has been marching up the Top500 rankings. This fall they had 41 total systems on the list, which is rather impressive considering that two years earlier just 14 of their systems made the cut. From what I hear, this is just the beginning.

The other notable difference is how they got to the top. Tianhe-1A is the first number-one machine to use GPUs (NVidia). It also used a custom interconnect. Moreover, three of the top systems used NVidia GPUs, while a total of 28 systems on the list used GPU technology (NVidia, AMD, Cell). One other note about GPUs: this year’s Green500 had five GPU-based systems (4 NVidia, 1 AMD). The Green500 ranks systems in MFLOPS/Watt.
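For those curious about the Green500 metric itself, it is nothing more than sustained HPL performance divided by measured power draw. Here is a minimal sketch; the figures are illustrative placeholders, not measurements from any real system:

```python
# Minimal sketch of the Green500 metric: sustained HPL performance per watt.
# The numbers below are made up for illustration only.

def mflops_per_watt(rmax_tflops, power_kw):
    """Convert an HPL Rmax (TFLOPS) and power draw (kW) to MFLOPS/Watt."""
    mflops = rmax_tflops * 1_000_000   # 1 TFLOPS = 10^6 MFLOPS
    watts = power_kw * 1_000           # 1 kW = 10^3 W
    return mflops / watts

if __name__ == "__main__":
    # Hypothetical GPU-accelerated cluster: 500 TFLOPS sustained at 800 kW.
    print(f"{mflops_per_watt(500, 800):.1f} MFLOPS/Watt")   # -> 625.0
```

The point of the metric is that a GPU-heavy system can post a big numerator without a proportionally big denominator, which is why GPU machines show up near the top of the Green500.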

I believe we will see more and more systems using GPUs, at least to get on the list. The GPU trend took a firm hold in 2010, and I expect it to continue in many verticals where “array processors” make sense. As I noted previously, I predict the GPU will migrate into the processor, just like the co-processors of the past. GPUs for HPC are here to stay.

Moving on to other trends. Quad-core processors were used in 73% (365) of the systems, while 19% (95 systems) are already using processors with six or more cores. It is probably safe to say that the typical HPC node now has at least 8 cores (a 2P motherboard with quad-core processors) and that new systems will have at least twelve. Intel dominates the high-end processor market, with 79.6% (398) of all systems using Intel processors, although this is slightly down from six months ago (406 systems, 81.2%). AMD Opterons found their way into 57 systems (11.4%), up from 47 on the June list; AMD notes that 24 of the top 50 systems used its processors. IBM Power processor use is slowly declining and now stands at 40 systems (8.0%), down from 42 previously.

Gigabit Ethernet (GigE) is still the most-used system interconnect technology (227 systems, down from 244 systems), due to its widespread use by industrial customers. The fact that GigE is “free” on the motherboard probably has something to do with this number. There were only seven systems sporting 10-GigE. As 10-GigE costs come down, it will be interesting to see whether it gains on InfiniBand (IB) in the list. Speaking of IB, 214 systems used InfiniBand on this list, up from 205 systems in the previous list. Interestingly, InfiniBand-based systems account for more than twice as much aggregate performance (20.4 PFLOPS) as Gigabit Ethernet ones (8.7 PFLOPS).
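To put those aggregates in per-system terms, here is a quick back-of-the-envelope sketch using only the figures quoted above (keep in mind the averages are crude, since a handful of very large machines skew both pools):

```python
# Back-of-the-envelope comparison using the list figures quoted above.
# Averages are rough: a few very large systems dominate both totals.

interconnects = {
    #                   (systems, aggregate PFLOPS)
    "InfiniBand":       (214, 20.4),
    "Gigabit Ethernet": (227, 8.7),
}

for name, (systems, pflops) in interconnects.items():
    avg_tflops = pflops * 1000 / systems   # 1 PFLOPS = 1000 TFLOPS
    print(f"{name}: {avg_tflops:.1f} TFLOPS per system on average")

# InfiniBand: 95.3 TFLOPS per system on average
# Gigabit Ethernet: 38.3 TFLOPS per system on average
```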

In terms of operating systems, Linux continues to dominate. By my count, it was used on 449 systems. That is close to 90% of the list and probably the most dominant trend; I don’t expect this to change in the near future. Windows HPC Server 2008 was reported on five systems. Most notably, the Magic Cube cluster at the Shanghai Supercomputer Center in China runs Windows. That system uses at least 2K nodes (possibly 4P motherboards). I think that is the largest Windows cluster I have seen to date. I’m not sure what advantage Windows brings to the party. Once you see a disk-less node boot in 20 seconds, you get the power of “open plumbing.”

In terms of application area, the leading sectors were “Not Specified” (34%), Research (16.4%), Finance (8.6%), Information Service (7.0%), and Geophysics (3.8%). Remember, not all systems appear on the Top500, and many of those that do list no application area in any case. There were even four clusters in the WWW category.

Finally, a question for the audience. There were 291 systems that had between 4K and 8K processors. That is a lot of processors, and I am always curious how many of them are used at the same time by a single job. Surveys indicate that many applications don’t scale to more than 32 cores. Even the “big codes” may not exceed a thousand processors. The heroic codes can use thousands, but these applications are not very common. There were 9 systems with over 128K processors.
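One way to see why so many codes stall far below the size of these machines is Amdahl’s law. The sketch below is my illustration, not part of any survey data, and it assumes a purely hypothetical 2% serial fraction; even that small amount caps the speedup at 50x no matter how many cores you throw at the job:

```python
# Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the serial fraction.
# Illustrative only; real codes are also limited by communication and I/O.

def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    serial = 0.02  # hypothetical 2% serial fraction
    for cores in (32, 1_000, 128_000):
        s = amdahl_speedup(serial, cores)
        print(f"{cores:>7} cores -> speedup {s:5.1f}x, "
              f"efficiency {100 * s / cores:.2f}%")
    # Output:
    #      32 cores -> speedup  19.8x, efficiency 61.73%
    #    1000 cores -> speedup  47.7x, efficiency 4.77%
    #  128000 cores -> speedup  50.0x, efficiency 0.04%
```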

The question I always like to ask is: “Who uses 128K processors at once?” I usually don’t see that many hands go up in the air for that question. “How about 1,000, or 100, or 50?” More hands start to go up. “Thank you. And now can you explain to me why you care about running Linpack on 128K processors?”

I never hear a good answer to that question.
