You Are Not Supposed To Do That

Is the HPC market big enough to drive new products and processors?

Recently I read an interesting article by Michael Feldman over at HPC Wire. The article discusses Intel’s view on GP-GPU (aka NVidia and AMD/ATI) computing and how it does not plan to use the upcoming Larrabee as an HPC platform. Well worth a read, as I believe this year is going to be the bellwether year for GP-GPU computing.

As I read the article two thoughts floated into my mind (there is plenty of free space up there). First, the role of commodity or off-the-shelf products in HPC and second, HPC and future chip designs. Let’s look at the commodity play first.

Before I discuss commodity HPC, let me explain what I mean by commodity. The definition is a little tricky to nail down. In terms of HPC, I like to think of it as any product that is widely available and whose largest market is not HPC. Let’s consider the HPC server. Eight out of ten servers are sold for non-HPC uses (web servers mostly). The price of the two HPC servers benefits from the large demand in other market sectors. There is of course demand in the HPC area. As a matter of fact, until recently the demand has been quite high.

There is also the assumption that commodity means “available from multiple sources.” In practice, HPC solutions tend to get tied to the specifics of one technology, but the possibility of substituting a competing technology often exists. For instance, I can optimize for an Intel quad-core node and use this as my preferred design; should I need to switch to AMD, the change would take some work, but not be as involved as moving to PowerPC or SPARC. The same goes for networking and storage. There is always the possibility of substituting another commodity vendor for your current one. Nothing like keeping vendors honest.

In commodity HPC the modus operandi seems to be, “Let’s take something that is mass produced and therefore cheap, and see if we can use it for HPC.” In addition to commodity servers, there are things like Ethernet and storage sub-systems. One could argue that InfiniBand is a commodity interconnect as well. Something like Myrinet, on the other hand, seems to be more HPC focused, unless of course you are using Myrinet as a 10 GigE device. Note: I actually consider it a great thing that there are HPC focused companies.

Back in the day, before there was an HPC cluster market, some enterprising people tried using commodity hardware for HPC. They used what they could find “out there,” and for certain applications things worked quite well. Fast Ethernet was entering the market at a time when x86 CPUs were showing respectable floating point performance. Although you could swap out various components, some worked better than others (e.g., the performance of Ethernet NICs could vary significantly). The plan, however, was the same: use what we could find on the shelf and avoid the cost of special hardware. Of course, in cases where they were needed, high performance networks were worth the extra cost. Myrinet was a good example of the “interconnect value add” proposition.

While all this re-purposing was going on, the efforts were largely ignored by the big vendors and in some cases considered a “hobbyist fad.” From a marketing perspective, the commodity HPC market was created by a bunch of people tinkering with technology they were not supposed to be using. They were supposed to be buying big shiny supercomputers for HPC. Good thing they did not listen.

Jump ahead to the GP-GPU revolution. Like the original commodity HPC efforts, the GP-GPU efforts came from people using products in a way they were not “supposed to use them.” Some of the manufacturers of these products (NVidia and AMD/ATI) have recognized the HPC potential and are selling/packaging them as HPC products. I doubt they could support the market on HPC alone, as much of the development effort is directed toward the commodity video card market. I would not dismiss these efforts to use commodity video hardware for HPC. Indeed, unlike the old days, companies are now supporting these efforts with products and software tools. It is refreshing to hear “Take a tool-set and have at it,” rather than “That hardware was not designed for HPC, so we are not going to stop the BIOS from asking you to press F1 if your keyboard is not present.”

The second issue that crossed my mind was the role HPC now plays in the development of new computer products. There are now many HPC focused companies that sell both hardware and software. These companies did not exist in the early days. The smaller companies seemed to grow based on the needs of the market. For instance, there is a need for batch processing systems on clusters. Existing companies adapted, and others were formed around open source projects to fill the need. In the case of processors, HPC needs to be addressed at the design stage. I question whether the HPC market has enough sway to “get what it needs” from AMD and Intel, as these companies need to be looking at the bigger market. This situation is not unlike that of NVidia and AMD/ATI, who have to balance HPC needs with larger market needs.

So where am I going with all this? It seems to me that HPC is still a small fish in a big pond. There are companies making (maybe eking out) a living in the HPC space alone, but most bigger companies have much larger primary and even secondary markets. As I mentioned, the commodity HPC model seems to be “find what works and run with it.” I think those companies that understand and support this model, while addressing the bigger markets, will be well rewarded.

Comments on "You Are Not Supposed To Do That"


In order for HPC to fully take advantage of commodity products, some cherished HPC traditions might need to be reconsidered. For example, not only Fortran, but MPI and OpenMP as well.

I work for a commercial vendor of a developer product specifically focused on high performance data intensive analytic application development.

We exhibited at SC’08 and had fun, but essentially, we were met with disbelief when the attendees realized that we were proposing that they use Java on 32-core SMP boxes. Java? SMP? Heresy!



As noted in http://www.linux-mag.com/id/2292, MPI and OpenMP have their limitations and should be replaced by something better.

Maybe I am being conservative here, but Java has some limitations to overcome before it is accepted in HPC:

There is a lot of disbelief because Java is an interpreted language. I know that most of the time JITs can achieve good performance, but they can’t do heavy optimizations. Perhaps if someone made a vectorizing JIT, people would take Java more seriously.

Sometimes explicit memory management can have a huge impact on performance.
In this paper, the authors show that a garbage-collected program runs as fast as one with explicitly managed memory only if the machine has three times more memory [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=]. Most people would think “ok, just buy more memory,” but HPC people think “Given a cluster with X GB of memory, what is the speedup if I use explicitly managed memory?”



