As 2007 fades away, I thought I would reflect on some of the HPC events of the last twelve months. Of course there was plenty of news, new products, mergers, acquisitions, and all the other normal stuff one would expect from any market. Having thought about it, though, nothing really stands out in my mind as a big breakthrough or new paradigm-shifting technology.
Of course, when you read the press releases, they all seem to imply that the world will never be the same once you use some latest and greatest gizmo, software, or service.
Before the people who have worked so hard in various corners of the market say bad things about me, let me first say, "Thank you." In my opinion, we are further along than we were last year at this time due to your efforts. I include everyone from the Linux driver writers to the guys taping out multi-core processors. We are moving forward. This past year you could buy more FLOPS (and use less power) for your dollar than ever in the past.
My disappointment is based on my belief that HPC is still hard and will continue to be this way until we figure out how to create cost-effective turn-key software for this market.
The difficulty is due in part to parallel programming and the multitude of ways in which one can choose to express parallel execution in code. Multicore is forcing the issue in the mainstream markets, but there still does not seem to be a single hilltop where one can plant a flag and say, "This is it. We start from here. And our efforts will not be wasted."
Perhaps I suffer from a bad case of wishful thinking. Or just maybe, the lack of killer applications in the HPC market is due to the lack of a clear direction (or two, or even three).
Of course, using MPI (Message Passing Interface), OpenMP, or pthreads is a perfectly viable — and hard — way to create parallel codes. Indeed, the recent launch of our Multicore Cookbook is designed to help programmers get started quickly with multicore projects. I'm looking for something a bit easier, however: a solution that sits above the low-level minutiae of most parallel applications. And, yes, I do have hope.
A recent IDC report covered HPC server market growth in the third quarter of 2007. The most interesting finding was that over the last year or so, non-HPC server growth has been slowing down. If it were not for HPC systems, the entire server market would be shrinking. According to IDC, the HPC market has seen revenue growth of 20% over the last four years.
Revenue from clusters represented 68% of the overall HPC server revenue for the third quarter of 2007. The fastest-growing area is the work group segment for systems priced under $50,000, which is projected to have 11.4% CAGR through 2011.
One of the reasons for such growth is the lower entry prices that make HPC systems affordable for smaller organizations and business units. HPC is headed for the desktop, and there is a huge opportunity for those who make it easy. Microsoft will have a play here, as they seem to own the desktop at this point.
They also seem to recognize that they will have to solve the same problem we all face — parallel programming. Open software has a huge opportunity here as well. Open (source) efforts have proved to be an effective way to focus the best and the brightest on problems that are too costly for a single entity to fund.
The Linux kernel is a perfect example of this kind of corporate cost sharing (without lawyers). And, in a similar vein, because parallel programming is no cheap date, cost sharing seems like a good idea. Something to consider as 2008 unfolds.
In closing, when I think about 2007 a Russian saying comes to mind, “I wish things were better, but I am glad they are not worse.” Enjoy your holidays. We have more work to do next year.
Douglas Eadline is the Senior HPC Editor for Linux Magazine.