Resilience On Wall Street

A journey into the heart of Wall Street HPC. It is still ticking.

As they say, timing is everything. This past Monday (Sept 22), I attended the 5th annual High Performance on Wall Street Conference and Show. The event is a one-day affair in downtown Manhattan. I usually attend this show each year and have noticed it grow in the number of visitors and vendors. This year, of course, it was a bit off. In case you have been living under a rock, the week before was one of the most traumatic in Wall Street history. And, like most, I believe we are nowhere near the end of a long dark tunnel. From my casual conversations and estimates, there seemed to be more vendors than attendees at times. Thankfully, the show was not deserted as some may have predicted, and there was still plenty of activity.

Before I talk about a few show highlights, I wanted to point something out. As mentioned, the event is called “High Performance on Wall Street”. A good question to ask is “And, just what does Wall Street do with high performance computing?” In a word, they calculate risk. Yikes, you mean with all those clusters and they did not see this coming? Sure they did, everyone did, they knew the “what”, but not “when.” And, as I hope some will learn, calculating risk is not the same as ignoring risk.

Much of the Wall Street woe is attributed to sub-prime mortgages. A polite way of saying high-risk loans. Clusters and HPC have nothing to do with these loans. There is no need. Common sense and good guidelines are all you need here. Once the mortgages are granted, however, the Wall Street movers and shakers package them up as “investment instruments” and try to sell them to the other movers and shakers on Wall Street and around the world. Since these “instruments” are derived from other investments, they are often called derivatives. As a side note, the word “derivative” in the financial community is used to describe these instruments and is different from a mathematical derivative found in calculus. That said, mathematical derivatives are used to determine the price and risk of financial derivative instruments. Aren’t you glad I cleared that up?

If you are a mover and shaker and you want to sell/buy a derivative, then you need to know how much to charge/pay. Enter clusters. Calculating the price and risk of derivative investments is hard to do exactly because, as we all know, there is a non-trivial amount of randomness in financial markets. To solve these and other equations, a Monte Carlo approach is often used. Basically, because a closed-form solution is not possible, a large number of scenarios are calculated for each derivative and eventually an average value will emerge. In other words, it is all a guess.
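To make the scenario-averaging idea concrete, here is a minimal Monte Carlo sketch in Python. It prices a simple European call option by averaging the payoff over many random scenarios of the underlying asset. The model (geometric Brownian motion), the parameter values, and the function name are all my illustrative assumptions; the models actually running on Wall Street clusters are far more elaborate.

```python
import math
import random

def mc_call_price(s0, strike, rate, sigma, t, n_paths, seed=42):
    """Estimate a European call option price by Monte Carlo,
    assuming geometric Brownian motion for the asset (a toy model)."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * sigma ** 2) * t   # risk-neutral drift over [0, t]
    vol = sigma * math.sqrt(t)              # volatility scaled to horizon t
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                 # one random market scenario
        s_t = s0 * math.exp(drift + vol * z)    # terminal asset price
        total += max(s_t - strike, 0.0)         # call option payoff
    # discount the average payoff back to today
    return math.exp(-rate * t) * total / n_paths

# With these parameters the estimate should hover near the
# closed-form Black-Scholes value of about 10.45.
price = mc_call_price(s0=100, strike=100, rate=0.05,
                      sigma=0.2, t=1.0, n_paths=200_000)
print(round(price, 2))
```

Note that each scenario is independent of the others, which is exactly why this workload spreads so naturally across cluster nodes.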

Let’s recap. Write high-risk loans. Bundle together a bunch of high-risk loans. Run a bunch of computer models to get the price and level of risk you want. Feel good about your product. Buy or sell the investment instrument. Profit. Repeat. What could possibly go wrong? How easily we forget the golden rule of computers: GIGO. (In case you have been living under two rocks, that is Garbage In, Garbage Out.) And, lest we forget Doug’s Mover and Shaker rule: Move over here because I want to shake you and ask what were you thinking?

There is an old lesson here. Asking the right question is as important as the answer. Clusters will continue to be extremely useful in finance. In the case of packaging bad loans, they did what they were told.

Let’s get back to the show. Aside from current market events, the show was very upbeat. Vendors introduced new products and there was plenty of discussion. Perhaps the biggest news was the introduction of Microsoft Windows HPC Server 2008. The new version offers improved productivity for system administrators (i.e., it is easy to install and run). In terms of performance, this version now includes NetworkDirect, a new user-space RDMA (Remote Direct Memory Access) interface for high-speed, low-latency interconnects. There are other new and improved features as well. Head over to Microsoft’s HPC site to find out all the Redmond HPC goodness.

Before I move on, I also wanted to call your attention to one of Microsoft’s efforts. Like me, I assume there are people who toss and turn over the whole multi-core, cluster, parallel programming thing. Hey, at least I’m not alone. It seems Microsoft is moving on a front that I totally support: functional programming. If you recall, I am a big advocate of this approach as a means of abstracting parallel minutiae from the programmer. Microsoft has been developing the F# language for just this purpose. Take a look if you are curious.
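The core appeal of the functional style can be sketched even outside F#. Below is a hypothetical Python illustration (my own example, not F# and not any Microsoft API) of the map/reduce pattern: because the scenario function is pure, with no shared state, every call could run on a different core or node without changing the logic.

```python
from functools import reduce

def scenario_value(seed):
    """A pure function: output depends only on the input seed.
    Stand-in for one simulation scenario (a hypothetical toy payoff)."""
    return (seed * 2654435761 % 1000) / 1000.0

seeds = range(10_000)

# Serial today, but swapping map() for a parallel map (e.g.
# multiprocessing.Pool.map) would spread the work across cores
# without touching scenario_value() itself.
values = map(scenario_value, seeds)
average = reduce(lambda a, b: a + b, values) / 10_000
print(average)
```

That separation, where pure computation is kept apart from the machinery that distributes it, is exactly the parallel minutiae-hiding I keep advocating.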

The other big introduction was from Cray. In a move that may portend things to come, Cray has introduced a small desk-side cluster called the CX1 Supercomputer. It is a small-footprint bladed cluster chassis that can handle up to eight blades with various functionality. For instance, you can use a combination of Intel Xeon quad-core compute blades, storage blades, or visualization/GPU compute blades. Units can be combined as well. Gigabit Ethernet and InfiniBand are the available interconnects. The whole works fits in a nicely engineered and attractive case (with wheels!). It supports both Red Hat RHEL5 and Windows HPC Server 2008.

IBM was there showing products on several fronts. I have had an interest in green products of late, which brought me to the iDataPlex full-size poster. This full-rack system has been designed to be green from the ground up. While there was no room for a full system, there were blades available for inspection. I also noticed that the blades used an ASUS motherboard and not a special blade board, which should help keep the cost down. I was also told that if you can run chilled water to the cabinet door, you can place these chassis almost anywhere as the cooling can be self-contained. I’ll have more on this next month when I take another field trip into New York City. There was also a representative from Moab at the IBM booth demonstrating green computing through intelligent scheduling. That is, the scheduler can place nodes in a sleep mode or even power them off if they are not being used. They also support temperature-based workload scheduling.

There was plenty more to see, but I’m out of space for this week. If you were interested in any of the talks/panels, the High Performance on Wall Street team has posted slides on the site in past years, and hopefully they will do the same this year. In closing, it seems Wall Street took a few on the chin over the last few weeks. From the mood of the show, it seems some players are down, but not everybody is out. High performance is still very much part of the game.

Comments on "Resilience On Wall Street"


I think that the correct questions were ignored intentionally, not just left out of the equation. There was money to be made. What was not seen were the losses.

Many regulations in place since the last depression were removed. Intentionally, with malice aforethought.

Can’t blame your computational models/equipment for ignoring risk. It’s called greed.
