
Hazy Computing

Today machines manage what we cannot. Are we dependent upon results or processes we do not understand?

Recently, there was an interesting article in the New York Times that raised some fascinating issues about our reliance on computers, particularly in the world of finance. I have touched on this briefly before, and I think the article makes some good points. Basically, the “Wall Street geeks” or “quants” (quantitative analysts) develop sophisticated algorithms (evolutionary or Genetic Algorithms, GAs) that package up securities with all the right attributes to make them attractive to other buyers. The interesting thing is that these algorithms produce results (in a sense, “optimizations”) that people don’t really understand. I recall reading about an antenna designed using a GA. The result worked great, but the design was weird and, in a sense, ugly. The engineers did not have a full understanding of how it worked (typical antenna design, by contrast, involves solving a set of equations for a given design). Clusters will certainly enable more of this type of computing, and this is where we must tread carefully.
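
To make the idea concrete, here is a minimal genetic-algorithm sketch in Python. It is purely illustrative, not the antenna or securities code discussed above; the target value, population size, and fitness function are my own toy assumptions. Even in this tiny example, the winning individual emerges from random mutation and selection rather than from a derivation anyone can inspect.

import random

# Toy genetic algorithm: evolve a list of numbers whose sum approaches a
# target. All parameters here are arbitrary choices for illustration.
TARGET = 42.0
POP_SIZE = 50
GENES = 8
GENERATIONS = 200

def fitness(individual):
    # Lower is better: distance of the gene sum from the target.
    return abs(sum(individual) - TARGET)

def mutate(individual, rate=0.1):
    # Nudge a few genes with Gaussian noise.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in individual]

def crossover(a, b):
    # Single-point crossover between two parents.
    cut = random.randint(1, GENES - 1)
    return a[:cut] + b[cut:]

population = [[random.uniform(-10, 10) for _ in range(GENES)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness)
    parents = population[:POP_SIZE // 2]      # keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best individual:", [round(g, 3) for g in best])
print("fitness:", round(fitness(best), 6))

The evolved list of genes “works” in the sense that its sum lands close to the target, but nothing in it explains why those particular values were chosen. That is the unsettling property the antenna engineers and the quants share.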


Before we continue, I want to borrow one more thing from the Times article. Futurist Ray Kurzweil proposes this sneaky parlor trick. Given what we know about the progression of computer technology, consider the following statement:

But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. … Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

Sounds a little like science fiction, but I believe we are already in this era. For example, would modern-day banking, health care, or communication, to name a few, be possible without computers? If we turned off the banking computers, mayhem would surely result. We already trust networks of machines that interact in complex ways. At the personal level, my Linux laptop manages the complexity required to view a web site, read email, and write an article, all with simple movements of my fingers.

I recall that some of my first interactions with computers involved writing BASIC programs. Initially, I was impressed with the number of digits the computer produced when I asked it to divide two numbers. (This was just before pocket calculators, by the way.) I trusted the result because mathematically it was right and had lots of numbers, but I wondered why it stopped where it did. As I progressed in my education, I found myself using computers as a tool to analyze real data. For instance, I might calculate a standard deviation or perform a least squares fit. It was pointed out to me that the amount of precision in my measurements limited the amount of precision in my calculations. This issue, as we all learned (or should have learned), is called significant digits. All those extra digits were not just superfluous, they were wrong. That is, wrong in the sense that knowing this level of precision was not possible. This experience was my first encounter with what I call hazy computing. And, many of my early computer programs were just plain hazy in both a code and a results kind of way.
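
A small Python example, using my own made-up measurements rather than anything from those early BASIC programs, shows the point. Divide two quantities measured to three significant figures and the computer happily prints more than a dozen digits, only three of which mean anything.

from math import floor, log10

def round_sig(x, sig=3):
    # Round x to the given number of significant figures.
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

distance = 12.3   # metres, measured to 3 significant figures
elapsed = 7.91    # seconds, measured to 3 significant figures

speed = distance / elapsed
print(speed)                 # prints roughly 1.5549936789..., mostly hazy digits
print(round_sig(speed, 3))   # 1.55, all the precision the data supports

The arithmetic is correct in every digit; the haze comes from pretending the measurements had precision they never did.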

Over the years, I have come to define hazy computing as that which pushes computers to produce results that have little or no basis for acceptance; that is, the results are mostly worthless and cannot be tested. I recall many a data set smoothed into a curve that fit the theory, or results reported to ten significant digits. In the case of the antenna mentioned above, the result could at least be tested. The result may have been hazy, but it worked. In the case of Wall Street financial instruments, many of the results were beyond the ken of the quants who created them; more like fog than haze. And they could not be tested before they were used. Casual use of anything you don’t understand is always dangerous.

As you, the HPC pioneer, continue to put more and more processing power into the hands of end users, it is important to step back and look at the big picture. Understanding the question is just as important as the result, which we all know is 42 in any case. In my opinion, hazy computing is similar to the GIGO (garbage in, garbage out) mantra. And each year the amount of available GIGO computing power continues to grow at a staggering pace. Today, $2500 can buy you enough computing power to place among the 100 fastest systems of ten years ago (running HPL, of course). The temptation will be to naively apply this cheap horsepower to all types of problems, some of which will produce nonsensical results with no basis for understanding. As the HPC mavens enabling the future, I invite you to keep this in mind and be a skeptic.

And now, the parlor trick. The author of the above quote, the one that seems to get everyone thinking, was none other than Ted Kaczynski, a.k.a. the Unabomber. Though crazy, his argument seems to hold some water. In case you don’t know, Kaczynski was responsible for planting or sending 16 bombs, which injured 23 people and killed three. One of his targets who survived was none other than David Gelernter, a developer of the Linda parallel computing language. The parallel version of Gaussian was built on top of Linda. It seems Kaczynski did not really understand his own writing and instead chose to apply a hazy, nonsensical solution to a future he so eloquently described.
