
Hazy Computing

Today machines manage what we cannot. Are we dependent upon results or processes we do not understand?

Recently, there was an interesting article in the New York Times that raised some fascinating issues about our reliance on computers, particularly in the world of finance. I touched on this briefly before, and I think the article makes some good points. Basically, the “Wall Street geeks” or “quants” (quantitative analysts) develop sophisticated algorithms (evolutionary or Genetic Algorithms, GAs) that package up securities with all the right attributes to make them attractive to other buyers. The interesting thing is that these algorithms produce results (in a sense, “optimizations”) that people don’t really understand. In the past, I recall reading about an antenna designed using a GA. The result worked great; the design, however, was weird and in a sense ugly. The engineers did not have a full understanding of how it worked (typical antenna design involves solving a set of equations for a given design). Clusters will certainly enable more of this type of computing, and this is where we must tread carefully.
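For readers who have never looked inside one, here is a toy sketch in Python of the evolutionary loop a GA runs. The fitness function, population size, and parameters below are made up for illustration; this is not the quants’ or the antenna engineers’ actual code. The point is that the final “design” falls out of selection, crossover, and mutation rather than out of any equation a human solved, which is exactly why the winning result can be hard to explain.

```python
import random

def fitness(genes):
    # Hypothetical score for a candidate design; real antenna or
    # portfolio fitness functions are far more elaborate.
    return -sum((g - 0.37 * i) ** 2 for i, g in enumerate(genes))

def mutate(genes, rate=0.1):
    # Randomly perturb some of the parameters.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genes]

def crossover(a, b):
    # Splice two parent designs at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, gene_len=8, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(gene_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print("best design found:", [round(g, 2) for g in evolve()])
```

Nothing in that loop records why the winning parameters work well together; the algorithm only knows that they scored well.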


Before we continue, I want to borrow one more thing from the Times article. Futurist Ray Kurzweil proposes this sneaky parlor trick. Given what we know about the progression of computer technology, consider the following statement:

But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. … Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

It sounds a little like science fiction, but I believe we are already in this era. For example, would modern-day banking, health care, or communication, to name a few, be possible without computers? If we turned off the banking computers, mayhem would surely result. We already trust networks of machines that interact in complex ways. At the personal level, my Linux laptop manages the complexity required to view a web site, read email, and write an article, all with simple movements of my fingers.

I recall that some of my first interactions with computers were writing BASIC programs. Initially, I was impressed with the number of digits the computer produced when I asked it to divide two numbers. (This was just before pocket calculators, by the way.) I trusted the result because mathematically it was right and had lots of numbers, but I wondered why it stopped where it did. As I progressed in my education, I found myself using computers as a tool to analyze real data. For instance, I might calculate a standard deviation or do a least squares fit. It was pointed out to me that the amount of precision in my measurements limited the amount of precision in my calculations. This issue, as we all learned (or should have learned), is called significant digits. All those extra digits were not just superfluous, they were wrong. That is, wrong in the sense that knowing this level of precision was not possible. This experience was my first encounter with what I called hazy computing. And many of my early computer programs were just plain hazy, in both a code and a results kind of way.
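To make the significant-digits point concrete, here is a small Python sketch. The measurements and their assumed three-digit precision are hypothetical; the idea is simply that the machine happily prints every digit it stores, while only a rounded value is honestly reportable.

```python
import statistics

# Echoing the BASIC anecdote: divide two three-digit measurements and
# the machine reports many more digits than the data justify.
print(1.23 / 4.56)   # roughly 0.2697368421052632, yet only ~3 digits are real

# Hypothetical lab readings, each good to three significant digits.
measurements = [9.81, 9.79, 9.82, 9.80, 9.78]
stdev = statistics.stdev(measurements)

print("raw stdev     :", stdev)            # prints every stored digit
print("reported stdev:", f"{stdev:.1g}")   # 0.02 is all the data support
```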

Over the years, I have come to categorize hazy computing as that which pushes computers to produce results that have little or no basis for acceptance; that is, the results are mostly worthless and cannot be tested. I recall many a data set smoothed into a curve that fit the theory, or results reported to ten significant digits. In the case of the antenna mentioned above, the result could at least be tested. The result may have been hazy, but it worked. In the case of Wall Street financial instruments, many of the results were beyond the ken of the quants who created them — more like fog. And they could not be tested before they were used. Casual use of anything you don’t understand is always dangerous.

As you, the HPC pioneer, continue to put more and more processing power into the hands of end users, it is important to step back and look at the big picture. Understanding the question is just as important as the result — which we all know is 42 in any case. In my opinion, hazy computing is similar to the GIGO (garbage in, garbage out) mantra. And each year the amount of available GIGO computing power continues to grow at a staggering pace. Today, $2,500 can buy you enough computing power to put you in the top 100 fastest systems of ten years ago (running HPL, of course). The temptation will be to naively apply this cheap horsepower to all types of problems, some of which will produce nonsensical results with no basis for understanding. As the HPC mavens enabling the future, I invite you to keep this in mind and be a skeptic.

And now, the parlor trick. The author of the above quote that seems to get everyone thinking was none other than Ted Kaczynski, a.k.a. the Unabomber. Though crazy, his argument seems to hold some water. In case you don’t know, Kaczynski was responsible for planting or sending 16 bombs, which injured 23 people and killed three. One of his targets who survived was none other than David Gelernter, a developer of the Linda parallel computing language. The parallel version of Gaussian was built on top of Linda. It seems Kaczynski did not really understand his own writing and instead chose to apply a hazy, nonsensical solution to a future he so eloquently described.

Comments on "Hazy Computing"

matador

Almost sounds like I, Robot, where the computer decides it knows best how humans should live.

dmesg

I read the “manifesto” when they published it in the Washington Post. Unfortunately, the man took a major wrong turn with his feet. He could have been a voice for putting the brakes on and thinking before leaping.

I have always said, “just because we can doesn’t mean we should.” Before cell phones and wireless, I was saying these two are in that category of “shouldn’t do.” I’m sure history will bear me out; the brain tumors are starting to proliferate.

Worse, the telcos knew the phones were dangerous, but the way “regulation” works is that if big business wants it, it’s “a-ok” in whatever regulatory realm. Period. There is no public safety at work in any government; it’s all illusion, designed to eliminate the competition for big business.

BTW I tried searching for the “ugly antenna” story. Got a link? I couldn’t find anything other than technical papers.

sddutky

Kaczynski goes on to say:

“On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite – just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals”

See Bill Joy’s take

driordan

“Are we dependent upon results or processes we do not understand?”

. . . and if we are, or will be, what are we supposed to do about it?

There is a theory I’ve read suggesting that civilizations collapse when their complexity reaches a sort of critical mass; when the rules of the society become more of a burden than a boon. Let’s assume for a moment that this theory is accurate. We may be in, or entering, an age wherein such complexity can be handled by artificial agents. Do we check them and succumb to the burden of society’s rules, ending our civilization? Or do we submit ourselves to them and extend the life of our civilization, perhaps until the complexity reaches a point where the machines are overburdened?
