Smart Clusters: Intelligence Is As Intelligence Does

Is there a place for Artificial Intelligence on your cluster?

The following topic scares me for two reasons. First, maybe I read too many sci-fi novels about Artificial Intelligence (AI) going wrong (or right; we’ll get to that in a bit). Second, most HPC people are pragmatic individuals who deal with numbers and results that have a firm mathematical underpinning. Talking about AI as an HPC application is not quite a mainstream discussion.

This week I will discuss some concerns and hopefully convince myself that Skynet is only in the movies. I’ll also step out on a limb and discuss what may become a big application area for clusters. As a boy I was enamored of HAL 9000 in the movie 2001: A Space Odyssey; over the years, past rounds of AI hype have tempered my attitude into something more pragmatic. We were supposed to have “it” all figured out by now.

One of the missing pieces of AI has always been a good model or definition of intelligence. AI covers many areas of “intelligence,” such as machine learning, optimization, natural language, planning, object recognition, etc. These areas are often called “Weak AI,” or as I call it, “helpful intelligence,” and they exist in many applications used today.

Take, for example, Vim, the enhanced Vi editor. Vim seems to “know” the difference between a bash script and a C program and automatically highlights based on the syntax. It even helps with things like unclosed quotes, comments, and braces. If I were to show this to the original authors of Vi, they would probably say something like, “You made Vi smarter.” Some might say Vim has AI that allows it to understand “context.” Of course, Vim can be fooled, but the point is that once AI works it seems to get forgotten or taken for granted. A compiler is another example. Today’s compilers try to optimize code by figuring out “what this program is trying to do.” As a user, I don’t care how you do it, just make my code run faster. Optical character recognition is yet another example. Others may scoff and say, “it is just some helpful code,” but I invite a closer look at how much of computing is really parsing (understanding input) so that the computation is correct. If we can intelligently filter out the garbage going in, then there is less garbage coming out.
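To make the idea concrete, here is a minimal sketch (in Python) of the kind of “context” detection Vim performs when it picks a syntax mode. The heuristics and the `guess_filetype` function are purely my own illustration; Vim’s real filetype detection is far more thorough:

```python
#!/usr/bin/env python3
# Toy version of the "context" detection Vim performs when choosing a
# syntax-highlighting mode. The heuristics here are purely illustrative;
# Vim's real filetype detection is far more thorough.

def guess_filetype(text):
    """Guess whether text looks like a bash script or a C program."""
    lines = text.splitlines()
    first = lines[0] if lines else ""
    if first.startswith("#!") and "sh" in first:
        return "bash"            # a shebang line is a strong hint
    if "#include" in text or "int main" in text:
        return "c"               # typical C boilerplate
    return "unknown"             # refuse to guess rather than mislead

print(guess_filetype("#!/bin/bash\necho hello"))                        # bash
print(guess_filetype("#include <stdio.h>\nint main(void){return 0;}"))  # c
```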

There are other examples of somewhat “stronger” AI behind many things we touch today. In finance, there is automated trading (not sure how intelligent that is, however) and things like fraud detection. There are diagnostic systems in medicine and even in toys. One famous example is the thankfully short-lived and annoying Furby. Closer to HPC, Inductive Logic Programming is a very effective way to predict gene function.

Recently, there was an article in the New York Times about the NELL project and its attempt to create a computer system that learns over time to read and understand the web. Since January 2010, NELL (Never-Ending Language Learner) has been running continuously. NELL first attempts to “read,” or extract facts from, text found in hundreds of millions of web pages, then it attempts to improve its reading competence based on what it has learned, so that tomorrow it can extract more accurate facts from the web. As of October 2010, NELL has acquired a knowledge base of nearly 440,000 beliefs. The project uses “a supercomputing cluster” provided by Yahoo. Indeed, such a project would not be feasible without a cluster.
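To give a feel for the idea, here is a toy sketch of the bootstrapping loop behind a NELL-style reader: start with seed beliefs, induce textual patterns from sentences that mention them, then apply the patterns to propose new beliefs. The corpus, seeds, and pattern form are all hypothetical and vastly simplified; the real system couples many learning methods over hundreds of millions of pages:

```python
import re

# Toy NELL-style bootstrapping: seed facts -> induced patterns -> new facts.
# The corpus and the (city, country) seed belief below are made up.

corpus = [
    "Paris is a city in France.",
    "Tokyo is a city in Japan.",
    "Lyon is a city in France.",
]
seeds = {("Paris", "France")}  # known (city, country) beliefs

def induce_patterns(sentences, facts):
    """Turn each sentence containing a known fact into a template."""
    patterns = set()
    for s in sentences:
        for city, country in facts:
            if city in s and country in s:
                patterns.add(s.replace(city, "{X}").replace(country, "{Y}"))
    return patterns

def extract(sentences, patterns):
    """Match templates against the corpus to propose new facts."""
    found = set()
    for p in patterns:
        regex = re.escape(p).replace(r"\{X\}", r"(\w+)").replace(r"\{Y\}", r"(\w+)")
        for s in sentences:
            m = re.fullmatch(regex, s)
            if m:
                found.add(m.groups())
    return found

patterns = induce_patterns(corpus, seeds)
beliefs = seeds | extract(corpus, patterns)
print(beliefs)  # now also holds ('Tokyo', 'Japan') and ('Lyon', 'France')
```

Each pass grows the belief set, and the new beliefs can induce new patterns on the next pass; that feedback loop is also exactly where mistakes can compound, which is why NELL needs corrective guidance.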

There are other efforts to collect and build knowledge bases, such as OpenCyc, and tools for building general cognitive architectures, like Soar and OpenCog, but NELL is an AI-HPC system learning from the web. That might be a scary prospect in and of itself. This is going to become more common, and clusters will make it happen. I am sure there are teams of researchers at Google who want to learn more from the web. They already have the data and the tools to search and analyze large portions of the web. At some point, when your website gets crawled by Google, Yahoo, Microsoft, and all the other search engines, they may be doing a bit more than web indexing.

Now the scary part. First, the web is not the real world. I would be very interested in what an AI actually learns from the web. Second, there are two kinds of learning used in AI. The first is directed learning, as used in NELL to help correct mistakes. This method is how children learn. In observing children I often see a “mimic, mistake, learn” cycle that is directed by an adult. Any time you have directed learning there is a bias. A bias is not necessarily bad. For instance, I may want to teach my daughter that eating cupcakes should be avoided at all costs, or eaten in moderation, or consumed all the time, depending upon my nutritional beliefs. (Okay, bad example, I like cupcakes.) Thus, directed AIs will have internal belief systems — just like our children. In the extreme, think about two separate AIs created by the Democratic and Republican parties in the US. The question then becomes “Which AI are you going to believe?” and issues such as trust come into play.

Some think a solution is to use undirected learning. In this case there is no intervention. Such systems have a difficult time with certain nuances of language (“Men’s evil manners live in brass; their virtues we write in water,” Shakespeare) and can create some bizarre artificial realities. Undirected learning may also not turn out the way we had planned. We all know the Skynet scenario, which is really another take on one of my favorite AI movies, Colossus: The Forbin Project. Of course, they did not have James T. Kirk, who has a talent for arguing computers to death.

As we create AIs we will have to wrestle with ideas like truth, reality, trust, verifiability, and ethics. Such ideas are certainly not new, and epistemology and philosophy may become less academic and more practical as cluster AIs get stronger and more powerful. As the classic sci-fi writer Isaac Asimov describes in I, Robot, obvious and well-intentioned rules may have unintended consequences.

The ultimate goal of AI is what is known as “Strong AI,” where the intelligence of a machine matches or exceeds human intelligence. (Based on some people I know, I would say that has happened already, but I digress.) The extreme of Strong AI is, of course, part of Kurzweil’s Singularity. Though somewhat controversial, Kurzweil looks to the eventual intersection and acceleration of genetics, nanotechnology, and robotics (including artificial intelligence).

Even with the largest clusters trying to grok our reality, we may be decades away from real progress, if it comes at all. There are those who argue Strong AI is not possible and that in the end all we will really end up with are more annoying, but smarter, Furbys. One thing I have learned: I’m not smart enough to figure it out; maybe your next cluster will.
