Happy 2019

This week we jump ten years into the future to see how it all plays out.

Happy New Year. This year, 2019, we celebrate ten years since that pivotal year in computer programming. It was the year that computing changed forever. Although the changes had been suggested for many years by academic circles, the ideas never caught on. They seemed so unnecessary and too academic. The change did come, however, and 2009 was the year when the shift started.

The idea was simple, yet no one who relied on computers for their livelihood ever asked the question. The biggest lesson came from the financial sector. The “quants,” as they are called, developed models to evaluate risk. The models were wrong. Worse still, they were really wishful thinking — answers to ill-formed problems. Furthermore, the interplay between these trading systems was not really understood by anyone. No one could predict what would happen. Other areas were affected as well. Reliance on the numbers generated by computers was increasing every day. In the HPC sector, more and more people were able to use powerful computational tools right at their desktops. The problem was that programming these multi-core parallel systems was difficult, and although answers were produced, how the results were obtained was not very well understood.

Then the regulators and lawyers got involved. If you were going to base billion-dollar trades on computer analysis, then someone must be able to answer a simple question: Why? Or more specifically, why is this answer to be trusted?

This overly obvious idea seemed to run counter to how computing was done back in the day: the program is correct because I told it what to do; therefore, the answer is correct. Of course, more than a few times absolute assurances of correctness were not as absolute or correct as we would have liked. In addition, shining some light on how an answer was generated often provided a bit more trust in the whole process.

At first there were many protests. How can we possibly verify code? It is computationally expensive and requires one to express a problem in a higher-level language. In addition, certain problems are impossible to verify due to the whole halting problem thing. While this is true, there had been some success with first-order logic and tools like Prover9 or Vampire.
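Just to give a flavor of what machine-checked reasoning looks like, here is a minimal sketch in Python using the Z3 solver’s bindings (the z3-solver package) rather than Prover9 or Vampire; the claim being proved is a toy arithmetic fact chosen purely for illustration.

```python
# A toy example of asking a prover "why should this be trusted?"
# Uses the z3-solver package; the claim itself is illustrative only.
from z3 import Ints, And, Implies, prove

x, y = Ints("x y")

# Claim: if x and y are both positive integers, their sum is positive.
claim = Implies(And(x > 0, y > 0), x + y > 0)

# prove() negates the claim and checks that the negation is unsatisfiable;
# it prints "proved" when the solver discharges the goal.
prove(claim)
```

The point is not the arithmetic, but that the tool produces a yes/no answer backed by a proof search rather than by the programmer’s say-so.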

Another factor that played out was the overabundance of cores. A standard desktop now had at least 32 hybrid cores, which was more than adequate power to help verify codes. If you added the GP-GPU processors in the hybrid cores, you had at least 1,000 compute cores to throw at a problem. As I recall, a big milestone was reached in 2012 when the Spreadsheet Verification Project was announced. A user could actually ask a spreadsheet how it arrived at an answer. Verification occurred in almost real time as the user created the spreadsheet. Because the specification was so high level, the project showed great utility as a way to program parallel computers — either multi-core or clusters. By the way, one interesting highlight was the student who tried to build a verified spreadsheet using a processor from 2008. As I recall, he found an old quad-core (no GP-GPU), and it took him three weeks to complete a simple spreadsheet. How did they ever get anything done back then?

There was some concern, however. The biggest problem was “verifying the verifier”: how could one be sure that the program doing the verification was correct? This concern was valid; however, the argument was made that creating several independent, high-quality verifiers would be better than continuing to build codes the old-fashioned way. That is, investing quality time in the verification tools was worth the price of creating better codes faster.

Of course we have a long way to go. Not all codes are written this way — some cannot be. There was also another effort, the Common Sense Code project, started five years ago. While it did not use formal methods, it did try to use some heuristics to check for code correctness. These methods have helped as programming tools as well. For instance, one Common Sense Code project requires units to be supplied with all measured numbers. It automatically tracks the units to make sure the result makes sense. It also watches for things like round-off error and reports answers to the correct number of significant digits.
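The Common Sense Code project is imaginary, of course, but the units-tracking idea behind it is easy to sketch. The small Python class below is a hypothetical illustration written for this column; the names and behavior are invented here and are not taken from any real tool.

```python
# A minimal sketch of units tracking: every number carries its units,
# and operations either combine them or complain when they do not make sense.
class Measured:
    """A number tagged with units, e.g. {"m": 1, "s": -1} for meters per second."""

    def __init__(self, value, units):
        self.value = value
        self.units = units

    def __mul__(self, other):
        combined = dict(self.units)
        for unit, power in other.units.items():
            combined[unit] = combined.get(unit, 0) + power
        # drop units whose powers cancel out
        combined = {u: p for u, p in combined.items() if p}
        return Measured(self.value * other.value, combined)

    def __truediv__(self, other):
        inverted = Measured(1.0 / other.value,
                            {u: -p for u, p in other.units.items()})
        return self * inverted

    def __add__(self, other):
        if self.units != other.units:  # the "does this make sense?" check
            raise ValueError(f"cannot add {self.units} to {other.units}")
        return Measured(self.value + other.value, self.units)

    def __repr__(self):
        return f"{self.value} {self.units}"


distance = Measured(100.0, {"m": 1})
time = Measured(9.58, {"s": 1})
speed = distance / time   # fine: units become {"m": 1, "s": -1}
# distance + time         # raises ValueError: the units do not match
```

Significant-digit tracking is left out of the sketch, but it would follow the same pattern: carry the extra bookkeeping along with the number and check it at every operation.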

Needless to say, these types of approaches were not received very well by many mainstream coders. The end users, however, loved these tools. They allowed users to get closer to their problems and further away from the underlying computer architecture. The underlying idea was to let the programmer express more about what they wanted to do and less about how they wanted to do it.

The only other historical side note was the whereabouts of a certain columnist for Linux Magazine. This guy kept harping on these issues until one day he just quit and took up decoupage and vacation scrapbooking. No one is sure what ever happened to him. At least no one asked why.

Jumping back to 2009: if you have not already, take a look at the current HPC Smackdown, where we are addressing the “Are multi-cores really bad for HPC?” question. See what our experts have to say and maybe even add your own pile driver. We’ll have a new issue next week.
