Scientific Discovery through Advanced Computing

The fastest computers in the United States have their work cut out for them.

This column often focuses on the nitty-gritty details of
building and operating Linux clusters, or concentrates on parallel
programming languages and techniques. This month, though,
let’s take a step back to get a broader view of the
computational science applications being designed and run on the
world’s largest, fastest, and most-powerful supercomputers.
The recent announcement of new computational science projects from
the U.S. Department of Energy offers a flavor of the kinds of “grand challenge” research of national and international importance
being performed on these great machines.

Bling!

On September 7th, the U.S. Department of Energy (DOE) announced
$60 million in new research projects in its Scientific Discovery
through Advanced Computing (SciDAC) program. Created to bring
together the nation’s top researchers to develop new
computational methods, the SciDAC Program is aimed at tackling some
of the most challenging scientific problems of our time, including
accelerating the design of new materials, developing future energy
sources, studying global climate change, improving environmental
cleanup methods, and understanding and modeling physics from the
quantum to the astronomical scale.

The SciDAC program was launched in 2001 with the goal of
developing scientific applications to effectively utilize the
terascale supercomputers (machines capable of performing trillions
of calculations per second) becoming available at that time. Widely
hailed as a success, SciDAC-1 projects helped scientists make
important discoveries using high performance computing (HPC) in
many scientific areas.

* Astrophysicists for the first time created fully resolved simulations of the turbulent nuclear combustion in Type Ia supernovae, improving our understanding of the nature of the universe.

* Climate models were developed and improved, allowing climate change scientists in the U.S. to make the largest contribution of model results to the world’s leading body on climate change science, the Intergovernmental Panel on Climate Change (IPCC).

* Scientists created the first laboratory-scale, 3-dimensional flame simulation, providing a better understanding of the process of combustion, which supplies 80 percent of the energy used in the United States.

* Fusion energy researchers worked together with mathematicians and computer scientists to simulate heat loss due to plasma turbulence in larger fusion reactors planned for the future, including the ITER machine.

* Elementary particle researchers simulated improvements for existing particle accelerators and cost-effective designs for future accelerators to help understand the most basic building blocks of matter.

* Physicists studying the Standard Model of particle interactions were able to model the full spectrum of hadron particles at a new level of accuracy, leading to an improved understanding of the fundamental laws of physics.

Most of these original SciDAC projects were performed by large, multi-disciplinary teams of domain scientists, mathematicians, and computer scientists from national laboratories and universities across the country. In general, the SciDAC
Program supports the creation of new generations of scientific
simulation codes, mathematical and computing systems software to
enable effective and efficient use of terascale computers, and a
distributed software infrastructure to enable scientists to manage,
disseminate, and analyze large datasets from simulations and
experiments.

Following on the heels of the SciDAC-1 successes, the thirty new
SciDAC-2 projects are pushing scientific discovery to the
petascale. Here, petascale computing refers both to petaflops (a million-billion, or 10^15, calculations per second) and petabytes (a million-billion bytes of data). This level of computational
power will allow scientists to investigate problems at an
unprecedented level of detail and perform computational experiments
unimaginable only a few years ago. However, effectively using
petascale computing systems for scientific research poses a variety
of challenges, which the SciDAC-2 teams will have to tackle.
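
To put those petascale numbers in perspective, the short C sketch below does the usual back-of-the-envelope estimate of a cluster's theoretical peak performance (nodes times cores per node times clock rate times floating-point operations per cycle) and converts the result to teraflops and petaflops. All of the machine parameters are invented for illustration and do not describe any actual SciDAC or DOE system.

/* peak.c -- back-of-the-envelope peak-performance estimate.
 * All machine parameters below are hypothetical illustrations,
 * not figures for any actual SciDAC or DOE system.
 */
#include <stdio.h>

int main(void)
{
    double nodes           = 12000.0; /* hypothetical node count          */
    double cores_per_node  = 2.0;     /* hypothetical cores per node      */
    double clock_hz        = 2.6e9;   /* hypothetical 2.6 GHz clock       */
    double flops_per_cycle = 4.0;     /* hypothetical FP operations/cycle */

    /* Theoretical peak = nodes x cores x clock x flops-per-cycle */
    double peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle;

    printf("Theoretical peak: %.1f teraflops (%.3f petaflops)\n",
           peak_flops / 1.0e12, peak_flops / 1.0e15);

    /* How long to perform one quadrillion (10^15) operations at peak? */
    printf("Time for 10^15 operations at peak: %.2f seconds\n",
           1.0e15 / peak_flops);

    return 0;
}

Compiled with gcc and run, this hypothetical 12,000-node machine tops out around 250 teraflops (a quarter of a petaflop) at theoretical peak; sustained performance on real scientific applications is, of course, only a fraction of such peak figures.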

In announcing the awards, Raymond Orbach, DOE Under Secretary
for Science, stated, “Advanced computing is a critical element of
President Bush’s American Competitiveness Initiative and
these projects represent an important path to scientific
discovery.” He went on to say, “We anticipate that they will
develop and improve software for simulating scientific problems and
help reduce the time-to-market for new technologies.”

SciDAC Centers and Institutes

SciDAC-2 has established nine Centers for Enabling Technologies
to focus on specific challenges in petascale computing. These
Centers are charged with developing performance-portable
algorithms, methods, and libraries; creating program development
environments and tools; developing operating system and runtime
software and tools; building and deploying visualization and data
management systems; and delivering distributed data management and
computing tools. Sharing approximately $24.3 million in awards
annually are one project supporting visualization, three in the
area of applied mathematics, and five supporting various computer
science aspects of petascale computing.

To increase its presence in the larger academic community and
complement the work of the Centers, SciDAC-2 has created four
university-led Institutes with a total of thirteen university
partners. Sharing $8.2 million annually, these select projects
focus on petascale data storage, performance engineering research,
ultrascale visualization, and combinatorial scientific computing
and petascale simulations. Through hands-on workshops and
tutorials, the Institutes will help researchers prepare
applications to take advantage of petascale computing systems and
help foster the next generation of computational scientists.

Science Applications

Seventeen science application projects will receive
approximately $26.1 million in annual awards to study problems
ranging from quarks to genomes to global climate to astrophysics.
Added for SciDAC-2 are two new science application areas:
Computational Biology and Reactive Groundwater Transport. These new
efforts join projects in Physics, Climate Change, Fusion Energy,
and Materials and Chemistry. In addition, the National Science
Foundation (NSF) and DOE’s National Nuclear Security
Administration (NNSA) are partnering for the first time with SciDAC
to provide partial support toward many of these efforts.

The objectives of the Computational Biology area are to develop
new methods for modeling complex biological systems, including
molecular complexes, metabolic and signaling pathways, individual
cells, and interacting organisms and ecosystems. These biological
systems act on time scales ranging from microseconds to thousands
of years, and the simulations must couple to huge biological and
genomic databases. Two projects were selected in this area: one for
developing algorithms and engineering for gene function annotation,
led by Steven Brenner at Lawrence Berkeley National Laboratory
(LBNL), and another for developing a model of metabolism linked to
hydrogen (H2)
production in green algae, led by Michael Seibert at the National
Renewable Energy Laboratory (NREL).

Two projects were initiated in the new Reactive Groundwater Transport
area. These projects will simulate subsurface flow and transport of
hazardous materials or other contaminants carried by groundwater,
yielding benefits to the environmental cleanup efforts at DOE
facilities and to the monitoring of groundwater around existing and
future radionuclide waste storage and disposal sites. The first of
these projects, led by Peter Lichtner at Los Alamos National
Laboratory (LANL), will predict the movement of subsurface
contaminants using massively parallel, multi-scale, multi-phase,
and multi-component reactive flow simulation models. The second
project, led by Timothy Scheibe at Pacific Northwest National
Laboratory (PNNL), will attempt to integrate bio-geochemical models
across multiple time and space scales into subsurface
simulations.

In Physics, there are four new projects designed to study
fundamental forces and elementary particles to understand the
nature of matter, energy, space, and time. The Computational
Astrophysics Consortium, a project led by Stan Woosley at
University of California Santa Cruz and co-sponsored by NNSA, will
study the causes and effects of supernovae, gamma-ray bursts, and
nucleosynthesis. This project includes a Science Application
Partnership (SAP) for adaptive algorithms in computational
astrophysics headed by John Bell at LBNL. Robert Sugar at
University of California Santa Barbara will head a project to
provide a national computational infrastructure for lattice quantum
chromodynamics, the theory of quarks and gluons formulated on a
space-time lattice.

Simulations of turbulent flows hit with strong shockwaves will
be studied in a project co-sponsored by NNSA and headed up by
Sanjiva Lele at Stanford University. Miron Livny at the University
of Wisconsin leads a project to stimulate new discoveries by
providing particle physicists with effective and dependable access
to the Open Science Grid (OSG), a national distributed
computational facility that houses and manages petabyte-sized
datasets from particle accelerators and detectors at national and
international accelerator facilities.

In Climate Change, three new projects are designed to help
understand how Earth’s climate responds to physical,
chemical, and biological changes produced by global alterations of
the atmosphere, ocean, and land. These projects will all serve to
increase both the accuracy and throughput of computer model-based
predictions of future climate responses to increased
concentrations of greenhouse gases in the atmosphere.

The largest of these projects, led by John Drake at Oak Ridge
National Laboratory (ORNL), consists of a consortium of DOE
national laboratories, universities, and the National Center for
Atmospheric Research (NCAR). Moving beyond traditional, coupled
atmosphere-ocean general circulation models, this project will
create a first generation Earth system model that incorporates
physical, chemical, and biogeochemical processes into simulations
of the climate system. This project includes two SAPs: one on
aerosol dynamics headed by Robert McGraw at Brookhaven National
Laboratory (BNL), and one on performance (software) engineering
headed by Pat Worley at ORNL.

A second Climate Change project will design and test a global
cloud-resolving model. Led by David Randall at Colorado State
University, this project will build upon methods developed under
SciDAC-1 to improve atmospheric simulations, using a
non-hydrostatic dynamical core and parameterized cloud microphysics
at very high spatial resolutions. The third project will develop a
uniform set of software tools suitable for the evaluation of
high-end climate models. Led by Rao Kotamarthi at Argonne National
Laboratory (ANL), this project will provide tools to support the
assimilation of observational datasets into climate
simulations.

With a goal of developing a clean and renewable source of power,
the Fusion Energy projects will work to understand and predict the
behavior of plasmas in fusion reactors through integrated
simulations across multiple scales. The new project in this area,
led by J. R. Cary at Tech-X Corporation, will provide full-scale
reactor modeling for the U.S. fusion program and ITER. Two Fusion
Energy projects are continuing. The first, headed by Don Batchelor
at ORNL, studies the simulation of wave interactions with
magnetohydrodynamics. The second, led by C. S. Chang at New York
University, is developing a new, integrated, predictive plasma edge
simulation package applicable to next generation fusion
experiments.

Five new and four continuing projects fall under the Materials
and Chemistry application area. This research pursues the
understanding of reactions and interactions that determine material
properties through molecular modeling and simulation of extended
structures such as clusters or surfaces. A new project, headed by
George Fann at ORNL, will develop and implement advanced
mathematical methods and software for simulating the quantum
electronic structure of atoms, molecules, and nanoscale systems
using petascale computing. A project led by Mark Gordon at Ames
Laboratory will apply the Common Component Architecture (CCA) to
improve the efficiency and availability of computational chemistry
software.

The three remaining new projects are co-sponsored by NNSA. The
first of these, led by Giulia Galli at UC-Davis, will perform
quantum simulations of materials and nanostructures from first
principles. A project headed by Mark Jarrell at the University of
Cincinnati will develop massively parallel, multi-scale methods for
studying strongly correlated materials, such as magnets and
superconductors. The third project will develop a petascale
simulation framework for stress corrosion cracking. This project is
led by Priya Vashishta at the University of Southern
California.

The four continuing Materials and Chemistry projects consist
of:

1. Developing scalable methods for electronic excitation and optical responses of nanostructures, headed by Martin Head-Gordon at LBNL;

2. Modeling optical interactions and transport in tailored nanosystem architectures, led by Stephen Gray at ANL;

3. Integrated, multi-scale modeling of molecular computing devices, headed by Peter Cummings at ORNL; and

4. Predicting the electronic properties of 3D, million-atom semiconductor nanostructure architectures, led by Alex Zunger at NREL.

Figure One:
The Cray XT3
supercomputing system at Oak Ridge National Laboratory was recently
upgraded to 54 teraflops (54 trillion calculations per second),
placing it among the most powerful, open, scientific computing
systems in the world. (Photo courtesy of Oak Ridge National
Laboratory.)

class="story_image"> "http://www.linux-mag.com/images/2006-11/extreme/cray.jpg" class=
"story_image">

How is That for Extreme?

Sorry if your eyes glazed over. As you can see, these
are very large and complex scientific problems, so it’s easy
to understand why such simulations require high-end supercomputers.
Machines like the newly upgraded Cray XT3 at ORNL, pictured in
Figure One, now operating at 54 teraflops
(54 trillion calculations per second), will provide the
computational horsepower required to tackle these challenging
scientific problems. This machine and other high-end computational
resources will be discussed in future columns.

For more information about the SciDAC Program and SciDAC-2
Projects, visit the program’s Web site at http://www.scidac.org/.
