Using OpenMP, Part 3

This is the third and final column in a series on shared memory parallelization using OpenMP. Often used to improve performance of scientific models on symmetric multi-processor (SMP) machines or SMP nodes in a Linux cluster, OpenMP consists of a portable set of compiler directives, library calls, and environment variables. It's supported by a wide range of FORTRAN and C/C++ compilers for Linux and commercial supercomputers.

OpenMP is based on the fork and join model of execution in which a team of threads is spawned (or forked) at the beginning of a concurrent section of code (called a parallel region) and subsequently killed (or joined) at the end of the parallel region. OpenMP is portable across platforms and is intended for use in programs that execute correctly either sequentially (that is, when compiled without OpenMP enabled) or in parallel (with OpenMP enabled).

An introduction to the concepts and syntax of OpenMP directives was presented in January’s column (available online at http://www.linux-mag.com/2004-01/extreme_01.html). February’s column (available online at http://www.linux-mag.com/2004-02/extreme_01.html) covered more directives and all of the library functions and environment variables. Both previous columns included example C code, demonstrating many of the features of OpenMP. This month’s column presents the remaining directives and OpenMP’s data environment clauses.

Reviewing Constructs

OpenMP directives take the form #pragma omp directive-name [clause[[,] clause]…] and sit just above the structured code blocks that they affect. A directive, along with all the clauses that modify it and the subsequent structured block of code, constitute what is called a construct. We’ve already seen how to use the parallel construct. It’s the fundamental construct that starts parallel execution.

The work-sharing constructs — for, sections, and single — distribute the execution of associated program statements among the thread team members that encounter them. Combined parallel work-sharing constructs are shortcuts for parallel regions containing only one work-sharing construct. The combined constructs are parallel for (used in January’s example program) and parallel sections.

The Last of the Directives

The sections and parallel sections directives are used to declare blocks of code that can be executed concurrently. While the for and parallel for directives spread loop iterations across thread team members, sections and parallel sections spread non-iterative blocks of code across threads in a team. Each section or structured block is executed once by one of the threads.

For example, some code may call a series of subroutines to compute physics processes on each surface of a cube. Since processes on each face can be computed independently and each has its own subroutine, the sections or parallel sections directives can be used to tell the compiler that computations for each section of code may completely overlap. Such a construct might look like this:


void do_physics()
{
#pragma omp parallel sections
  {
#pragma omp section
    top_physics();
#pragma omp section
    bottom_physics();
#pragma omp section
    left_physics();
#pragma omp section
    right_physics();
#pragma omp section
    front_physics();
#pragma omp section
    rear_physics();
  }
}

Here, we used the combined parallel sections directive instead of having separate parallel and sections directives. Within the structured block of the parallel sections construct, each statement that may be concurrently executed has its own section directive. As a result, the program is free to completely overlap the computation of all these subroutines by distributing them among threads in the team.

When the code snippet above is compiled (with sufficiently time-consuming subroutines), it should run about twice as fast using two threads (with OpenMP enabled) as when compiled and run without OpenMP. In the example below, the code is first compiled and run with OpenMP disabled. Then the code is compiled with OpenMP support (enabled by the -mp flag on the compile line when using the Portland Group compiler) and run with two threads on a dual-processor Pentium III.


[node01]$ pgcc -O -o sections sections.c
[node01]$ time ./sections
real 0m41.205s
user 0m41.201s
sys 0m0.002s
[node01]$ pgcc -mp -O -o sections sections.c
[node01]$ OMP_NUM_THREADS=2 time ./sections
41.19user 0.15system 0:20.70elapsed 199%CPU
(0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (134major+14minor)
pagefaults 0swaps

As you can see, the serial version ran in 41.2 seconds. The OpenMP parallel version (using two threads) still consumed 41.2 seconds of user time, but real elapsed time was only 20.7 seconds. Therefore, using only very simple compiler directives, we were able to use both processors on an SMP machine to cut wallclock time in half.

The single directive in a parallel region identifies a block of code to be executed by only one thread in the team. The thread that executes this code block need not be the master thread — the block is usually executed by the first thread that encounters it.

The following code snippet demonstrates this feature.


#pragma omp parallel private(tid)
{
  tid = omp_get_thread_num();
#pragma omp single
  printf("%d: Starting process_block1\n", tid);
  process_block1();
#pragma omp single nowait
  printf("%d: Starting process_block2\n", tid);
  process_block2();
#pragma omp single
  printf("%d: All done\n", tid);
}

The code contains a parallel region for which the variable tid is private to each thread. Within the parallel region, a single directive appears above each of the printf statements so that the messages are printed only once no matter how many threads are executing statements in the parallel region.

The thread id, obtained from the call to omp_get_thread_num() and stored in the private variable tid, is printed by whichever thread executes each printf statement. When compiled and run, you can see that thread one executed the first and third print statements, while thread zero (the master thread) executed the one in the middle.


[node01]$ pgcc -mp -O -o single single.c
[node01]$ OMP_NUM_THREADS=2 ./single
1: Starting process_block1
0: Starting process_block2
1: All done

There is an implied barrier at the end of a single construct. As a result, after one thread executes the print statement, all other threads must “catch up” to the barrier point before they all simultaneously execute the next statements. The nowait clause can be used to eliminate the implied barrier.

In the example code above, all threads begin executing process_block1() at essentially the same time because of the implied barrier at the end of the single construct above it.

However, threads may begin executing process_block2() at slightly different times because the nowait clause is specified as part of the single construct above process_block2().

The master directive is similar to the single directive, except that the adjoining code block is always executed by the master thread, and there is no implied barrier at the end of the construct.

The critical directive is used to identify a section of code within a parallel region that should be executed by only one thread at a time. This directive should be used with caution, because too many criticals can result in frequent synchronization, thus slowing down processing. While critical constructs could be used for updating counters or performing similar reductions within parallel loops on global shared variables, the reduction clause is often better suited to that task.

The critical directive is often useful for queuing applications in which calls are made to obtain new requests from a shared queue. A critical directive above a function call that returns a request identifier prevents two or more threads from requesting a new identifier at the same time, preventing a race condition.

For example, in the following code snippet, the critical directive sits above the call to get_next_request():


#pragma omp parallel shared(request_queue) private(request_id, request_status)
for (;;) {
#pragma omp critical (get_request)
  request_id = get_next_request(request_queue);
  printf("Processing request %d\n", request_id);
  request_status = process_request(request_id);
  update_request_status(request_id, request_status);
}

As a result, this function is called by only one thread at a time, ensuring that each receives a unique request identifier. Notice that the critical construct is contained within a parallel construct that identifies request_queue as a shared variable and request_id and request_status as variables private to each thread.

The barrier directive provides a means for synchronizing all threads in a team. When encountered in the program, each thread in the team waits for all other team members to reach the same, specified point before collectively starting execution of the subsequent statements in parallel. The barrier directive is often useful for ensuring that all threads have completed some phase of work prior to exchanging results as in the following code example.


#pragma omp parallel
{
work_phase1();
#pragma omp barrier
exchange_results();
work_phase2();
}

Here work_phase1() is executed simultaneously by all threads in the team. As each thread returns from the routine, it waits for all threads to complete work_phase1() prior to calling exchange_results() and executing work_phase2(). In general, barriers should be avoided except where necessary to preserve the integrity of the data environment. Spending valuable time synchronizing threads that could operate completely independently is not a good use of computer time.

The atomic directive ensures that a memory location is updated atomically instead of allowing multiple threads to write to the same location at once. Only certain mathematical expressions may be used in the atomic construct.

For example, the following piece of code contains a parallel for construct with an atomic directive within the loop to protect against simultaneous updates of an element of the ts array that is accessed through an index array.


#pragma omp parallel for shared(ts, index)
for (i = 0; i < SIZE; i++) {
#pragma omp atomic
  ts[index[i]] += compute1(i);
}

The advantage of using the atomic directive in this case is that multiple elements of ts can be simultaneously updated. If a critical directive had been used instead, all updates to ts would be serialized, resulting in poor performance.

The flush directive is used to synchronize shared objects in memory across a team of threads. A list of variables that must be synchronized can be provided with the flush directive. Alternatively, flush without a variable list synchronizes all shared objects (and probably incurs more overhead).

The ordered directive identifies a block of code that is executed in the same order in which the iterations would execute in a sequential run of the loop. An ordered directive must be within the extent of a for or parallel for construct, and that for or parallel for must also specify an ordered clause.

In the following example, the compute1() routine is called within a parallel for construct containing an ordered clause. The print statement in compute1() has an ordered directive above it so that the output is generated in the expected sequential order.


void compute1(int i)
{
  int tid;

  tid = omp_get_thread_num();
#pragma omp ordered
  printf("%d: compute1 called for iteration %d\n", tid, i);

  /* lots of work removed from here */
}

int main(int argc, char **argv)
{
  int i;

#pragma omp parallel for ordered schedule(dynamic)
  for (i = 0; i < 10; i++)
    compute1(i);
  exit(0);
}

The parallel for directive also carries a schedule(dynamic) clause. With dynamic scheduling, each iteration is handed out (in order) to the next available thread rather than being divided among the threads in advance.

In the output below, iteration 0 is assigned to the master thread (thread 0) and iteration 1 is assigned to thread 1. Since thread 1 completes its work first, the thread becomes available and is assigned iteration 2, the very next iteration.


[node01]$ OMP_NUM_THREADS=2 time ./ordered
0: compute1 called for iteration 0
1: compute1 called for iteration 1
1: compute1 called for iteration 2
0: compute1 called for iteration 3
0: compute1 called for iteration 4
1: compute1 called for iteration 5
0: compute1 called for iteration 6
1: compute1 called for iteration 7
0: compute1 called for iteration 8
1: compute1 called for iteration 9
48.83user 0.16system 0:24.66elapsed 198%CPU
(0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (144major+16minor)pagefaults 0swaps

Thread Data Environment

The data environment for OpenMP threads in a team is controlled by the threadprivate directive and a variety of data sharing clauses. We’ve already used the most common of these clauses — private and shared — in examples. Table One contains a list of all OpenMP clauses, including the data sharing attribute clauses, and the directives with which they may be used.




Table One: All OpenMP clauses and the directives with which they may be used

Clause         OpenMP Directives
copyin         parallel
copyprivate    single
default        parallel
firstprivate   parallel, for, sections, single
if             parallel
lastprivate    for, sections
nowait         for, sections, single
num_threads    parallel
ordered        for
private        parallel, for, sections, single
reduction      parallel, for, sections
schedule       for
shared         parallel


The threadprivate directive is used to make various data objects, specified in a list along with the directive, private to each thread. As usual, the list is contained within parentheses and separated by commas. This amounts to creating a copy of the variable for each thread in the team. Each copy is initialized once prior to the first reference of that copy.

As with all private objects, one thread may not reference another thread’s copy of a threadprivate object. Within serial and master regions of the program, the master thread’s copy of the object is used. threadprivate objects persist outside the parallel region in which they are copied only if the dynamic thread mechanism is disabled and the number of threads doesn’t change.

The threadprivate directive must precede all references to any of the variables or objects in its list.

In the following example, a counter variable called counter is declared then followed by a threadprivate directive at the same level (not within subroutines) and prior to being referenced. In main(), a parallel loop calls bump_counter() ten times, printing out its value in each iteration.


int counter = 0;
#pragma omp threadprivate(counter)

int bump_counter()
{
  counter++;
  return counter;
}

int main(int argc, char **argv)
{
  int i;

#pragma omp parallel for
  for (i = 0; i < 10; i++) {
    bump_counter();
    printf("%d: i=%d and my copy of counter = %d\n",
           omp_get_thread_num(), i, counter);
  }
  exit(0);
}

When run without OpenMP (or with only one thread), a single copy of counter is bumped ten times resulting in a final value of 10. As seen below, when run with two threads, each copy of counter is bumped five times. This loop executes so quickly that all the output from thread zero appears before output from thread one.


[node01]$ OMP_NUM_THREADS=2 ./tp
0: i=0 and my copy of counter = 1
0: i=1 and my copy of counter = 2
0: i=2 and my copy of counter = 3
0: i=3 and my copy of counter = 4
0: i=4 and my copy of counter = 5
1: i=5 and my copy of counter = 1
1: i=6 and my copy of counter = 2
1: i=7 and my copy of counter = 3
1: i=8 and my copy of counter = 4
1: i=9 and my copy of counter = 5

In addition to the threadprivate directive, a number of data sharing attribute clauses may be used with other directives to control whether data objects are shared or private, as well as how they are initialized before and saved after the associated code block. If an existing variable is not specified in a sharing attribute clause or threadprivate directive when a parallel or work-sharing construct is encountered, it is shared. Static variables and heap-allocated memory are also shared, although the pointer to heap memory may itself be either private or shared. Automatic variables declared within a parallel region are private.

Most clauses accept a comma-separated list of variables contained within parentheses. Variables can’t be specified in multiple clauses except for the firstprivate and lastprivate clauses. Not all clauses are valid for all directives. Table One provides a list of clauses and the directives with which they may be used. The combined parallel work-sharing constructs parallel for and parallel sections accept the same clauses as the for and sections constructs, respectively.

As we’ve already seen in previous examples, the private clause declares variables to be private for each thread in a team. When objects are declared private, new objects with automatic storage duration are allocated on each thread. These new private variables are used for the extent of the construct. The original objects have an indeterminate value upon entry to and exit from the construct.

The firstprivate clause has the same behavior as the private clause, except with regard to initialization of the private object. When used with a parallel construct, the firstprivate clause causes the specified variables to be initialized to the values of the original objects as they exist immediately prior to the parallel construct for the thread that encounters it. With a work-sharing construct, the initial value of new private objects is set to the value of the original object just prior to the point in time when the participating thread encountered the construct.

In a similar fashion, the lastprivate clause behaves just like private, except that the final values of the specified variables are saved to the original objects outside of the parallel or work-sharing constructs upon exit of the construct. Variables not assigned a value in the last iteration of a for or parallel for construct or by the last section of a sections or parallel sections construct have indeterminate values upon exit of the construct.

The shared clause makes specified objects shared among all threads in a team. It is usually not necessary to specify objects created outside a construct as shared since this is the default behavior. However, the default clause, which requires either (shared) or (none) as a parameter, may be used to change this behavior. Specifying default(none) requires that each variable be listed explicitly in a data-sharing attribute clause, unless it’s declared within the parallel construct.

The reduction clause performs a reduction on the scalar variables that appear in the variable list along with some operator. We used this clause in previous examples to sum up scalar variables across threads. Like the private clause, the reduction clause tells the compiler to create a private copy of the specified variables for each thread. Then at the end of the region for which the clause was specified, the original object is updated to reflect the combined result from all the threads based on the operator specified in the reduction clause.

The copyin clause provides a way to assign the same value to threadprivate variables for each thread in a team. The value of each variable in a copyin clause is copied from the master thread to the private copies on every other thread at the beginning of a parallel region.

Similarly, the copyprivate clause, which may only appear with the single directive, may be used to broadcast to all threads values of variables from the thread which executed the single construct. This updating of private variables on each thread occurs after the execution of the code within the single construct and before any threads have left the implied barrier at the end of the construct.

These data-sharing attribute clauses provide a powerful mechanism for manipulating the data environment for threads. Using the clauses, you can avoid writing your own shared memory data handling software. With a small number of fairly simple directives and powerful clauses, OpenMP can often be a very easy way to take advantage of shared memory systems for modeling and data processing. When combined with MPI for distributed memory parallelism, it can further improve performance and resource utilization on SMP clusters.

We didn’t discuss nesting of OpenMP directives, and some details of directive and clause restrictions have been glossed over. So when you are ready to add OpenMP to your own code, be sure to read the specification documents on the OpenMP web site at http://www.openmp.org.



Forrest Hoffman is a computer modeling and simulation researcher at Oak Ridge National Laboratory. He can be reached at
forrest@climate.ornl.gov.
