Ensuring That Code in a Parallel Region Is Executed in Order
In some cases, it may be necessary to ensure that a section of code is executed in the same order as it would be by the serial code. Unfortunately, such an ordering is unlikely to let the code obtain the full benefit of using multiple threads, but it should still allow some gains from parallelization.
OpenMP supports the ordered directive, which ensures that the order of parallel execution is the same as the serial ordering. The directive is applied to the block of code within the parallel region that must execute in serial order, and the loop also needs to be identified as an ordered loop using the ordered clause on the parallel for directive.
Listing 7.58 shows how the ordered directive can be
used to ensure that the loop iterations are printed in the correct order.
Listing 7.58 Using the Ordered Directive to Ensure Code Executes in the Serial Order
#include <stdio.h>
#include <omp.h>

int main()
{
  #pragma omp parallel for ordered
  for ( int i=0; i<100; i++ )
  {
    #pragma omp ordered
    {
      printf( "Iteration %i, thread ID %i\n", i, omp_get_thread_num() );
    }
  }
}
The ordered directive is most useful when applied to loops that do not use static
scheduling. With the default static scheduling used in the example, the first
thread will execute the first portion of the iterations, the second thread the
second portion, and so on. Since the ordered region needs to be executed in the serial order, the second thread ends
up waiting at the ordered code
block until the first thread has completed all of its assigned work. This means
that the work is serialized, but each serial chunk of work has been performed
by a different thread.
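To make this block-wise assignment easier to observe, the schedule can be stated explicitly. The following sketch is not one of the book's listings; it assumes two threads and specifies a chunk size of 25 on a static schedule, so each thread is handed contiguous blocks of 25 iterations while the ordered region still forces the output into serial order.

#include <stdio.h>
#include <omp.h>

int main()
{
  /* Explicit static schedule: with two threads, chunks of 25 iterations are
     handed out round-robin (thread 0 gets 0-24 and 50-74, thread 1 gets
     25-49 and 75-99). The ordered region still prints in serial order, so
     thread 1 must wait until thread 0 has printed iteration 24 before it
     can print iteration 25. */
  #pragma omp parallel for ordered schedule( static, 25 )
  for ( int i=0; i<100; i++ )
  {
    #pragma omp ordered
    {
      printf( "Iteration %i, thread ID %i\n", i, omp_get_thread_num() );
    }
  }
}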
The ordered directive is a useful way of exploring the impact of the scheduling on the order in which iterations are assigned to threads. Listing 7.59 shows the code modified to use dynamic scheduling.
Listing 7.59 Using the Ordered Directive to Explore the Scheduling Clause
#include <stdio.h>
#include <omp.h>

int main()
{
  #pragma omp parallel for ordered schedule( dynamic )
  for ( int i=0; i<100; i++ )
  {
    #pragma omp ordered
    {
      printf( "Iteration %i, thread ID %i\n", i, omp_get_thread_num() );
    }
  }
}
Listing 7.60 shows the effect of this change in scheduling. Dynamic
scheduling causes the two threads to work with the default chunk size of a
single iteration, so the two threads alternate performing iterations.
Listing 7.60 Exploring the Impact of Dynamic Scheduling
$ cc -O -xopenmp ordered.c
$ export OMP_NUM_THREADS=2
$ ./a.out
Iteration 0 Thread 0
Iteration 1 Thread 1
Iteration 2 Thread 0
Iteration 3 Thread 1
...
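The alternation seen in Listing 7.60 follows from the default chunk size of one iteration. A chunk size can also be passed to the schedule clause; the sketch below, which is an illustrative variation rather than one of the book's listings, uses schedule( dynamic, 4 ), so each thread requests four consecutive iterations at a time and the printed thread ID typically changes only every four iterations. Because dynamic assignment is decided at run time, the exact pattern may vary between runs.

#include <stdio.h>
#include <omp.h>

int main()
{
  /* Dynamic scheduling with an explicit chunk size of 4: each idle thread
     requests the next four consecutive iterations, so runs of four
     iterations are usually printed by the same thread. The assignment of
     chunks to threads is decided at run time and is not deterministic. */
  #pragma omp parallel for ordered schedule( dynamic, 4 )
  for ( int i=0; i<100; i++ )
  {
    #pragma omp ordered
    {
      printf( "Iteration %i, thread ID %i\n", i, omp_get_thread_num() );
    }
  }
}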