
Chapter: Multicore Application Programming For Windows, Linux, and Oracle Solaris : Using Automatic Parallelization and OpenMP

Using Parallel Sections to Perform Independent Work


OpenMP parallel sections provide another way to divide code into multiple independent units of work that can be assigned to different threads. Parallel sections allow the developer to assign different sections of code to different threads. Consider a situation where, as part of its initialization, an application needs to set up two linked lists. Listing 7.36 shows an example.

 

Listing 7.36   Using Parallel Sections to Perform Independent Work in Parallel

#include <stdlib.h>

typedef struct s
{
    struct s* next;
} S;

void setuplist( S *current )
{
    for ( int i=0; i<10000; i++ )
    {
        current->next = (S*)malloc( sizeof(S) );
        current = current->next;
    }
    current->next = NULL;
}

int main()
{
    S var1, var2;
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            setuplist( &var1 );       // Set up first linked list
        }
        #pragma omp section
        {
            setuplist( &var2 );       // Set up second linked list
        }
    }
}

 

 

 

The parallel region is introduced using the #pragma omp parallel directive. In this example, it is combined with the sections directive to produce a single #pragma omp parallel sections directive. This identifies the region of code as containing one or more sections of code that can be executed in parallel. Each individual section is identified using the directive #pragma omp section. It is important to notice the open and close braces, which delimit both the code included in the whole parallel sections region and the code belonging to each individual section. In the absence of the braces, a directive would apply only to the single statement that follows it.
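The combined directive is shorthand for a parallel region that contains a sections work-sharing construct. As a minimal sketch of the equivalent split form (reusing setuplist, var1, and var2 from Listing 7.36; the brace layout shown is just one possibility), the body of main() could instead be written as:

#pragma omp parallel                 // Create a team of threads
{
    #pragma omp sections             // Divide the enclosed sections among the team
    {
        #pragma omp section
        { setuplist( &var1 ); }      // Executed by one thread

        #pragma omp section
        { setuplist( &var2 ); }      // Executed by another (possibly the same) thread
    }
}

Whichever form is used, OpenMP support typically has to be enabled explicitly when compiling, for example with -fopenmp for GCC, -xopenmp for the Oracle Solaris Studio compiler, or /openmp for the Microsoft compiler.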

 

All the threads wait at the end of the parallel sections region until all of the work has completed; only then is any of the subsequent code executed.
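Because of this implicit barrier, code placed immediately after the region can rely on the results produced in every section. A minimal sketch, reusing the variables from Listing 7.36 (the length count at the end is only an illustrative use of the data):

S var1, var2;

#pragma omp parallel sections
{
    #pragma omp section
    { setuplist( &var1 ); }          // Built by one thread

    #pragma omp section
    { setuplist( &var2 ); }          // Built by another thread
}

/* Implicit barrier: both calls to setuplist() have completed here,
   so it is safe to walk either list. */
int length = 0;
for ( S *p = var1.next; p != NULL; p = p->next ) { length++; }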

 

Although parallel sections increase the range of applications that can be parallelized using OpenMP, they have the constraint that the parallelism is statically defined in the source code. This static definition limits the degree of scaling that can be expected from the application. Parallel sections are really effective only in situations where there is a limited, static opportunity for parallelism. In most other cases, parallel tasks, which we will discuss later, may be a better solution.
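For comparison, the same two-list setup could be expressed with OpenMP tasks, which are covered later. The following is only a sketch of the idea, not the text's worked example: one thread creates the tasks, and any thread in the team may execute them.

#pragma omp parallel                 // Create a team of threads
{
    #pragma omp single               // One thread creates the tasks...
    {
        #pragma omp task             // ...each task may run on any thread in the team
        setuplist( &var1 );

        #pragma omp task
        setuplist( &var2 );
    }                                // Tasks are guaranteed complete at the next barrier
}                                    // Implicit barrier at the end of the parallel region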

