
Chapter: Multicore Application Programming For Windows, Linux, and Oracle Solaris : Using Automatic Parallelization and OpenMP

Using OpenMP to Parallelize Loops

OpenMP places some restrictions on the types of loops that can be parallelized. The runtime library needs to be able to determine the start and end points of the work assigned to each thread. Consequently, the following constraints apply (a minimal conforming loop is sketched after the list):

 

-  The loop has to be a for loop of this form:

 

for (init expression; test expression; increment expression)

 

-  The loop variable needs to be of one of the following types: a signed or unsigned integer, a C pointer, or a C++ random-access iterator.

 

-  The loop variable needs to be initialized to one end of the range.

 

-  The variable needs to be incremented (or decremented) by a loop-invariant increment.

 

-  The test expression needs to be one of >, >=, <, or <=. The comparison needs to be against a loop-invariant value.
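The following is a minimal sketch of a loop in this canonical form; the array names and problem size are illustrative and not taken from the text. The loop variable is a signed integer initialized to one end of the range, incremented by a loop-invariant amount, and tested against a loop-invariant bound, so the iteration range can be divided among the threads.

#include <stdio.h>

#define N 10000                  /* illustrative problem size */

double a[N], b[N], c[N];         /* illustrative arrays */

int main(void)
{
    int i;

    /* Canonical form: i is a signed integer, starts at one end of the
       range (0), is incremented by a loop-invariant amount (1), and is
       compared against a loop-invariant value (N) using <. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
    {
        c[i] = a[i] + b[i];
    }

    printf("c[0] = %f\n", c[0]);
    return 0;
}

The code needs to be compiled with OpenMP support enabled, for example with -xopenmp for the Oracle Solaris Studio compiler or -fopenmp for gcc.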

 

Under these conditions, the runtime can take the loop and partition the iteration range among the threads completing the work. Loops that do not adhere to these requirements need to be restructured before they can be parallelized using an OpenMP parallel for construct.
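As an example of such restructuring, consider a pointer-chasing loop over a linked list. Its end point cannot be determined up front, so it cannot be handled by a parallel for construct directly. One common workaround, sketched below under the assumption that the number of nodes is already known, is to gather the node pointers into an array in a sequential pass and then parallelize a counted loop over that array; the node type and function names here are illustrative.

#include <stdlib.h>

typedef struct node              /* illustrative list node */
{
    double value;
    struct node *next;
} node_t;

void process_list(node_t *head, int count)
{
    node_t **nodes = malloc(count * sizeof(node_t *));
    int i = 0;

    /* This pointer-chasing loop does not fit the canonical form, so it
       runs sequentially and only records the node addresses. */
    for (node_t *p = head; p != NULL; p = p->next)
    {
        nodes[i++] = p;
    }

    /* The counted loop over the array does fit the canonical form and
       can be divided among the threads. */
    #pragma omp parallel for
    for (i = 0; i < count; i++)
    {
        nodes[i]->value *= 2.0;
    }

    free(nodes);
}

The sequential gathering pass costs one extra traversal of the list, so this restructuring pays off only when the per-node work is large enough to amortize it.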

