An Introduction to Parallel Programming

Chapter 4

Shared-Memory Programming with Pthreads

Recall that from a programmer’s point of view a shared-memory system is one in which all the cores can access all the memory locations (see Figure 4.1). Thus, an obvious approach to the problem of coordinating the work of the cores is to specify that certain memory locations are “shared.” This is a very natural approach to parallel programming. Indeed, we might well wonder why all parallel programs don’t use this shared-memory approach. However, we’ll see in this chapter that there are problems in programming shared-memory systems, problems that are often different from the problems encountered in distributed-memory programming.


For example, in Chapter 2 we saw that if different cores attempt to update a single shared-memory location, then the contents of the shared location can be unpredictable. The code that updates the shared location is an example of a critical section. We’ll see some other examples of critical sections, and we’ll learn several methods for controlling access to a critical section; a sketch of one such method appears below.
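As a concrete illustration (a minimal sketch, not an example from the text; the names counter and increment are ours), the following Pthreads program has two threads each increment a shared variable a million times. The increment is a critical section: without the mutex, the load-add-store sequences of the two threads can interleave and updates are lost.

#include <pthread.h>
#include <stdio.h>

int counter = 0;   /* shared: updated by both threads */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void* increment(void* arg) {
   for (int i = 0; i < 1000000; i++) {
      pthread_mutex_lock(&mutex);     /* enter the critical section */
      counter++;                      /* load, add, store: not atomic */
      pthread_mutex_unlock(&mutex);   /* leave the critical section */
   }
   return NULL;
}

int main(void) {
   pthread_t t0, t1;
   pthread_create(&t0, NULL, increment, NULL);
   pthread_create(&t1, NULL, increment, NULL);
   pthread_join(t0, NULL);
   pthread_join(t1, NULL);
   printf("counter = %d\n", counter);  /* always 2000000 with the mutex */
   return 0;
}

Compiled with gcc -pthread, this always prints 2000000; with the lock and unlock calls removed, the final value varies from run to run and is usually smaller.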


We’ll also learn about other issues and techniques in shared-memory programming. In shared-memory programming, an instance of a program running on a processor is usually called a thread (unlike MPI, where it’s called a process). We’ll learn how to synchronize threads so that each thread will wait to execute a block of statements until another thread has completed some work. We’ll learn how to put a thread “to sleep” until a condition has occurred (a sketch follows this paragraph). We’ll see that there are some circumstances in which it may at first seem that a critical section must be quite large. However, we’ll also see that there are tools that sometimes allow us to “fine-tune” access to these large blocks of code, so that more of the program can truly be executed in parallel. We’ll see that the use of cache memories can actually cause a shared-memory program to run more slowly. Finally, we’ll see that functions that “maintain state” between successive calls can cause inconsistent or even incorrect results.
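To make “put a thread to sleep until a condition has occurred” concrete, here is a minimal condition-variable sketch (the flag ready and the thread functions waiter and worker are illustrative names, not from the text):

#include <pthread.h>
#include <stdio.h>

int ready = 0;   /* the condition the sleeping thread waits for */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;

void* waiter(void* arg) {
   pthread_mutex_lock(&mutex);
   while (!ready)                        /* loop guards against spurious wakeups */
      pthread_cond_wait(&cond, &mutex);  /* releases the mutex and sleeps */
   printf("condition occurred; waiter proceeds\n");
   pthread_mutex_unlock(&mutex);
   return NULL;
}

void* worker(void* arg) {
   /* ... complete some work, then wake the sleeping thread ... */
   pthread_mutex_lock(&mutex);
   ready = 1;
   pthread_cond_signal(&cond);
   pthread_mutex_unlock(&mutex);
   return NULL;
}

int main(void) {
   pthread_t w, s;
   pthread_create(&s, NULL, waiter, NULL);
   pthread_create(&w, NULL, worker, NULL);
   pthread_join(s, NULL);
   pthread_join(w, NULL);
   return 0;
}

pthread_cond_wait atomically releases the mutex and blocks, so the waiting thread consumes no CPU while asleep; pthread_cond_signal wakes it once the worker has set the flag.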


In this chapter we’ll be using POSIX® threads for most of our shared-memory functions. In the next chapter we’ll look at an alternative approach to shared-memory programming called OpenMP.
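As a brief preview of the Pthreads interface (a minimal sketch using the standard pthread.h calls, not the chapter’s own opening example):

#include <pthread.h>
#include <stdio.h>

void* hello(void* arg) {
   long rank = (long) arg;   /* thread rank passed by value */
   printf("Hello from thread %ld\n", rank);
   return NULL;
}

int main(void) {
   pthread_t thread;
   pthread_create(&thread, NULL, hello, (void*) 1L);  /* start the thread */
   printf("Hello from the main thread\n");
   pthread_join(thread, NULL);   /* wait for it to terminate */
   return 0;
}

pthread_create starts a function running in a new thread, and pthread_join makes the caller wait until that thread finishes; these two calls appear in nearly every Pthreads program.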

