As we noted earlier, we’ll be focusing on homogeneous MIMD systems—systems in which all of the nodes have the same architecture—and our programs will be SPMD. Thus, we’ll write a single program that can use branching to have multiple different behaviors. We’ll assume the cores are identical but that they operate asynchronously. We’ll also assume that we always run at most one process or thread of our program on a single core, and we’ll often use static processes or threads. In other words, we’ll often start all of our processes or threads at more or less the same time, and when they’re done executing, we’ll terminate them at more or less the same time.
Some application programming interfaces (APIs) for parallel systems define new programming languages. Most, however, extend existing languages, either through a library of functions (for example, functions for passing messages) or through extensions to the compiler for the serial language. This latter approach will be the focus of this text: we'll be using parallel extensions to the C language.
When we want to be explicit about compiling and running programs, we'll use a Unix shell and the gcc compiler (or some extension of it, e.g., mpicc), and we'll start programs from the command line. For example, to show compilation and execution of the "hello, world" program from Kernighan and Ritchie, we might show something like this:
$ gcc -g -Wall -o hello hello.c
$ ./hello
The dollar sign ($) is the prompt from the shell. We'll usually use the following compiler options:
-g            Create information that allows us to use a debugger.
-Wall         Issue lots of warnings.
-o <outfile>  Put the executable in the file named outfile.
When we’re timing programs, we usually tell the compiler to optimize the code by using the -O2 option.
In most systems, the user's directories or folders are not, by default, in the user's execution path, so we'll usually start jobs by giving the path to the executable, prefixing its name with ./.
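Concretely, the shell searches only the directories listed in its PATH variable, which usually excludes the current working directory. A short session, assuming a POSIX shell, might look like this (the stand-in executable is created inline just so the example is self-contained):

```shell
# Create a stand-in "executable" in the current directory.
printf '#!/bin/sh\necho hello, world\n' > hello
chmod +x hello

./hello    # explicit path: this works
# Typing just "hello" would ordinarily fail with "command not found",
# because the working directory is not searched via PATH.
```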