Other Parallelization Technologies
Previous chapters have dealt with some of the mainstream approaches to developing parallel applications, but there are many alternative ways of producing applications that take advantage of multicore processors. This chapter introduces a number of these alternatives, ranging from the use of GPU hardware through OpenCL and CUDA to Intel's Threading Building Blocks C++ library.
This chapter also covers some cluster technologies, such as MPI. Although running a cluster of machines is outside the scope of this text, it is worth realizing that a single machine can now offer as many processors as a cluster might have provided a few years ago. Consequently, even though most users may never work with an actual cluster, some technologies designed for clusters are now also appropriate for single systems.
By the end of the chapter, you should have a good appreciation of these other approaches to parallelization. You will also understand the strengths and weaknesses of the various approaches and know how to write code that exploits each of them.