Single Task Split Over Multiple Threads
Splitting a single task over multiple threads is often what people think of as parallelization. The typical scenario is distributing a loop's iterations among multiple threads so that each thread gets to compute a discrete range of the iterations.
This scenario is represented in Figure 3.18 as a system running three threads, with each thread handling a separate chunk of the work.
In this instance, a single unit of work is being divided between the threads, so the time taken for the unit of work to complete should diminish in proportion to the number of threads working on it. This is a reduction in completion time and would also represent an increase in throughput. In contrast, the previous examples in this section have represented increases in the amount of work completed (the throughput), but not a reduction in the completion time for each unit of work.
This pattern can also be considered a fork-join pattern, where the fork is the division of work between the threads, and the join is the point at which all the threads synchronize, having completed their individual assignments.
Another variation on this theme is the divide-and-conquer approach, where a problem is recursively subdivided, with the resulting subproblems distributed among multiple threads.