
Chapter: Multicore Application Programming For Windows, Linux, and Oracle Solaris : Hardware, Processes, and Threads

The Motivation for Multicore Processors


Microprocessors have been around for a long time. The x86 architecture has roots going back to the 8086, which was released in 1978. The SPARC architecture is more recent, with the first SPARC processor being available in 1987. Over much of that time, performance gains have come from increases in processor clock speed (the original 8086 processor ran at about 5MHz, and the latest is greater than 3GHz, about a 600× increase in frequency) and from architecture improvements (issuing multiple instructions at the same time, and so on). However, recent processors have increased the number of cores on the chip rather than emphasizing gains in the performance of a single thread running on the processor. The core of a processor is the part that executes the instructions in an application, so having multiple cores enables a single processor to simultaneously execute multiple applications.

 

The reason for the change to multicore processors is easy to understand. It has become increasingly hard to improve serial performance. It takes large amounts of area on the silicon to enable the processor to execute instructions faster, and doing so increases the amount of power consumed and heat generated. The performance gains obtained through this approach are sometimes impressive, but more often they are relatively modest gains of 10% to 20%. In contrast, rather than using this area of silicon to increase single-threaded performance, using it to add an additional core produces a processor that has the potential to do twice the amount of work; a processor that has four cores might achieve four times the work. So, the most effective way of improving overall performance is to increase the number of threads that the processor can support. Obviously, utilizing multiple cores becomes a software problem rather than a hardware problem, but as will be discussed in this book, this is a well-studied software problem.

 

The terminology around multicore processors can be rather confusing. Most people are familiar with the picture of a microprocessor as a black slab with many legs sticking out of it. A multiprocessor system is one where there are multiple microprocessors plugged into the system board. When each processor can run only a single thread, there is a relatively simple relationship between the number of processors, CPUs, chips, and cores in a system—they are all equal, so the terms could be used interchangeably. With multicore processors, this is no longer the case. In fact, it can be hard to find a consensus for the exact definition of each of these terms in the context of multicore processors.

 

This book will use the terms processor and chip to refer to that black slab with many legs. It's not unusual to also hear the word socket used for this. Notice that these are all countable entities—you can take the lid off the case of a computer and count the number of sockets or processors.

 

A single multicore processor will present multiple virtual CPUs to the user and operating system. Virtual CPUs are not physically countable—you cannot open the box of a computer, inspect the motherboard, and tell how many virtual CPUs it is capable of supporting. However, virtual CPUs are visible to the operating system as entities where work can be scheduled.
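
Because virtual CPUs are visible to the operating system, their count can be queried programmatically. The following short sketch is an illustration rather than code from the text; it assumes a Linux or Oracle Solaris system where sysconf() is available, with GetSystemInfo() as the Windows equivalent.

#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

int main(void)
{
#ifdef _WIN32
    SYSTEM_INFO info;
    GetSystemInfo(&info);                        /* Windows: processor count */
    printf("Virtual CPUs: %lu\n", (unsigned long)info.dwNumberOfProcessors);
#else
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* Linux/Solaris: CPUs currently online */
    printf("Virtual CPUs: %ld\n", ncpus);
#endif
    return 0;
}

On a system with a single quad-core processor where each core supports two hardware threads, this would typically report eight virtual CPUs.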

 

It is also hard to determine how many cores a system might contain. If you were to take apart the microprocessor and look at the silicon, it might be possible to identify the number of cores, particularly if the documentation indicated how many cores to expect! Identifying cores is not a reliable science. Similarly, you cannot look at a core and identify how many software threads the core is capable of supporting. Since a single core can support multiple threads, it is arguable whether the concept of a core is that important, since it corresponds to neither a physical countable entity nor a virtual entity to which the operating system allocates work. However, it is actually important for understanding the performance of a system, as will become clear in this book.

 

One further potential source of confusion is the term threads. This can refer to either hardware or software threads. A software thread is a stream of instructions that the processor executes; a hardware thread is the hardware resources that execute a single software thread. A multicore processor has multiple hardware threads—these are the virtual CPUs. Other sources might refer to hardware threads as strands. Each hardware thread can support a software thread.
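
To make the distinction concrete, the following minimal sketch (an illustration, not code from the text, assuming a POSIX system with pthreads) creates one additional software thread; the operating system then schedules that stream of instructions onto one of the processor's hardware threads.

#include <pthread.h>
#include <stdio.h>

void *work(void *arg)
{
    /* This stream of instructions runs on whichever hardware thread
       the operating system chooses. */
    printf("Hello from a second software thread\n");
    return NULL;
}

int main(void)
{
    pthread_t thread;
    pthread_create(&thread, NULL, work, NULL);  /* start a new software thread */
    pthread_join(thread, NULL);                 /* wait for it to complete */
    printf("Hello from the main software thread\n");
    return 0;
}

The same idea applies on Windows with CreateThread(); in each case the software thread is the unit of work that the operating system places onto a virtual CPU.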

 

A system will usually have many more software threads running on it than there are hardware threads to simultaneously support them all. Many of these threads will be inactive. When there are more active software threads than there are hardware threads to run them, the operating system will share the virtual CPUs between the software threads. Each thread will run for a short period of time, and then the operating system will swap that thread for another thread that is ready to work. The act of moving a thread onto or off the virtual CPU is called a context switch.
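
The effect can be observed with a small experiment. The sketch below is an illustration, again assuming a POSIX system with pthreads: it creates more compute-bound software threads than most systems have virtual CPUs, so the operating system must context switch between them for all of the threads to make progress.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 32                       /* deliberately more than most systems' virtual CPUs */

void *spin(void *arg)
{
    volatile long counter = 0;
    for (long i = 0; i < 100000000; i++)  /* compute-bound loop to keep a virtual CPU busy */
        counter++;
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    printf("Running %d software threads on %ld virtual CPUs\n", NTHREADS, ncpus);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, spin, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

On Linux and Solaris, a tool such as vmstat will show the rate of context switches rising while a program like this runs.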

