WHY WE’RE BUILDING PARALLEL SYSTEMS
Much of the tremendous increase in single-processor performance has been driven by the ever-increasing density of transistors (the electronic switches) on integrated circuits. As the size of transistors decreases, their speed can be increased, and the overall speed of the integrated circuit can be increased. However, as the speed of transistors increases, their power consumption also increases. Most of this power is dissipated as heat, and when an integrated circuit gets too hot, it becomes unreliable. In the first decade of the twenty-first century, air-cooled integrated circuits reached the limits of their ability to dissipate heat. Therefore, it is becoming impossible to continue to increase the speed of integrated circuits. However, the increase in transistor density can continue, at least for a while. Also, given the potential of computing to improve our existence, there is an almost moral imperative to continue to increase computational power. Finally, if the integrated circuit industry doesn't continue to bring out new and better products, it will effectively cease to exist.
How, then, can we exploit the continuing increase in transistor density? The answer is parallelism. Rather than building ever-faster, more complex, monolithic processors, the industry has decided to put multiple, relatively simple, complete processors on a single chip. Such integrated circuits are called multicore processors, and core has become synonymous with central processing unit, or CPU. In this setting, a conventional processor with one CPU is often called a single-core system.
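As a concrete illustration of the multicore idea, the short C sketch below simply asks the operating system how many cores (logical CPUs) it reports. It assumes a POSIX-style system such as Linux or macOS; the sysconf call with _SC_NPROCESSORS_ONLN is a widely supported extension on those platforms, not something defined in this text.

    #include <stdio.h>
    #include <unistd.h>   /* sysconf */

    int main(void) {
        /* _SC_NPROCESSORS_ONLN reports the number of logical CPUs (cores)
           currently online.  It is a common extension on Linux and macOS
           rather than part of strict ISO C. */
        long cores = sysconf(_SC_NPROCESSORS_ONLN);

        if (cores < 1) {
            fprintf(stderr, "Unable to determine the number of cores\n");
            return 1;
        }
        printf("This system reports %ld core(s)\n", cores);
        return 0;
    }

On a single-core system this prints 1; on a typical modern multicore machine it prints the number of cores the hardware exposes, which may include hardware threads as well as physical cores.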