EVALUATING OPERATING SYSTEM PERFORMANCE
The scheduling
policy does not tell us all that we would like to know about the performance of
a real system running processes. Our analysis of scheduling policies makes some
simplifying assumptions:
■ We have assumed that context switches require zero time.
■ We have also assumed that processes don’t interact, but the cache causes the execution of one program to influence the execution time of other programs.
The techniques
for bounding the cache-based performance of a single program do not work when
multiple programs are in the same cache. Many real-time systems have been
designed based on the assumption that there is no cache present, even though
one actually exists. This grossly conservative assumption is made because the
system architects lack tools that permit them to analyze the effect of caching.
Since they do not know where caching will cause problems, they are forced to
retreat to the simplifying assumption that there is no cache. The result is
extremely overdesigned hardware, which has much more computational power than
is necessary. However, just as experience tells us that a well-designed cache
provides significant performance benefits for a single program, a properly
sized cache can allow a microprocessor to run a set of processes much more
quickly. By analyzing the effects of the cache, we can make much better use of
the available hardware.
Li and
Wolf [Li99] developed a model for estimating the performance of multiple
processes sharing a cache. In the model, some processes can be given
reservations in the cache, such that only a particular process can inhabit a
reserved section of the cache; other processes are left to share the cache. We
generally want to use cache partitions only for performance-critical processes
since cache reservations are wasteful of limited cache space. Performance is
estimated by constructing a schedule, taking into account not just the execution
time of the processes but also the state of the cache. Each process in the
shared section of the cache is modeled by a binary variable: 1 if present in
the cache and 0 if not. Each process is also characterized by three total
execution times: assuming no caching, with typical caching, and with all code
always resident in the cache. The always-resident time is unrealistically
optimistic, but it can be used to find a lower bound on the required schedule
time. During construction of the schedule, we can look at the current cache
state to see whether the no-cache or typical-caching execution time should be
used at this point in the schedule. We can also update the cache state if the
cache is needed for another process. Although this model is simple, it provides
much more realistic performance estimates than assuming the cache either is
nonexistent or is perfect. Example 6.9 shows how cache management can improve CPU utilization.
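To make the bookkeeping concrete, the following C fragment is a minimal sketch of this kind of cache-aware schedule estimation, not the actual algorithm from [Li99]. The task_t structure, the function names, the made-up execution times, and the assumption that the shared section holds only one process at a time are all illustrative choices: each process carries its three execution times, the scheduler picks the no-cache or typical-caching time from the current cache state, and the state is updated as each process runs.

```c
#include <stdio.h>

/* Hypothetical per-process data for a cache-aware schedule sketch.
 * Each process carries the three execution times described in the
 * text: with no caching, with typical caching, and with its code
 * always resident in the cache. */
typedef struct {
    const char *name;
    int t_nocache;   /* execution time assuming no cache          */
    int t_typical;   /* execution time with typical caching       */
    int t_resident;  /* execution time with code always in cache  */
    int reserved;    /* 1 if the process owns a cache reservation */
    int in_cache;    /* binary cache-state variable: 1 = resident */
} task_t;

/* Walk one execution order, choosing the no-cache or typical-caching
 * time from the current cache state and updating that state as each
 * process runs.  As a simplifying assumption, the shared (unreserved)
 * section holds one process at a time, so running a shared process
 * evicts the previous occupant. */
static int schedule_length(task_t *tasks, const int *order, int m)
{
    int total = 0;
    int shared_owner = -1;  /* index of the process in the shared section */

    for (int k = 0; k < m; k++) {
        task_t *t = &tasks[order[k]];
        total += t->in_cache ? t->t_typical : t->t_nocache;

        if (t->reserved) {
            t->in_cache = 1;  /* its partition is never overwritten */
        } else {
            if (shared_owner >= 0 && shared_owner != order[k])
                tasks[shared_owner].in_cache = 0;  /* evicted */
            shared_owner = order[k];
            t->in_cache = 1;
        }
    }
    return total;
}

int main(void)
{
    /* The numbers here are made up purely for illustration. */
    task_t tasks[] = {
        { "P1", 8, 5, 4, 1, 0 },  /* performance-critical, reserved */
        { "P2", 6, 4, 3, 0, 0 },  /* shares the unreserved section  */
        { "P3", 7, 5, 4, 0, 0 },
    };
    const int order[] = { 0, 1, 2, 0, 1, 2 };  /* two rounds */
    int m = (int)(sizeof(order) / sizeof(order[0]));

    int estimate = schedule_length(tasks, order, m);

    /* Lower bound: every execution at its always-resident time. */
    int bound = 0;
    for (int k = 0; k < m; k++)
        bound += tasks[order[k]].t_resident;

    printf("estimated schedule length = %d (lower bound %d)\n",
           estimate, bound);
    return 0;
}
```

Summing the always-resident times over the same execution order gives the optimistic lower bound on the schedule length that the estimate can be compared against.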
POWER MANAGEMENT AND OPTIMIZATION FOR PROCESSES
Going into a low-power mode takes time; generally, the more that is shut off, the longer the delay incurred during restart. Because power-down and power-up are not free, modes should be changed carefully. Determining when to switch into and out of a power-up mode requires an analysis of the overall system activity.
■ Avoiding a power-down mode can cost unnecessary power.
■ Powering down too soon can cause severe performance penalties.
Re-entering run mode typically costs a considerable amount of time.
A straightforward method is to power up the system when a request is received. This works as long as the delay in handling the request is acceptable. A more sophisticated technique is predictive shutdown. The goal is to predict when the next request will be made and to start the system just before that time, saving the requestor the start-up time. In general, predictive shutdown techniques are probabilistic: they make guesses about activity patterns based on a probabilistic model of expected behavior. Because they rely on statistics, they may not always correctly guess the time of the next activity.
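The trade-off can be illustrated with a small sketch of one such policy; it is a simplified example, not a specific published predictive-shutdown algorithm. The power_policy_t fields, the trace of idle intervals, and the use of an exponentially weighted average of past idle times as the predictor are all assumptions made for the illustration: the device is powered down at the start of an idle period only when the predicted idle time exceeds a crude break-even point set by the shutdown and wake-up delays.

```c
#include <stdio.h>

/* Hypothetical parameters for a power-managed device; all numbers are
 * illustrative, not taken from any particular part. */
typedef struct {
    double t_down;     /* time to enter the low-power mode          */
    double t_up;       /* time to return to run mode                */
    double predicted;  /* current prediction of the next idle time  */
    double alpha;      /* weight given to the latest observation    */
} power_policy_t;

/* Decide, at the start of an idle period, whether to power down.
 * The break-even point used here is just the shutdown plus wake-up
 * delay; a real analysis would also weigh run-mode and sleep-mode
 * power consumption. */
static int should_power_down(const power_policy_t *p)
{
    return p->predicted > (p->t_down + p->t_up);
}

/* Fold the observed idle time into the prediction.  An exponentially
 * weighted moving average stands in for the probabilistic activity
 * model described in the text. */
static void observe_idle(power_policy_t *p, double idle)
{
    p->predicted = p->alpha * idle + (1.0 - p->alpha) * p->predicted;
}

int main(void)
{
    power_policy_t pol = { 2.0, 5.0, 10.0, 0.5 };
    /* A made-up trace of idle-interval lengths between requests. */
    const double trace[] = { 20.0, 3.0, 4.0, 30.0, 25.0, 2.0 };
    int n = (int)(sizeof(trace) / sizeof(trace[0]));

    for (int i = 0; i < n; i++) {
        int down = should_power_down(&pol);
        /* Wake-up latency is paid on the next request only if we slept. */
        double latency = down ? pol.t_up : 0.0;
        printf("idle %5.1f: %-17s wake-up latency %.1f\n",
               trace[i], down ? "power down," : "stay in run mode,", latency);
        observe_idle(&pol, trace[i]);
    }
    return 0;
}
```

A wrong guess shows up either as wasted energy (staying awake through a long idle period) or as added latency on the next request (sleeping through a short one), which is exactly the risk the probabilistic model carries.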