Good Design
We saw earlier in this chapter that modularity, information hiding, and encapsulation are characteristics of good design. Several design-related process activities are particularly helpful in building secure software:

- using a philosophy of fault tolerance
- having a consistent policy for handling failures
- capturing the design rationale and history
- using design patterns

We describe each of these activities in turn.
Designers should try to
anticipate faults and handle them in ways that minimize disruption and maximize
safety and security. Ideally, we want our system to be fault free. But in
reality, we must assume that the system will fail, and we make sure that
unexpected failure does not bring the system down, destroy data, or destroy
life. For example, rather than waiting for the system to fail (called passive fault detection), we might
construct the system so that it reacts in an acceptable way to a failure's
occurrence. Active fault detection
could be practiced by, for instance, adopting a philosophy of mutual suspicion.
Instead of assuming that data passed from other systems or components are
correct, we can always check that the data are within bounds and of the right
type or format. We can also use redundancy,
comparing the results of two or more processes to see that they agree, before
we use their result in a task.
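The ideas of mutual suspicion and redundancy can be sketched in code. The following is an illustrative sketch, not a prescribed implementation; the function names and the sensor bounds are invented for the example.

```python
def validate_reading(value):
    """Mutual suspicion: never assume data from another component is
    correct. Check its type and bounds before using it."""
    if not isinstance(value, (int, float)):
        raise TypeError(f"expected a number, got {type(value).__name__}")
    if not (0.0 <= value <= 100.0):  # assumed valid range for this example
        raise ValueError(f"reading {value} out of bounds [0, 100]")
    return float(value)

def read_with_redundancy(read_primary, read_backup, tolerance=0.5):
    """Redundancy: compare the results of two independent processes and
    accept them only if they agree, before using the result in a task."""
    a = validate_reading(read_primary())
    b = validate_reading(read_backup())
    if abs(a - b) > tolerance:
        raise RuntimeError(f"redundant readings disagree: {a} vs {b}")
    return (a + b) / 2.0

# Stand-in readers for illustration; real readers would query two
# independent components.
print(read_with_redundancy(lambda: 42.0, lambda: 42.3))
```

If the two readings disagree, the failure is detected actively, before the bad value propagates into the rest of the system.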
If correcting a fault is too
risky, inconvenient, or expensive, we can choose instead to practice fault tolerance: isolating the damage
caused by the fault and minimizing disruption to users. Although fault tolerance
is not always thought of as a security technique, it supports the idea,
discussed in Chapter 8, that our
security policy allows us to choose to mitigate the effects of a security
problem instead of preventing it. For example, rather than install expensive
security controls, we may choose to accept the risk that important data may be
corrupted. If in fact a security fault destroys important data, we may decide
to isolate the damaged data set and automatically revert to a backup data set
so that users can continue to perform system functions.
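The isolate-and-revert idea above can be sketched as follows, assuming a hypothetical record store protected by a checksum; all names and data are invented for illustration.

```python
import hashlib
import json

def checksum(records):
    """Integrity check over a list of records (illustrative)."""
    blob = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def load_records(primary, primary_sum, backup):
    """If the primary data set fails its integrity check, isolate it and
    revert to the backup so users can continue to perform system functions."""
    if checksum(primary) == primary_sum:
        return primary, "primary"
    # Damage detected: do not use the corrupted set; fall back instead.
    return backup, "backup"

good = [{"id": 1, "balance": 100}]
saved_sum = checksum(good)           # recorded when the data were known good
tampered = [{"id": 1, "balance": 999}]

data, source = load_records(tampered, saved_sum, good)
print(source)  # the corrupted primary was isolated; backup is used
```

Note that the fault is tolerated rather than corrected: the corrupted set is simply quarantined, and disruption to users is minimized.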
More generally, we can design or code defensively, just as we drive defensively, by constructing a consistent policy for handling failures. Typically, failures include

- failing to provide a service
- providing the wrong service or data
- corrupting data

We can build into the design a particular way of handling each problem, selecting from one of three ways:

- Retrying: restoring the system to its previous state and performing the service again, using a different strategy
- Correcting: restoring the system to its previous state, correcting some system characteristic, and performing the service again, using the same strategy
- Reporting: restoring the system to its previous state, reporting the problem to an error-handling component, and not providing the service again
This consistency of design
helps us check for security vulnerabilities; we look for instances that are
different from the standard approach.
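The three responses share a common first step (restore the previous state), which suggests one consistent handler rather than ad hoc recovery scattered through the code. The sketch below assumes a service that operates on a mutable state dictionary; the names and the policy strings are illustrative.

```python
def run_with_policy(service, state, policy, *,
                    fallback=None, correct=None, report=None):
    """Apply one consistent failure-handling policy: restore the previous
    state, then retry, correct, or report as configured."""
    snapshot = dict(state)                  # save state before the attempt
    try:
        return service(state)
    except Exception as exc:
        state.clear()
        state.update(snapshot)              # always restore previous state
        if policy == "retry" and fallback:
            return fallback(state)          # Retrying: a different strategy
        if policy == "correct" and correct:
            correct(state)                  # Correcting: fix a characteristic
            return service(state)           # ...then the same strategy again
        if policy == "report" and report:
            report(exc)                     # Reporting: hand off, no retry
            return None
        raise                               # no policy matched: propagate

def divide(state):
    return state["num"] / state["den"]

state = {"num": 10, "den": 0}
result = run_with_policy(divide, state, "correct",
                         correct=lambda s: s.update(den=1))
print(result)  # 10.0: state was restored, corrected, and the service rerun
```

Because every failure funnels through one handler, a reviewer checking for security vulnerabilities need only look for code paths that bypass it.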
Design rationales and history
tell us the reasons the system is built one way instead of another. Such
information helps us as the system evolves, so we can integrate the design of
our security functions without compromising the integrity of the system's
overall design.
Moreover, the design history
enables us to look for patterns, noting what designs work best in which
situations. For example, we can reuse patterns that have been successful in
preventing buffer overflows, in ensuring data integrity, or in implementing
user password checks.
Prediction
Among the many kinds of
prediction we do during software development, we try to predict the risks
involved in building and using the system. As we see in depth in Chapter 8, we must postulate which unwelcome
events might occur and then make plans to avoid them or at least mitigate their
effects. Risk prediction and management are especially important for security,
where we are always dealing with unwanted events that have negative
consequences. Our predictions help us decide which controls to use and how
many. For example, if we think the risk of a particular security breach is
small, we may not want to invest a large amount of money, time, or effort in
installing sophisticated controls. Or we may use the likely risk impact to
justify using several controls at once, a technique called "defense in
depth."
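The kind of cost–benefit reasoning described above is often made concrete with an annualized loss expectancy (ALE) calculation from risk analysis: the expected yearly loss is the impact of an event times its expected frequency, and a control is economically justified when the risk it removes exceeds its cost. The figures below are invented for illustration.

```python
def annualized_loss_expectancy(impact_per_event, events_per_year):
    """Expected yearly loss from one kind of unwelcome event."""
    return impact_per_event * events_per_year

def control_is_justified(ale_before, ale_after, annual_control_cost):
    """A control pays for itself when the risk reduction exceeds its cost."""
    return (ale_before - ale_after) > annual_control_cost

# Hypothetical breach: $50,000 impact, expected once per 10 years.
ale_before = annualized_loss_expectancy(50_000, 0.10)   # $5,000/year exposure
# With a proposed control, expected once per 100 years.
ale_after = annualized_loss_expectancy(50_000, 0.01)    # $500/year exposure

print(control_is_justified(ale_before, ale_after, 3_000))   # worth buying
print(control_is_justified(ale_before, ale_after, 6_000))   # not worth it
```

If the predicted risk is small, the arithmetic shows directly why sophisticated controls may not be worth their cost; a large predicted impact can likewise justify layering several controls in depth.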