Sidebar 3-4: Rapidly Approaching Zero
Y2K, or the year 2000 problem, when dire consequences were forecast for computer clocks with 2-digit year fields that would turn from 99 to 00, was an ideal problem: the threat was easy to define, the time of impact was easily predicted, and plenty of advance warning was given. Perhaps as a consequence, very few computer systems or people experienced significant harm early in the morning of 1 January 2000. Another countdown clock has computer security researchers much more concerned.
The time between general knowledge of a product vulnerability and the appearance of code to exploit that vulnerability is shrinking. The general exploit timeline follows this sequence:
· An attacker discovers a previously unknown vulnerability.
· The manufacturer becomes aware of the vulnerability.
· Someone develops code (called proof of concept) to demonstrate the vulnerability in a controlled setting.
· The manufacturer develops and distributes a patch or workaround that counters the vulnerability.
· Users implement the control.
· Someone extends the proof of concept, or the original vulnerability definition, to an actual attack.
As long as users receive and implement the control before the actual attack, no harm occurs. An attack that comes before the control is available is called a zero-day exploit. The time between proof of concept and actual attack has been shrinking. Code Red, one of the most virulent pieces of malicious code, exploited vulnerabilities in 2001 for which patches had been distributed more than a month before the attack. More recently, the time between vulnerability and exploit has steadily declined. On 18 August 2005, Microsoft issued a security advisory to address a vulnerability for which proof-of-concept code had been posted to the French Security Incident Response Team (FrSIRT) web site frsirt.org; a Microsoft patch was distributed a week later. On 27 December 2005 a vulnerability was discovered in Windows metafile (.WMF) files. Within hours, hundreds of sites began to exploit the vulnerability to distribute malicious code, and within six days a malicious code toolkit appeared, with which anyone could easily create an exploit. Microsoft released a patch nine days after the discovery.
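The exposure arithmetic in these incidents can be made concrete with a short calculation. The sketch below uses the WMF dates given above; exposure_days is a hypothetical helper, and the 5 January 2006 patch date is inferred from the "nine days" figure rather than stated in the text.

```python
from datetime import date

def exposure_days(disclosed: date, patched: date) -> int:
    """Days users remained exposed between public disclosure
    of a vulnerability and availability of a patch."""
    return (patched - disclosed).days

# WMF incident: vulnerability discovered 27 December 2005;
# Microsoft's patch followed nine days later (assumed 5 January 2006).
wmf_window = exposure_days(date(2005, 12, 27), date(2006, 1, 5))
print(wmf_window)  # 9
```

Any attack launched inside that window is, from the user's point of view, a zero-day exploit: no control yet exists to implement.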
But what exactly is a zero-day exploit? It depends on who is counting. If the vendor knows of the vulnerability but has not yet released a control, does that count as zero day, or does the exploit have to surprise the vendor? David Litchfield of Next Generation Software in the U.K. identified vulnerabilities and informed Oracle. He claims Oracle took an astonishing 800 days to fix two of them, and others were not fixed for 650 days. Other customers are disturbed by the slow patch cycle: Oracle released no patches between January 2005 and March 2006 [GRE06]. Distressed by the lack of response, Litchfield finally went public with the vulnerabilities to force Oracle to improve its customer support. Obviously, there is no way to determine whether a flaw is known only to the security community or to the attackers as well, unless an attack occurs.
The shrinking time between knowledge of a vulnerability and its exploit puts pressure on vendors and users alike, and time pressure is not conducive to good software development or system management. The worst problem cannot be controlled: vulnerabilities known to attackers but not to the security community.