This document will be updated as time goes on. It is a repository for questions and answers related to analyses posted on my blog comparing vulnerability counts, days-of-risk and workload vulnerability indices for Windows and Linux distributions. If you have more questions, post them as comments and I’ll update with an answer as appropriate.
Best Regards ~ Jeff
Q1. Why is there a difference in “vulnerability fix events” and “unique vulnerabilities fixed” – what are they and what does that mean?
A unique vulnerability is just what it sounds like, typically represented by a unique CVE identifier (e.g. CVE-2006-0023). A “vulnerability fix event”
is when a patch is released that fixes a vulnerability. There can be multiple fix events for a vulnerability when a vendor does not fix all instances
on the same day. For example, Red Hat fixed CVE-2005-3625, a remote-high vulnerability in the ‘cups’ packages on 1/11/2006 and later fixed the same
vulnerability in the ‘tetex’ packages on 1/19/2006. This results in 1 unique vulnerability, but 2 fix events.
Q2. How do you treat multiple fix events in computing Days of Risk (DoR)?
The Days of Risk (DoR) calculation treats a vulnerability as unfixed in a software product as long as any instance of it remains unfixed. Therefore,
the date of the last fix event establishes the full DoR for that vulnerability. However, there is still an issue for the case where a customer may
have (to use the example from Q1) ‘cups’ deployed on a server, but not ‘tetex’. The only way to really handle that case is to study the set of
vulnerabilities that apply to a particular instance of a server.
Here is an example using CVE-2005-3625, from Q1 above, which has 44 DoR for Red Hat EL 4. Note that there are two fixes for this vulnerability: one 36 days
after disclosure for ‘cups’ and one 44 days after disclosure for ‘tetex’. This example argues strongly for fixing all instances as soon as possible
when they cannot all be fixed on the same day.
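The DoR rule above can be sketched in a few lines of Python. This is illustrative only: the function name is mine, and the disclosure date below is inferred from the stated 36- and 44-day gaps rather than given directly in the text.

```python
from datetime import date

def days_of_risk(disclosed: date, fix_events: list[date]) -> int:
    """A vulnerability stays 'unfixed' until its last fix event ships,
    so DoR is measured from disclosure to the latest fix date."""
    return (max(fix_events) - disclosed).days

# Disclosure date inferred from the 36/44-day gaps in the example above.
disclosed = date(2005, 12, 6)
fixes = [date(2006, 1, 11),   # CVE-2005-3625 fixed in 'cups'
         date(2006, 1, 19)]   # same CVE fixed later in 'tetex'
print(days_of_risk(disclosed, fixes))  # -> 44
```

Using only the ‘cups’ fix date would give 36 days, which is why the last fix event is the one that matters.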
Q3. What is the workload vulnerability index (wvi) and how is it calculated?
The workload vulnerability index (wvi) is defined by NIST at http://nvd.nist.gov/nvd.cfm?workloadindex as
((number of high-severity vulnerabilities published within the last 30 days) +
 (number of medium-severity vulnerabilities published within the last 30 days) / 5 +
 (number of low-severity vulnerabilities published within the last 30 days) / 20) / 30
Rather than a fixed 30 days, I use the number of days in the given month or period (e.g. 28 in February).
I first encountered the WVI used as a security quality metric (my term) when a variation of it appeared in Mark Cox’s “Risk Report: A Year of Red Hat Enterprise
Linux 4” (http://www.redhat.com/magazine/017mar06/features/riskreport/). Mark chose to use his own rating system rather than the ratings assigned by NIST.
I use the NIST ratings and formula.
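The formula translates directly into code. Here is a minimal sketch (the function name and the sample counts are mine, not NIST’s), with the period length as a parameter so a calendar month can be used instead of a fixed 30 days:

```python
def wvi(high: int, medium: int, low: int, days: int = 30) -> float:
    """NIST workload vulnerability index: a weighted daily rate in which
    5 medium or 20 low vulnerabilities weigh as much as 1 high."""
    return (high + medium / 5 + low / 20) / days

# Hypothetical February (28 days) with 3 high, 10 medium, 8 low vulns:
print(round(wvi(3, 10, 8, days=28), 3))  # -> 0.193
```

Note that the weighting makes high-severity vulnerabilities dominate the index; a month with a single high counts the same as one with 20 lows.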
Q4. Why do you use the NIST severity ratings rather than vendor ratings?
One might argue that although the vendor rating definitions are very similar, they may not be applied in a consistent manner, since different sets of
people apply them at each vendor. That is a problem. In order to base the metric upon a more independent source of ratings, I’ve chosen to use
the NIST ratings, as they are applied equally well (or badly) across multiple vendors by NIST, enabling IMO a more objective comparison.
Q5. Would results be vastly different if you used vendor ratings instead?
They could be, but my own observation is that over time they provide similar *relative* comparisons. That would not have to be the case, for a couple of reasons.
Vendors could apply the ratings differently. Also, vendors take into account their own architectural or configuration knowledge, which may lessen an
assigned rating. A good example of this is when Microsoft Windows 2000 and Windows Server 2003 (WS2003) are both affected by the same vulnerability, but
because of architectural protections (e.g. NX flag, enhanced security configuration), the vulnerability is less severe on WS2003. By using the
NIST ratings here, the metrics will typically reflect the ‘worst case’ rating and WS2003 would not get the benefit of a lesser rating. Similar instances
can be found for Red Hat, typically where a “High, Remote” vulnerability is rated less than “Critical” by Red Hat.
Q6. Why do the Monthly numbers not always add up to be the same as the totals for the period?
When a vendor fixes a vulnerability in one component in one month and then fixes it in another component in a later month, it is counted in each month. When you
consolidate across the entire period, however, those multiple fix events collapse into a single unique vulnerability, so the monthly numbers can add up to more than the period total. See Q2 about multiple fix events for more info.
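This counting difference can be shown with a toy fix-event log. The CVE identifiers and months below are invented for illustration: each month counts fix events, while the period total counts unique CVEs.

```python
from collections import Counter

# Invented data: one CVE fixed in two components across two months,
# plus a second CVE fixed once.
events = [
    ("CVE-AAAA-0001", "2006-01"),  # fixed in component A in January
    ("CVE-AAAA-0001", "2006-02"),  # fixed in component B in February
    ("CVE-AAAA-0002", "2006-02"),
]

per_month = Counter(month for _, month in events)
print(per_month["2006-01"], per_month["2006-02"])  # -> 1 2  (fix events)
print(len({cve for cve, _ in events}))             # -> 2    (unique CVEs)
```

Here the two monthly counts sum to 3 fix events, but the period total is only 2 unique vulnerabilities.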