On Disingenuous Analysis and Transparency

So, I was perusing security blogs this weekend when I read an interesting entry by Mark Cox of Red Hat about transparency, in which he says “…the Microsoft PR engine has been churning out disingenuous articles and doing demonstrations based on vulnerability count comparisons.”

In general, I think Mark’s a good guy with a hard job, doing the best he can to be open and transparent.  In my opinion, his team does a far better job with security advisory communications than, for example, Novell SuSE.  However, with an accusation such as the above, I think Mark is being … now, what’s that word?  Oh, yes … disingenuous.

Now, I look at security metrics a lot.  A lot.  Why?  When you are trying to drive change, it is useful to measure your progress.  In the case of Microsoft’s security improvement efforts, this means I use the metrics to look at two basic types of comparisons:

1.  Microsoft products against previous releases, where there was some process change targeted at helping improve security, and

2.  Microsoft products against similar industry offerings
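
To give a flavor of comparison 1, here’s a minimal sketch of how such a benchmark might be tabulated.  The product name, the SDL framing, and every count below are made-up placeholders for illustration, not real vulnerability data:

```python
# Minimal sketch of a version-over-version benchmark.
# Product names and counts are made-up placeholders, not real data.

counts = {
    "ProductX 1.0 (pre-SDL)":  {"critical": 12, "important": 30, "moderate": 18},
    "ProductX 2.0 (post-SDL)": {"critical": 5,  "important": 22, "moderate": 20},
}

for version, severities in counts.items():
    total = sum(severities.values())
    print(f"{version}: {total} total, {severities['critical']} critical")

# Relative benchmark: percent change in critical vulnerabilities between releases.
before = counts["ProductX 1.0 (pre-SDL)"]["critical"]
after = counts["ProductX 2.0 (post-SDL)"]["critical"]
print(f"Change in criticals: {100 * (after - before) / before:+.1f}%")
```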

Note that I view both of these as a way of benchmarking, not as an absolute indication of security.  So, if we apply the Security Development Lifecycle (SDL) to a product and can measure an improvement over the previous version that did not benefit from the SDL, it shows progress, but it does not mean the new version “is secure.”

Similarly, if I can objectively compare a Microsoft product with a similar industry offering, it gives me some relative measure of security between the two, but it does not mean that either one could not be the foundation for a critical deployment, given the right skills and resources.  Let’s face facts: a Unix security expert will be better able to reduce risk on a Unix system than on a Windows system.  Likewise, a Windows expert may lack key skills to assure ongoing protection of a Linux or Unix system.

Additionally, I think standards should be thought out and applied consistently.  So, if you adopt a certain methodology, you should apply it consistently to each system you analyze.  I also think you should analyze things from multiple points of view.  Apply the swap test to a finished analysis by switching product identities: would you come to the same conclusion if the identities were switched, or are you biasing your analysis?  And be repeatable: can someone else duplicate the work with the information available, and will they get the same results?
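
To make the swap test concrete, here’s a minimal sketch of how it might be automated.  Everything here is hypothetical (the product names, the counts, and the deliberately biased comparator); the idea is simply to run the same analysis twice with the product labels exchanged and check that the same underlying data wins both times:

```python
# Minimal sketch of the swap test.  Names and counts are hypothetical.
# An unbiased methodology should reach the mirrored conclusion when the
# product labels are exchanged; a biased one follows the label instead.

def fair_compare(a, b):
    """Fewer disclosed vulnerabilities wins; equal counts are a tie."""
    if a["vulns"] == b["vulns"]:
        return "tie"
    return a["name"] if a["vulns"] < b["vulns"] else b["name"]

def biased_compare(a, b):
    """Same rule, except ties are quietly awarded to a favorite product."""
    if a["vulns"] == b["vulns"]:
        return "ProductA"  # hidden bias: the verdict follows the label
    return a["name"] if a["vulns"] < b["vulns"] else b["name"]

def swap_test(analyze, data_a, data_b):
    """True if the verdict tracks the data rather than the product label."""
    r1 = analyze({"name": "ProductA", **data_a}, {"name": "ProductB", **data_b})
    r2 = analyze({"name": "ProductB", **data_a}, {"name": "ProductA", **data_b})
    # Map each verdict back to the dataset it actually points at.
    back1 = {"ProductA": "first", "ProductB": "second"}.get(r1, "tie")
    back2 = {"ProductB": "first", "ProductA": "second"}.get(r2, "tie")
    return back1 == back2

tie_data = ({"vulns": 40}, {"vulns": 40})
print(swap_test(fair_compare, *tie_data))    # True:  label-independent
print(swap_test(biased_compare, *tie_data))  # False: conclusion followed the label
```

The repeatability question is the same discipline one level up: publish the data and the method, and anyone should be able to rerun the comparison and get the same results.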

Okay, this probably all seems like an esoteric discussion that would be much more interesting with specific examples.  I’ll get there, I promise, but first think about what else would be important to you in performing comparative security analysis, and share your thoughts before I get started.

I look forward to hearing your input on each of these as I move forward with specific examples, and I’ll ask you: is there a study, chart, or paper you think we should dissect here on the blog?  By Microsoft?  By Red Hat?  Someone else?

Jeff

About the Author
Jeff Jones

Principal Cybersecurity Strategist

Jeff Jones is a 27-year security industry professional who has spent the last decade at Microsoft working with enterprise CSOs and Microsoft's internal teams to drive practical and measurable security improvements into Microsoft products and services. Additionally, Jeff analyzes vulnerability trends.