Linus’s Law aka "Many Eyes Make All Bugs Shallow"

How many of you have heard “many eyes make all bugs shallow”?  My guess is that many of you have, and that it may have been in conjunction with an argument for why Linux and Open Source products have better security.  For example, Red Hat publishes a document at www.redhat.com/whitepapers/services/Open_Source_Security5.pdf, which they commissioned from TruSecure (www.trusecure.com), that has a whole section called “Strength in Numbers: The Security of ‘Many Eyeballs’” and says:

The security benefits of open source software stem directly from its openness. Known as the “many eyeballs” theory, it explains what we instinctively know to be true – that an operating system or application will be more secure when you can inspect the code, share it with experts and other members of your user community, identify potential problems and create fixes quickly.

 

It reads pretty well, but there are a few small problems.  For one, nothing really ties the second sentence (the key one) back to the first.  Second, the ability (“can”) to inspect code does not mean that the code actually gets inspected.  Let me emphasize this by applying similar marketing speak to a similar claim for closed source:

 

The security benefits of closed source software stem directly from its quality processes. Known as quality assurance, it explains what we instinctively know to be true – that an operating system or application will be more secure when qualified persons do inspect the code, [deleted unnecessary] identify potential problems and create fixes quickly.

 

I would argue that both statements are equally true or false, depending on the reality behind the implied assumptions.  For example, if qualified people are inspecting all parts of the open source code with the intent of finding and fixing security issues, the first is probably true.  For the latter, if a closed source organization does have a good quality process, it is likely finding and fixing more security issues than it would without that process.

 

Going Back to the Source:  The Cathedral and the Bazaar

 

Now I’ll ask a different question – how many of you have actually read The Cathedral and the Bazaar (CATB) by Eric S. Raymond (henceforth referred to as ESR)?  Shame on you if you have not.  It is really interesting, and to me, it asks more interesting questions than it answers … though I’ll try not to digress too much or too far.  Keeping to the core idea I want to discuss, let’s look at lesson #8 in CATB, as quoted:

Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

 

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

 

Or, less formally, “Given enough eyeballs, all bugs are shallow.” I dub this: “Linus’s Law”.

 

Even these statements have some implicit assumptions (e.g., that code churn doesn’t introduce new problems faster than the old ones are solved), but as I read through the lead-in context and rule #8, I can’t find anything to disagree with.  What I will note is that nothing in this limits his observation to Open Source.  Because many later references use the less formal “given enough eyeballs” paraphrase, they mentally prompt one to think about visual inspection; however, the original lesson doesn’t refer to visual inspection at all!

 

Though ESR was making observations and drawing lessons from Linus’ Linux experience and his own fetchmail experience, I assert that his lessons can be applied more broadly to any software.  Going a bit further in the text, we find another important part of the discussion:

My original formulation was that every problem “will be transparent to somebody”. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. “Somebody finds the problem,” he says, “and somebody else understands it. And I’ll go on record as saying that finding it is the bigger challenge.”

 

So, in finding and fixing issues, you need:

·         “Many eyes” identifying the issues, or a large enough beta-tester base (to take from lesson #8) so that almost every problem will be characterized, and

·         Enough developers working on fixing issues so that a fix can be developed and deployed

 

ESR chronicles a lot of interesting stuff in CATB and enumerates it as lessons, but one key point he does not elaborate upon is the enabling ability to communicate cheaply and quickly with his users/co-developers.  At the time, he used a mailing list.  UUCP news and public file servers were also available for communication and for sharing code and files.  What did this allow?  It allowed him to pretty easily find and connect with the roughly 300 people in the Western developed nations who shared his interest in an improved POP client / fetchmail.  Even 10 years prior, this would have been much more difficult.  But I digress too much … suffice it to say that cheap and easy communication and sharing made a distributed, volunteer, virtual team possible.

 

Applying the “Many Eyes” Lessons To Commercial Software

 

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

 

ESR contrasted two testing models.  Rather than paraphrase, it seems simplest to quote what he says next in CATB:

In Linus’s Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you’ve winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

 

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

 

Now in my experience with commercial products, I can honestly say I never thought of development problems as either deep or shallow.  I thought of flaws as lying across a spectrum, where some were simpler and easier to find and others might be deeper and have more challenging pre-conditions to replicate (e.g. timing, state).  I think that would apply to open or closed source.

 

So, ultimately, my analysis of what ESR describes is different in that I see the key difference as time and resources.  The Bazaar model (as he described) created a situation where more resources for both finding and fixing bugs were applied in parallel.  The Cathedral model (as he described) had (by implication) fewer resources that (therefore) needed to work over a longer period of time to achieve a similar level of quality.  This resource analysis makes sense to me, especially if you leave the models out of the equation for a moment.

 

Let’s step back.  What if you had an Open Source project working on a product where there were 5 core developers and about 20 co-developing users?  What if you had a comparable Closed Source project with 50 developers and 50 testers?  Assume both products have 500 active users over a one-year period reporting problems and requesting enhancements.  Does it seem likely that the Open Source project will find and fix more bugs simply because it is Open Source?  No.  The number of “eyes” matters, but so does the number of actively contributing developers.  This is consistent with what ESR says (…a large enough beta-tester and co-developer base…), but is not consistent with the common usage of the “many eyes” theory as quoted frequently in the press.
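The resource comparison above can be sketched as a toy back-of-envelope simulation (the `bugs_resolved` helper and every rate and count in it are invented purely for illustration; none of this comes from CATB): bugs found per week scale with the number of active testers, and bugs fixed per week scale with the number of developers.

```python
# Toy model: find-and-fix throughput as a function of tester and
# developer counts. All rates and numbers are invented for illustration.

def bugs_resolved(testers, developers, weeks=26,
                  find_rate=0.2, fix_rate=1.5, backlog=5000):
    """Bugs found and fixed over `weeks`, given per-person weekly
    rates and a finite pool of latent bugs."""
    found = fixed = 0
    for _ in range(weeks):
        found += min(backlog - found, int(testers * find_rate))
        fixed += min(found - fixed, int(developers * fix_rate))
    return found, fixed

# The two hypothetical projects from the text, each with 500 active users:
open_src = bugs_resolved(testers=500 + 20, developers=5 + 20)
closed_src = bugs_resolved(testers=500 + 50, developers=50)
print(open_src, closed_src)  # -> (2704, 962) (2860, 1950)
```

Under these made-up rates, the closed-source project fixes more bugs simply because it has more developers: the size of the user base (the “eyes”) bounds what gets found, while the developer count bounds what gets fixed.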

 

How can commercial companies apply this?  First, set up a process that facilitates reasonably frequent releases to large numbers of active users that will find and report problems.  Next, ensure that you have enough developers to fix the reported issues that meet your quality bar.  The CATB also identifies a need for problem reports to have an efficiency of communication that makes the problems easy to replicate and enables the developers to quickly solve the problem.  Finally, there are several more rules which are about being customer-focused, which any product manager would endorse:

7. Release early. Release often. And listen to your customers.

10. If you treat your beta-testers as if they’re your most valuable resource, they will respond by becoming your most valuable resource.

11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.

 

The “Many Eyes” of Microsoft

 

Finally, I would like to think about these issues in the context of how Microsoft currently releases products.

 

First, let’s take the core “many eyes” and consider “Given a large enough beta-tester and co-developer base…”  In CATB, Eric mentions that at its high point, he had over 300 people on his active mailing list contributing to the feedback process.  There are multiple levels at which the Microsoft product lifecycle works towards achieving many eyes.

 

Furthest from the development process are the millions and millions of users.  Take a product like Windows Server 2003, the next generation of Windows 2000 Server, and you find it has benefitted from every bug report from every user in terms of (what Linus described as the harder problem) making bugs shallow.  In the more recent product generations, the communication process has been advanced technically by Windows Error Reporting (WER aka Watson) and Online Crash Analysis (OCA).  Vince Orgovan and Will Dykstra gave a good presentation on the benefits of WER/OCA at WinHEC 2004 (which you can read here).  OCA also addresses another problem raised by CATB, that of communicating sufficient detail so a developer can properly diagnose a problem.  One might argue that a large percentage of users do not choose to send crash details back to Microsoft for analysis, and that brings us to the next item – Betas.

 

Microsoft releases Beta versions of products that see very high levels of day-to-day usage before final release.  When Windows XP SP2 was developed and released on a shortened one-year schedule, it benefitted from over 1 million Beta users during the process – each one using and installing their own combination of shareware, local utilities, custom developed applications and legacy applications on thousands of combinations of hardware.   I’ve been running Windows Defender (aka Antispyware) along with many other users for about 1.5 years now, through several releases.

 

Even before the external Beta stage, Microsoft employees are helping the product teams “release early and often” by dogfooding products internally.  Incidentally, I am writing this entry using Office 12 Beta running on Windows Vista Beta 2.  Dogfood testers may not seem like a lot until you consider that there are 55,000 employees and well over half of them will probably dogfood the major products.  High numbers of dogfood testers will certainly exercise OCA and will also run the internally deployed stress tools to help shake bugs out of the products.

 

There are other mechanisms I won’t go into in detail like customer councils, focus groups, customer feedback via support channels, feature requests, not to mention the Product Development and Quality Assurance teams themselves utilizing a variety of traditional and modern tools to find and fix issues.  The core process has even been augmented as described in The Trustworthy Computing Security Development Lifecycle to include threat modeling and source code annotation.

 

I could go on, but I think you get the picture.  Hopefully, this will stir some folks to think beyond the superficial meaning of “many eyes make all bugs shallow” the next time someone throws it out as a blind attempt to assert the superior security of Open Source.

 

Jeff

About the Author
Jeff Jones

Principal Cybersecurity Strategist

Jeff Jones is a 27-year security industry professional who has spent the last decade at Microsoft working with enterprise CSOs and Microsoft's internal teams to drive practical and measurable security improvements into Microsoft products and services. Additionally, Jeff analyzes vulnerability trends.