Common Criteria and answering the question ‘Is it Safe?’

Hi all, Eric Bidstrup here.

 

One of the areas our group is also involved in is industry standards for security assurance, and Common Criteria (aka ISO 15408) is the standard internationally recognized by 24 governments (including the US, UK, Germany, Japan, and others). It’s interesting to consider that while all consumers of computer software want both confidence and detailed information about the security of the software they want to purchase (or have already purchased), Common Criteria (CC) has failed to gain broad acceptance and recognition in the private sector or in any community beyond government agencies. Microsoft has been very vocal in the CC community with suggestions as to why that is and how to modify CC for broader commercial acceptance, so I thought I’d share some of those thoughts here. Currently, Common Criteria fails to meet customer needs as a useful indicator of the likelihood of security vulnerabilities in software.

 

At a very fundamental level, when someone in either the private sector or a government agency considers purchasing or using a software product, one of the questions that may come up is “Is it Safe?” (Apologies for the lame and over-used “Marathon Man” movie reference.) I chose this imprecise reference to “safe” because most people don’t think deeply about what it means beyond “I don’t want bad things to happen to me or to people/property/data I care about.” In terms of software security, most people would think of all of the following as “bad”: viruses, worms, malware, hackers, criminals, and espionage. These items have one thing in common – all of them require a weakness (a “vulnerability”) in the software being used, and a way to exploit that vulnerability for a nefarious purpose. Security professionals have various frameworks for defining “safe” that usually factor in some of the following considerations:

 

1)      Value of protected assets

 

2)      Assumptions about the sophistication of, and level of resources available to, an attacker. Defining “attacker” can cover a spectrum that ranges from a well-intentioned but misguided employee, to people we commonly think of as “hackers”, to employees of a hostile intelligence service.

 

3)      Level of confidence/assurance that is sought by people responsible for protecting the assets noted in #1 from the attackers noted in #2.

 

Obviously, different customers will have different criteria for determining “Is it Safe?” Small businesses will have different needs from large multinational corporations, which will have different needs from government security agencies. To answer that question, security professionals require time (usually at substantial cost) to analyze not only the considerations above, but also the software itself, its intended use, the environment in which it will be used, and a variety of other factors. Consumers who are not security savvy will likely make judgments based on sound bites from the media and intuition rather than on any specific data or analysis. The Internet can be a dangerous place; a computer with vulnerable software is an easier target than one without such software.

 

When considering what types of software vulnerabilities could occur, it is useful to distinguish three general categories:

 

1)      Design vulnerabilities – software that was not designed adequately to meet security requirements, needs, or expectations.

 

2)      Implementation vulnerabilities – software whose implementation contains flaws (coding errors such as buffer overruns) that expose risk even when the design is sound (a minimal sketch follows this list).

 

3)      Deployment vulnerabilities – software that was configured in deployment in a way that exposes risk which a different configuration would have prevented.
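
To make the distinction between the first two categories concrete, here is a minimal, hypothetical C sketch (the function names and the greeting scenario are my own illustration, not drawn from any evaluated product). Both functions share the same design intent; only the first contains an implementation deficiency.

```c
#include <stdio.h>
#include <string.h>

/* Implementation vulnerability: the design intent ("copy the caller-supplied
 * name into a local buffer and print a greeting") is sound, but the code
 * omits a bounds check, so a sufficiently long name overruns the 16-byte
 * stack buffer. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);                      /* no length check: buffer overrun */
    printf("Hello, %s\n", buf);
}

/* Same design, deficiency corrected: the copy is bounded by the buffer size,
 * so overly long input is truncated instead of overflowing the buffer. */
void greet_safe(const char *name)
{
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", name);
    printf("Hello, %s\n", buf);
}

int main(void)
{
    greet_unsafe("Eric");                                    /* short input: no harm */
    greet_safe("a deliberately long caller-supplied name");  /* long input: safely truncated */
    return 0;
}
```

A deployment vulnerability, by contrast, need not involve any flaw in the code at all; it arises when an otherwise sound product is configured insecurely in the field, for example by leaving an unnecessary network service enabled.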

 

Let’s talk about each of these in the context of Common Criteria.

 

For classes of products where protection profiles (PPs) have been defined, CC arguably does a reasonable job of addressing design vulnerabilities. A protection profile outlines customers’ interests and needs in terms of security features/functionality. Smart cards are a great example where the threats and risks to a class of products have been well defined and reflected in the protection profiles. Operating systems and DBMSs are other examples where useful protection profiles have been created. CC as currently applied is arguably deficient in two ways: 1) PPs don’t currently exist for many categories of products (mobile devices and instant messaging applications, for example). 2) An evaluation is not internationally “required” to be conducted against a PP (although the US has such policies). The former would be a solvable problem if industry were willing to step in and help lead the creation of protection profiles where none currently exist, as the smart card vendors have done. Solving the latter would require more fundamental policy changes by the governing bodies of Common Criteria, and presumes a solution exists to the former.

 

Where Common Criteria arguably does NOT do a reasonable job is in addressing implementation vulnerabilities. While CC does have some limited provisions that attempt to address this concern, real-world experience offers ample evidence that CC fails to meet customer (both government and private sector) needs and expectations for assurance that a given product does not contain implementation vulnerabilities that expose customers to risk. It has been our experience that customers typically don’t care whether they are exposed to risk from a design vulnerability or an implementation vulnerability; they care that they are exposed to risk. Period. When customers ask “Is it Safe?” they expect software that can be deployed and maintained to operate securely in the face of adversarial activity. The chairman of the Common Criteria Development Board (David Martin) agreed with these points in his presentation at the ICCC in Rome this year. It’s not that CC can’t do this; it’s just that it currently doesn’t. This is the area where Steve Lipner, myself, and others have pointed out repeatedly (maybe too repeatedly) that CC needs to improve.

 

As I mentioned above, Common Criteria also falls short of meeting customer needs in producing useful information that addresses deployment vulnerabilities. A CC evaluation is conducted against a specific configuration of a product known as the “Target of Evaluation” (aka TOE). Information in the TOE is expressed using CC language and syntax, which is typically not digestible by average IT personnel. The TOE is defined by the vendor, and may or may not reflect the product’s default installation configuration, or other common configurations reflecting how the product is deployed in the real world. In many cases, the guidance on deploying software securely is at odds with how the software is actually used. For example, as I recall, a few years ago an operating system was evaluated under the US Controlled Access Protection Profile in a configuration that had only an FTP server (configured for anonymous access) enabled. This sort of fiction doesn’t meet customer needs.

 

One of the other key challenges of Common Criteria today is the timeliness of completing CC evaluations. It typically takes 12 to 24 months or longer to complete an evaluation at the highest assurance level (EAL4) that can be attained by general-purpose commercial software products. Since software vendors typically release new major versions of their products at 18- to 36-month intervals, this creates a dilemma for customers in that CC evaluation results typically lag about one version behind the currently available version of a given product. Hence, adding the time and effort needed to address current CC deficiencies to a process that is already too slow to meet customer needs creates a real quandary.

 

This all leads up to asking some fundamental questions about the goals and purpose of Common Criteria. If CC simply validates conformance to a set of documented security feature requirements, then CC needs to better communicate this limited scope to its customers in order to set the expectation that it will “help keep honest people honest”, but that it is incomplete or inadequate in terms of assurance of the security of assets on a system. (CC is good in some bounded scenarios such as smart cards, but much less good in scenarios with larger-scale, more complex software.) If CC aspires to truly meet customer needs to answer the question “Is it Safe?”, then CC needs to consider the real-world evidence, in terms of vulnerability rates found in CC-evaluated products, to discover that it is currently failing to meet customer needs in that regard. Microsoft has had several products evaluated under CC (Microsoft Internet Security and Acceleration Server (ISA), Microsoft SQL Server 2005 SP1, Microsoft Exchange Server 2003, and several versions of Microsoft Windows). However, CC has been an insufficient answer to the question our customers ask: “Is it Safe?” The Security Development Lifecycle is what has made the difference in enabling Microsoft to successfully reduce vulnerabilities in our products.

 

If customers expect a real-world answer to the question “Is it Safe?” to be answered by Common Criteria, then Common Criteria must change.

About the Author
SDL Team

Trustworthy Computing, Microsoft


4 comments
  1. asteingruebl

    Eric,

    Thanks for the great post.  Another issue here is that we can’t do a CC evaluation for a group of products operating together.  CC evaluates single products, not deployed systems (at least in general, anyway; it could be extended).

    We can’t simply take two products evaluated at EAL4 and stick them together and have an EAL4 system.  So, we have to solve some of the composability problems in order to make this better.

    To your point, we’ve seen lots of operating systems evaluated to a certain level of assurance with all networking turned off, thus rendering them useless in most scenarios.

    Have you had much discussion along those lines in the CC forums?

  2. sdl

    Yes, “composability” is yet another dimension of the problem space (maybe a subject for another blog posting), as the interaction between CC-evaluated components is not really addressed currently. This subject has definitely come up in various discussions, but without much in terms of a solution. While I agree this is an important area, I’d assert that a solution to the basic issues noted in the post (assurance of individual products) is needed before the “composability” challenge can successfully be addressed.

  3. FraserH

    I would agree that CC is not the silver bullet solution, and whilst it provides evidence of how “safe” a system is, it in itself does not provide the complete answer.  I agree with the vulnerabilities described in your blog.  However, I feel that it would become increasingly costly for a supplier to try and move into the implementation and deployment vulnerability space with any degree of confidence that the investment is going to deliver an appropriate return, i.e. that the output will be usable by the system integrator.  I believe there is an onus on the system integrator to understand the security context of whatever capability they are delivering, and specifically the security risks/vulnerabilities associated with the implementation and deployment of that capability.  The challenge that we face is to be able to relate the ToE into whatever methodology the system integrator uses.  I feel that providing a framework which relates the CC assessment of design vulnerability risk to a wider implementation/deployment security risk assessment process may yield a better overall return.

  4. sdl

    Thanks Fraser. Your comment about the costs of software vendors investing in addressing implementation and deployment vulnerabilities is well taken. At Microsoft, we’ve adopted a very pragmatic view, investing in tools and techniques that prove effective at identifying vulnerabilities so that they can be effectively corrected. We feel the costs of doing this are lower than the alternative of losing business due to customer unhappiness with our software. Deployment guides are also challenging, but if the software vendor does not provide information on how to configure the software securely, who will? Microsoft provides such information, as can be found for Vista at the URL below.

    I’d agree with you that it is a challenge to provide (in the ToE or via other methods) information for a system integrator or any IT personnel to make informed security decisions on how to deploy a given software product in the context of the system in which it is deployed. Understanding the potential interactions of a given software product with all other software on the system can be challenging, and raises some interesting questions about what the expected knowledge and skill level of a system integrator (or other IT personnel) should be in order to have confidence they can successfully determine this…

    Windows Vista Security Guide:

    http://www.microsoft.com/downloads/details.aspx?familyid=a3d1bbed-7f35-4e72-bfb5-b84a526c1565&displaylang=en
