"Crawling" Toward SDL

One of the phrases I often hear during vision and strategy planning meetings at Microsoft is “What is the crawl, walk, run?” We use it to separate the initial activities that get us moving quickly toward our larger goals from the follow-on activities that require longer preparation or planning. As I help non-Microsoft companies adopt SDL in their development lifecycles, this “crawl” phase toward full adoption is very important. Usually someone in an organization picks up on the principles of SDL and is ready to roll them out immediately, but then faces competing interests that complicate full adoption: the team is mid-stream in development, short on budget, or management wants to see clear evidence before investing in the changes needed to support full SDL adoption.

Since we usually focus on how to roll out the full lifecycle, I want to take a shot at defining what it means to start “crawling” toward SDL. One very important note before I start: what I describe below is not Microsoft’s SDL process. It matches some of the tools and principles, but it does not encompass the holistic application security solution provided by SDL.

In my mind, to start crawling toward SDL you need to execute on some of the core principles, and they obviously need to be low-cost and effective. I summarize these as three components:

1. Detailed awareness of your architecture and its attack surface.
2. Tools that will perform security analysis on your application.
3. Results that show how the analysis resulted in improved security.

The good news is that you can attain these components with tools that are already available. The one consistent minimum requirement is that your code compiles/builds within Visual Studio 2005 SP1. The SP1 part matters because some of the defenses I discuss below first shipped in that version. Let’s look at some of the tools you can use to get “crawling” toward SDL today:

Detailed awareness of your architecture and its attack surface

Threat Modeling

Even if you are past the design phase, assign someone to build a retrospective threat model (perhaps as part of a pre-release review). It will likely give you a better understanding of your overall architecture and uncover holes in places you had overlooked.

Tools that will perform security analysis on your application

This is probably one of the most often discussed topics around SDL, so I’ll spend some time providing more detail. Let’s break this down by how it impacts different parts of your team or organization: developers, testers, and operations.

Developers

You should start by strengthening your compiler defenses. Depending on whether you are writing native or managed code, these will differ.

For C and C++ code:

Strengthen your compiler defenses

· Use the latest compiler and linker, because important defenses are added by the tools.
· If you are using Visual C++, use Visual Studio 2005 SP1 or later.
· Compile with the appropriate compiler flags:
o Compile clean at the highest possible warning level.
o Compile with /GS to detect stack-based buffer overruns.
· Link with the appropriate linker flags: /NXCompat to get NX (DEP) defenses, /DynamicBase to get ASLR, and /SafeSEH to get exception handler protections. (A sample build command line follows this list.)

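To make these flags concrete, here is a hedged sketch of a build command line for a hypothetical source file, widget.cpp; the switches reflect Visual Studio 2005 SP1 and may differ slightly in later toolsets.

// widget.cpp – a hypothetical file, shown only to illustrate the flags above.
//
// Compile and link from a Visual Studio 2005 SP1 (or later) command prompt:
//
//   cl.exe /W4 /GS /analyze /c widget.cpp
//   link.exe /NXCOMPAT /DYNAMICBASE /SAFESEH widget.obj
//
// /W4           compile clean at the highest warning level
// /GS           stack-based buffer overrun detection
// /analyze      PREfast static analysis (see "Run these tools habitually" below)
// /NXCOMPAT     marks the image as compatible with DEP (NX)
// /DYNAMICBASE  opts the image in to ASLR (requires the VS2005 SP1 linker)
// /SAFESEH      safe exception handler table (x86 builds only)

int main()
{
    return 0;
}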

Do not use banned APIs in new code

· Use the #include “banned.h” header file to find banned C/C++ functions in your code quickly. The header file is included on the companion disc of the Security Development Lifecycle book. (A short before-and-after sketch follows this list.)
· Compile regularly with /W4 and fix all C4996 (banned C runtime function) warnings.

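Here is a hedged before-and-after sketch (the function and its buffers are hypothetical, not from the SDL book) of the kind of substitution banned.h pushes you toward; strsafe.h ships with Visual Studio 2005 and the Windows SDK.

#include <windows.h>
#include <strsafe.h>    // StringCch* bounded string functions
#include "banned.h"     // from the SDL book's companion disc; flags banned calls at compile time

// Builds "Hello, <userName>" into a caller-supplied buffer.
// Banned version:  strcpy(buffer, "Hello, ");  strcat(buffer, userName);
// Safer version:   bounded copies that always null-terminate and never
//                  write past bufferChars characters.
void BuildGreeting(const char* userName, char* buffer, size_t bufferChars)
{
    StringCchCopyA(buffer, bufferChars, "Hello, ");
    StringCchCatA(buffer, bufferChars, userName);
}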

For all Languages:

Strengthen your compiler defenses

· Use the latest compiler, linker, and libraries, because defenses are added by the tools and the code.
o If you are using C#, use C# 2.0 or later; if you are using VB.NET, use version 8.0 or later.
· Use .NET Framework 2.0 or later.
· Do not use weak cryptography in new code.
o Use only AES, RSA, and SHA-256 (or better).
· Prevent XSS vulnerabilities by using filtering and escaping libraries around all Web output.
· Secure your SQL queries by using only prepared SQL statements – no string concatenation or string replacement. (A sketch follows this list.)

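Here is a hedged C/C++ sketch of that last bullet using ODBC prepared statements; the data source name, table, and credentials are hypothetical placeholders, error handling is trimmed for brevity, and in managed code the equivalent is a SqlCommand with SqlParameter objects.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>
// Link with odbc32.lib.

int main(void)
{
    SQLHENV  env  = NULL;
    SQLHDBC  dbc  = NULL;
    SQLHSTMT stmt = NULL;
    SQLINTEGER userId = 42;          // imagine this value came from the user
    SQLCHAR    name[128] = {0};
    SQLLEN     nameLen = 0;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR*)"AppDb", SQL_NTS,
               (SQLCHAR*)"appUser", SQL_NTS, (SQLCHAR*)"appPassword", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    // The untrusted value is bound as a typed parameter; it is never
    // concatenated into the SQL text, so it cannot change the query's shape.
    SQLPrepare(stmt, (SQLCHAR*)"SELECT Name FROM Users WHERE Id = ?", SQL_NTS);
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, &userId, 0, NULL);

    if (SQL_SUCCEEDED(SQLExecute(stmt)) && SQL_SUCCEEDED(SQLFetch(stmt)))
    {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &nameLen);
        printf("Name: %s\n", name);
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}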

Run these tools habitually

· PREfast (in Visual Studio 2005, use the /analyze compiler option) – a static analysis tool that identifies defects in C/C++ programs and enables quick desktop error detection on small code bases. (A small annotated example follows this list.)
· FxCop – an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about them.
· Application Verifier (AppVerif) – a runtime verification tool that detects, and helps you debug, memory corruption, critical security vulnerabilities, and limited user account privilege issues.

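As a hedged illustration of what PREfast can verify (my example, not something from the tools list above): annotating buffer parameters with the SAL macros that windows.h pulls in gives /analyze the size information it needs to flag call sites that pass undersized buffers. The function below is hypothetical.

#include <windows.h>   // pulls in the SAL annotation macros (__in, __out_ecount, ...)
#include <wchar.h>

// __out_ecount(destChars) tells /analyze that 'dest' must have room for at
// least destChars wide characters, so a mismatched caller is reported as a
// warning at compile time instead of surfacing as a buffer overrun at run time.
void CopyName(__in const wchar_t* src,
              __out_ecount(destChars) wchar_t* dest,
              size_t destChars)
{
    wcscpy_s(dest, destChars, src);   // bounded copy that null-terminates
}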

Testers

James Whittaker has covered testing in the SDL on this blog in the past. In a “crawl” scenario, you need to keep it simple while maximizing the value of the output. I would recommend focusing on fuzz testing, which is likely something you will need to invest some time building (a minimal sketch of one approach appears below). Scott Lambert’s article on Fuzz Testing at Microsoft and the Triage Process provides some good guidance on how to think through what type(s) of fuzzing to exercise against your application.

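To show how small a starting point can be, here is a hedged sketch of a “dumb” mutation fuzzer (my own illustration, not a Microsoft tool). It assumes a hypothetical target executable that takes an input file path on its command line; in practice you would run the target under a debugger or Application Verifier so crashes are captured for triage.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

int main(int argc, char* argv[])
{
    if (argc < 3)
    {
        printf("usage: fuzz <seed-file> <target-exe>\n");
        return 1;
    }

    // Load a well-formed seed input once.
    std::ifstream seed(argv[1], std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(seed)),
                           std::istreambuf_iterator<char>());
    if (data.empty())
    {
        printf("seed file is empty or unreadable\n");
        return 1;
    }

    srand(static_cast<unsigned>(time(NULL)));

    for (int i = 0; i < 1000; ++i)
    {
        // Mutate a copy of the seed by flipping a handful of random bytes.
        std::vector<char> mutated = data;
        for (int j = 0; j < 8; ++j)
        {
            mutated[rand() % mutated.size()] = static_cast<char>(rand() % 256);
        }

        std::ofstream out("fuzzcase.bin", std::ios::binary);
        out.write(&mutated[0], static_cast<std::streamsize>(mutated.size()));
        out.close();

        // Run the target against the mutated file. A nonzero exit code is a
        // cheap signal to keep the case for triage; a real harness would watch
        // specifically for crashes and hangs.
        std::string cmd = std::string(argv[2]) + " fuzzcase.bin";
        if (system(cmd.c_str()) != 0)
        {
            printf("iteration %d: target returned nonzero; keeping fuzzcase.bin for triage\n", i);
            break;
        }
    }
    return 0;
}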

If you choose to expand beyond fuzz testing, I would point you back to James’ article on the broader topic of testing in SDL. You may conclude that expanded security testing belongs in your “walk” or “run” phases, but take some time to think through testing even while “crawling” to ensure you are getting broad enough coverage of your application. James’ article highlights the three-pronged approach to security testing we use at Microsoft; use these three approaches to ensure your own fuzz testing is comprehensive.

1. Attacks against the application’s environment.
2. Direct attacks against the application itself.
3. Indirect attacks against the application’s functionality.

Results that show how the analysis resulted in improved security

Response planning

Protecting your customers is the entire reason for focusing energy on application security. If there are holes in your code that you don’t uncover, someone else will. It is absolutely critical that you are prepared to respond rapidly and protect your customers. It is equally important that your response plan serve as a front-line barometer of the resilience of your security design and of which pieces of your application’s security should be proactively bolstered to address externally reported vulnerabilities. The knowledge you harvest from these security incidents (typically through root cause analysis) is the primary way to improve your code and security tooling for the future. Do everything you can to learn lessons from the vulnerabilities others find. If you don’t have a response plan, you need to get one in place as soon as possible. If you don’t know where to start, take a look at how our own Microsoft Security Response Center does it and adapt it to your scale, or pick up the Security Development Lifecycle book and dig into the four-step process outlined there.

The four steps of the emergency response process:

1. Watch
2. Alert and Mobilize
3. Assess and Stabilize
4. Resolve

Bugs, Bugs, Bugs

Gathering evidence that clearly shows your work has improved the security of your application is always a challenge, and trying to keep the process lightweight adds to that challenge. The most effective way to create traceable, practical evidence without a lot of overhead is detailed management of security issues in your bug database. The key is that your bug database must be configurable and queryable in a variety of ways so you can pull this data out. From the time you set out to implement this plan, be strict in tracking every discovery from threat modeling, the mitigations to those threats, and every bug you expose through tool analysis. This library of security bugs gives you an easy way to go back and gather evidence that shows the quantity of issues you discovered, the mitigations you used, and the impact the changes had on your application.

I have provided a fairly detailed view of these components. As I indicated, many of these defenses are available to you in Visual Studio 2005 SP1 or through the resources linked above. If you are unsure whether you are taking advantage of all the defenses available in your development tools, take the time to check.

It is my hope that some of you can use this scaled-back entry into the principles of SDL to get moving toward improved security assurance. In the non-Microsoft SDL engagements I have been involved in, these steps have effectively established a baseline architectural understanding of application security and identified critical weaknesses, while providing solid evidence to support the decision to “run” forward into full SDL adoption.

[I want to thank Michael Howard for providing some of the key data for the Developer pieces in this article.]

edit – 7/28/08: adding tag for Crawl Walk Run series.

Join the conversation

  1. Philip.Agcaoili

    Fuzzing and Threat Modeling are a little advanced for Crawling.

    I’d also add that Awareness, Training, and Education are necessary in this phase.

    The adoption of tools to verify what was trained is a great idea for this phase. Many folks are still evolving from black-box application security testing tools, so the move to source code analysis is a major hurdle and an organizational shift.

    There is a huge element that is resistant to this shift, so good luck Crawling!

  2. Philip.Agcaoili

    Do you have good references for compiler defenses, banned APIs, and secure, reusable libraries?

    A basic, publicly available reference will help many in the Crawling phase.

    Thanks,

    Phil Agcaoili

  3. jdallman

    Phil, my apologies for the delayed response. Thank you for taking the time and please feel free to keep up the conversation!

    You make some good points about raising awareness through training and education. Based on what I’ve seen, that is typically quite a challenge for a company (or small group of people within a company) to get going… so I left it as an informal component until "Walking".

    Although Fuzzing and Threat Modeling may be perceived as more advanced practices, I think that Threat Modeling in particular is one of the most effective ways to raise awareness of security risks in your products. I would encourage anyone crawling to perform threat modeling as a way to educate themselves in security practices as well as the practical security of their own product.

    At "crawl", I suspect any fuzzing would need to be either outsourced or basic and manual. However, any amount of fuzzing that can be done will likely find bugs to fix. These bugs in turn become your evidence for broader fuzzing efforts as you mature.

  4. Philip.Agcaoili

    Thanks for the info. These are the examples that Michael Howard forwarded to me as well. Go figure.

    We’re driving SOA security standards, so anything you have that could assist us here as well would be appreciated.
