Early Days of the SDL, Part Two

Hi everyone, Chris Walker here.

Prior to the Windows Security Push of February 2002, security testing was rather spotty in the Windows organization.  Both the Internet Explorer team and the Internet Information Server (IIS) team had mature efforts aimed at solving the problems that were being reported, but the rest of the organization often viewed security bugs as a distraction from their real work.


To combat this attitude, Michael Howard had been working on his Writing Secure Code book, and I had a team looking for security bugs through code reviews.  We had some tools in those days for finding bugs, but they were typically used only by our team, partly because many of them required in-depth knowledge to interpret the results.


One notable exception was PREfast, which had been developed as a collaborative effort by Tim Fleehart (who was on my team at the time) and Microsoft Research under Jon Pincus.  Tim had been using another MSR technology called the AST Toolkit to search for questionable code patterns.  Since this technology analyzed code after macro expansion, it uncovered more bugs than were immediately obvious from reading the raw source code alone.  Building on the AST technology, PREfast added a nice UI and reporting facility.  We had deployed PREfast onto Windows developers' desktops in the summer of 2001, and some developers were beginning to see its advantages.
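To illustrate why analyzing post-expansion code matters, here is a hypothetical macro of my own (not one from the Windows code base): the bug hides inside the macro, so every call site looks fine in the raw source, but the problem is plain in the expanded code that an AST-level tool sees.

```c
#include <assert.h>

/* Hypothetical macro with a classic parenthesization bug: at the call
   site the raw source looks correct, but after macro expansion the
   missing parentheses change the operator precedence. */
#define DOUBLE_BAD(x)  x + x
#define DOUBLE_OK(x)   ((x) + (x))

int scaled_bad(int n) { return 3 * DOUBLE_BAD(n); } /* expands to 3 * n + n        */
int scaled_ok(int n)  { return 3 * DOUBLE_OK(n); }  /* expands to 3 * ((n) + (n))  */
```

A tool working on the expanded code sees `3 * n + n` directly, making the mismatch with the author's likely intent (`6 * n`) mechanically detectable, whereas a reviewer reading only the call site would probably miss it.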


But then the Code Red and Code Red II worms hit Windows in July and August 2001.  It was a wake-up call of major proportions.  In November 2001, Brian Valentine, then VP of Windows, called a bunch of us into a room to say he'd had enough and wanted us to come up with a plan to solve the security problem.  We agreed to meet again after Thanksgiving with some possible solutions in hand.


The .NET Framework was getting ready to release, and the team had done a security push across their own product; I think this may have been the first product that devoted concentrated time across the whole product to addressing security.  So we met with them to learn which parts of their process worked and which didn't, and got some great ideas on how to tackle the problem for the Windows team, which was, of course, on a much grander scale.


Not knowing exactly how committed Brian was to solving the security problem, we came up with a plan that included as much as possible, with the thought that he would dial the effort back to what he considered a reasonable investment.  To our surprise, when we met again, he told us to do everything we had suggested.  Clearly our priorities had shifted.


The plan was to have the Windows development team concentrate on finding and fixing security bugs for one month, but first we needed to train them in techniques for finding bugs and teach them about the specific types of code issues most likely to create security holes.  So we set up a series of training sessions for the entire team, training people in groups of 900 per session.  The introductory session was always opened by a vice president, who set the context for the work and emphasized the imperative nature of the results.  The rest of that first four-hour session was spent on a grand tour of actual security problems we had seen, and since worms were on everyone's minds, stack buffer overruns were explained in detail.


Then for each of the three disciplines (development, testing, and program management) there was a separate two-hour talk.  I created the two-hour talk for testers based on previous talks I had given to various groups when they asked for help; I had typically customized each talk to match the group's needs and time allotment.  This particular talk for the Windows Security Push covered both issues that were well known externally and those that were not.  And because we didn't know which processes or tools were going to be effective, we tried all of them: we decided to tell people everything we knew about the kinds of problems they could encounter and what tools were out there, and then let them decide what applied to them and prioritize accordingly.  Though it wasn't required, many developers attended my testing talks, just as many testers attended the developer talks.  (That partly explains why class attendance over the week exceeded the population of the Windows division.)


It was decided that everyone on the Windows team should be required to take the training, which meant that even documentation writers, business managers, and lab technicians were in the classes.  That may have been overkill, but rules are rules, and it did mean that everyone got a good dose of exactly what we were up against.


On the other hand, only Windows team members were actually required to attend, so we missed some of our partners in other groups like Office, Visual Studio, SQL, and Exchange.  We didn't have a very robust tracking system for determining which groups contributed to Windows code at the time, nor did we have much infrastructure for measuring various teams' progress.  We did set up a database of source files to be reviewed, along with a mechanism to get developers to sign off on high-priority files.  But other processes, like fuzz testing or fixing PREfast bugs, went unmeasured.


We did eventually begin tracking the various tools and processes that had been set in motion during the push.  In later analysis, we discovered which ones worked best (for example, eliminating the strcpy() function from our code was easy and effective), and those became requirements in the SDL.
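As a minimal sketch of why banning strcpy() was such an easy win (my own illustration, not code from the Windows sources): strcpy() copies with no regard for the destination size, while a bounded call of roughly the same length cannot overrun the buffer.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy() copies until the source's NUL terminator.  A name
   longer than 15 characters overruns buf and can corrupt the stack --
   the same class of bug the Code Red worms exploited. */
void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);
    printf("Hello, %s\n", buf);
}

/* Safer: snprintf() never writes more than sizeof buf bytes and always
   NUL-terminates, so an oversized name is truncated rather than
   overrunning the buffer. */
void greet_safe(const char *name) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);
    printf("Hello, %s\n", buf);
}
```

Substitutions like this could be applied almost mechanically across a code base, which presumably is what made the rule both easy to adopt and easy to verify.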

About the Author
SDL Team

Trustworthy Computing, Microsoft