Applying SDL Principles to Legacy Code

Hello, this is Scott Stender from iSEC Partners, one of the SDL Pro Network partners.  As security consultants, we at iSEC work with a variety of companies to drive security throughout their development cycle.  Clients with mature security processes ask that we help carry out parts of their process, from requirements analysis to penetration testing.  Other clients need help defining their security processes, and we help define and kick off a program based on the Microsoft SDL, other defined processes, or variations thereof, depending on the client’s needs and abilities.  Whether participating in an existing process or helping define one, I have been lucky enough to see my fair share of successes and failures, and it is this perspective that I hope to share in this guest post.


I find that legacy code poses a unique challenge for organizations rolling out a new security process.  Often, the resources dedicated to maintaining older code are a small fraction of those devoted to new features or products.  Furthermore, the original developers for such features have often moved on, leaving no subject matter experts to drive reviews.  The astute reader will ask “How do I apply the principles of the Microsoft SDL to legacy code when I have no development resources and nobody knows how it works?”


The answer is “Start small, and build expertise over time.”


A Rising Tide Lifts All Boats


The best thing a security engineering team can do to improve security in the short term is to drive code quality, and the first step in this process is to define and enforce a secure coding standard.  This helps on two fronts: 


1.  It will improve code quality and reduce implementation flaws across the entire code base.  Unlike other security processes, driving a secure coding standard is relatively easy for a focused security team to accomplish across an entire code base, regardless of the code’s age.  That is not to say that it is easy without qualification – a large batch of spaghetti code will require a lot of work to untangle!  Such an effort can only be called “easy” when compared to, say, comprehensive identification and remediation of design flaws across legacy features.  Even so, improving code quality through secure coding standards offers a unique combination of high impact, broad applicability to legacy features, and suitability for a core team, which makes it a sensible first step.



2.  The security team might notice that some sections of code have more standards violations or outright flaws than others.  This is an instance of vulnerability clustering, a concept that has been used to predict defect rates and improve quality in the functional realm.  The evidence is anecdotal, but it stands to reason that portions of code that consistently violate secure coding standards are good places to start looking for other classes of security flaw.  These are security hotspots, and they should be high on the prioritized list for further review.
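To make both points concrete, here is a minimal sketch of how a security team might enforce part of a coding standard and surface hotspots at the same time: scan each file for calls to banned functions and rank files by violation count.  The banned-function list, file names, and file contents below are illustrative, not taken from any particular standard.

```python
import re
from collections import Counter

# Hypothetical excerpt of a secure coding standard: banned C runtime
# functions paired with safer replacements (illustrative only).
BANNED = {
    "strcpy": "strcpy_s / strlcpy",
    "sprintf": "snprintf",
    "gets": "fgets",
}

def scan_source(path, text):
    """Return (path, function, line_no) for each banned call in `text`."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for func in BANNED:
            # A word boundary followed by '(' approximates a call site.
            if re.search(r"\b" + func + r"\s*\(", line):
                hits.append((path, func, line_no))
    return hits

def hotspots(sources):
    """Rank files by violation count -- candidate security hotspots."""
    counts = Counter()
    for path, text in sources.items():
        counts[path] += len(scan_source(path, text))
    return counts.most_common()
```

Run over a real source tree, a scheme like this puts the files with the most violations at the top of the prioritized review queue described above.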


Security testing may also be applied to legacy code, but initial activities should be considered on a case-by-case basis, weighing the expected return on investment.  Such testing ranges from using inexpensive off-the-shelf tools to exercise common interfaces to rather expensive custom testing and formal analysis.  It is worthwhile to begin with off-the-shelf tools, such as those that target file parsers or web applications, and tools created as part of your greater secure development efforts.  These can help identify easily found flaws and suggest improvements to the coding standards.  Comprehensive security testing, on the other hand, is best tackled after the Legacy Security Push.
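As a rough illustration of the inexpensive end of that spectrum, the sketch below implements the simplest kind of file-parser fuzzing: mutate a few random bytes of a known-good seed input and record any input that makes the parser raise.  Real tools are far more sophisticated; the parser and inputs here are invented for the example.

```python
import random

def mutate(seed: bytes, flips: int = 4, rng: random.Random = None) -> bytes:
    """Return a copy of `seed` with a few randomly chosen bytes replaced."""
    rng = rng or random.Random()
    data = bytearray(seed)
    for _ in range(flips):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(parse, seed: bytes, iterations: int = 1000, rng: random.Random = None):
    """Feed mutated copies of `seed` to `parse`; collect inputs that raise."""
    rng = rng or random.Random(0)  # fixed seed keeps runs reproducible
    failures = []
    for _ in range(iterations):
        case = mutate(seed, rng=rng)
        try:
            parse(case)
        except Exception:
            failures.append(case)
    return failures

# A deliberately fragile parser standing in for legacy code under test.
def fragile_parse(data: bytes):
    if not data.startswith(b"HDR"):
        raise ValueError("bad magic")
    return data[3:]
```

A run such as `fuzz(fragile_parse, b"HDRpayload", iterations=200)` quickly turns up inputs with a corrupted header, and each failure is a starting point for a bug report or a coding-standard improvement.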


The Legacy Security Push


Coding standards and basic testing provide bang for the buck, but formal security processes seek to provide security assurance.  The challenge for legacy code is that it needs to play catch-up.  Security processes that occur early in the development cycle, such as requirements analysis, design review, and threat modeling, are particularly difficult to achieve years after the fact.  The main goal of the Legacy Security Push is to create the deliverables from these efforts, the most important of which are security requirements and a full risk analysis.


It may sound trivial, but security requirements are essential.  Not only do they define proper operation for the system in question, they also define the assumptions that dependent systems may safely rely upon.  It is very common to find security flaws in legacy systems that arise from well-intentioned but incorrect assumptions such as “I assume that Foo authenticates server Bar when initiating a bank transfer.”  It stands to reason that Foo would do so for such an important activity, but this assumption must be validated.  Older features were often written for, and deployed in, environments where the security assumptions that are “obvious” today simply did not apply at the time.


When reviewing legacy systems, the first step is to identify such requirements.  If the original architects, developers, or managers are available, they can provide valuable insight at this stage.  More often than not they are not, and analysis must instead rely on whatever documentation exists and on observing how the software interacts with its consumers.  The goal is the same as in requirements analysis during project inception, except that in this case one must turn the process on its head and reverse engineer requirements from system behavior.  At the conclusion of this effort, requirements can be theorized – “Foo must authenticate its server Bar before initiating a bank transfer.”


Risk analysis can be performed once a plausible set of requirements has been identified.  Threat modeling is a more structured means of performing such an analysis, with the eventual goal of identifying the means by which an attacker can violate those requirements.


As with requirements analysis, original developers would be a valuable resource to consult.  With or without such help, the first step is to identify how the software works.  In many cases, help is not available and performing this task requires a great deal of effort.  For features of moderate size, this author has spent upwards of a month reading code, using process profiling tools, and walking through the software with a debugger to identify program flow and security-sensitive functionality.


Once completed, actual system behavior should be documented and compared against the theorized requirements.  It might be that the requirements should be re-evaluated (New requirement:  Do not assume that Foo requires server authentication) or the system may need to be changed (New bug:  Foo does not verify the CN for Bar).  At the end, this information should be sufficient to support a comprehensive threat modeling exercise in which security requirements, risks, and their mitigations can be documented.
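The “Foo does not verify the CN for Bar” bug has a direct analogue in most TLS stacks.  As a hedged sketch using Python’s standard `ssl` module (the Foo/Bar scenario itself is this post’s hypothetical), the fix is to leave certificate and hostname verification enabled; the legacy bug corresponds to switching them off:

```python
import ssl

def make_client_context(verify: bool = True) -> ssl.SSLContext:
    """Build a TLS client context for talking to server Bar."""
    ctx = ssl.create_default_context()  # verification on by default
    if not verify:
        # The legacy bug: hostname (CN/SAN) and certificate checks are
        # disabled, so any server can impersonate Bar.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

A connection made with `make_client_context()` will refuse a server whose certificate does not match the expected hostname, which is exactly the behavior the reconstructed requirement demands.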


Next Steps


Bringing a legacy feature up to par with its newer kin requires a relatively small number of items:  improved code quality, clear security requirements, and a thorough threat model.  As we have seen, performing even these tasks is quite the effort!  I am sure that it is little comfort to be reminded that accomplishing these tasks has simply laid the foundation, and that the true benefit is that the newly-reviewed legacy feature is able to participate fully in the security processes that remain: reviewing cross-component security requirements and assumptions, comprehensive testing, and incident planning, to name a few.


Unfortunately, there is no silver bullet in security assurance.  The soundness of the design and implementation of legacy software is just as important as in newer software, which is why any complete secure software development process will look backwards as well as forwards.  Feature by feature, from higher priority to lower, the overall security of the software improves as legacy code receives the full security treatment it deserves.


Did you find the silver bullet?  Do you think that defining security requirements is unnecessary?  Perhaps “it is old and has not been attacked yet” is a valid security strategy!  Please comment below or email me directly and share your thoughts.

About the Author
SDL Team

Trustworthy Computing, Microsoft