One Tool Does Not Rule Them All

Over the last couple of years, I’ve released information about various Microsoft product security bugs that required security bulletins: why the SDL missed the bugs (if indeed it did), what defenses came into play (if any), and what lessons could be learned from each bug.

One thing that struck me is how most of the bugs I have discussed led us to improve our fuzzers, which is a more positive way of saying “our fuzzers didn’t find the bugs”!

I talked a little about fuzz testing in a prior post, “Improve Security with ‘A Layer of Hurt’.” If you read much about security, you’ll see that fuzzing is a very effective security and reliability testing technique, but it is far from perfect.  In this short post I want to explain why it’s imperfect and that any security-related development and testing process must employ more than one method to find security bugs.

It’s important to realize that the SDL requires development teams to run fuzz tests on much of their application functionality, including any file parsing code, ActiveX controls, RPC interfaces and network parsers. But we have never relied on fuzzing alone to find security bugs. We use fuzzers as well as various static analysis and dynamic analysis tools.

No one type of security tool – fuzzers included – is the “One tool to rule them all.”

Let me give you an example of why fuzz testing is far from perfect, using some highly buggy and contrived C++ sample code:


#define COPY_DATA  0
#define ZERO_DATA  1
#define QUERY_DATA 2

DWORD ProcessData(_In_bytecount_(cbBuf) BYTE *pBuf, size_t cbBuf) {
    if (!pBuf || cbBuf < 5)
        return ERROR_INVALID_DATA;

    // Blob format is:
    //  1-byte 'verb'
    //  4-byte data length
    //  n-byte data stream

    BYTE verb = pBuf[0];
    DWORD len = pBuf[1] + (pBuf[2] << 8) + (pBuf[3] << 16) + (pBuf[4] << 24);
    BYTE *data = &pBuf[5];

    char dest[20];

    if (verb == COPY_DATA) {
        sprintf(dest, "Foo:%s", data);
    } else if (verb == ZERO_DATA) {
        // Do stuff
    } else if (verb == QUERY_DATA) {
        // Do other stuff
    }

    // Other processing

    return NO_ERROR;

Most compilers will compile this with no warnings, and fuzzing this code might find the security bug in the call to sprintf, but only if the fuzzed data triggers the vulnerable code path. Triggering the code path requires the verb to be zero and the length of the data to be greater than 15 characters (twenty, less the length of “Foo:”, less the trailing NULL). The word ‘length’ is important in this case, because sprintf continues copying string data until it hits a NULL in the source string; so if the fuzzer often inserts NULLs in data streams, then this code might not fail, regardless of the number of bytes in the data stream.

I want to stress the previous paragraph: fuzzing finds bugs only when the data causes execution of the vulnerable code path.
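To make the path-coverage point concrete, here is a minimal sketch (the TracePath helper and the path names are illustrative, not part of any SDL tooling) that mirrors the dispatch logic of ProcessData without performing the unsafe copy, so you can see which inputs ever reach the vulnerable branch:

```cpp
#include <cstddef>
#include <cstdint>

// Mirrors ProcessData's dispatch logic so we can observe which code path
// a given input would exercise, without executing the unsafe sprintf.
enum Path { PATH_REJECTED, PATH_COPY, PATH_ZERO, PATH_QUERY, PATH_OTHER };

Path TracePath(const uint8_t *pBuf, size_t cbBuf) {
    if (!pBuf || cbBuf < 5)
        return PATH_REJECTED;          // fails the null/length check
    switch (pBuf[0]) {
        case 0:  return PATH_COPY;     // COPY_DATA: the vulnerable sprintf path
        case 1:  return PATH_ZERO;     // ZERO_DATA
        case 2:  return PATH_QUERY;    // QUERY_DATA
        default: return PATH_OTHER;    // falls through to "other processing"
    }
}
```

Only a verb byte of zero reaches PATH_COPY, so a fuzzer that mutates the first byte uniformly at random exercises the vulnerable branch roughly once in 256 attempts, before the NULL-placement condition is even in play.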

For example, this byte stream will cause the code above to fail:

0,12,0,0,0,'h','e','l','o','h','e','l','o','h','e','l','o'

But this one will not, because the first byte, 1, leads down a different code path:

1,12,0,0,0,'h','e','l','o','h','e','l','o','h','e','l','o'

The data below will also not cause the code to fail, even though the data at the end of the buffer is large, because there's a NULL in the second byte of the data stream:

0,12,0,0,0,'h',0,'l','o','h','e','l','o','h','e','l','o'
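The effect of that early NULL can be shown in a few lines (CopyUntilNul is an illustrative helper, not part of the original sample):

```cpp
#include <cstdio>
#include <cstring>

// Demonstrates why an early NULL defuses the overflow: %s copies from the
// source only until the first 0 byte, ignoring the blob's length field
// entirely. (Safe here only because src is short and NUL-terminated.)
size_t CopyUntilNul(char *dest, const char *src) {
    sprintf(dest, "Foo:%s", src);
    return strlen(dest);
}
```

Given a source of 'h', 0, 'l', 'o', only "Foo:h" lands in the destination, no matter what the length field claims.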
Today, the code sample above is flagged by the SDL process because sprintf is a banned function and should be replaced by a safer function such as StringCchPrintf or sprintf_s, which should mean fuzzing has no bug to find. But it is possible to miscalculate the buffer size arguments that the safer functions require, which means fuzzing is always valuable, especially if your static analysis tools don’t find the bugs either.

The lesson from this is that there is no single correct way to find code bugs: you have to perform fuzz testing as well as static analysis, and employ general good code hygiene, if you want to raise the security of your software. In fact, some developers might believe there is a single tool to solve their security ills and rely on it so much that they miss serious security vulnerabilities.

About the Author
Michael Howard

Principal Security Program Manager

Michael Howard is a principal security program manager on the Trustworthy Computing (TwC) Security team at Microsoft, where he is responsible for managing secure design, programming, and testing techniques across the company. Michael is an architect of the Security Development Lifecycle (SDL).