The Making of a Privacy Savvy Test Team

Rob Roberts here.

Software test engineers have a lot of things to consider when testing their products: performance, security, accessibility, reliability, usability, and a whole bunch of other “-ilities.” And now, to address our increasingly interconnected world, they have yet another important area to evaluate: privacy.

It’s hard to imagine a testing topic more arcane than security, but privacy may be just that. Like software security, privacy isn’t taught in universities, there are few engineering-oriented books on the subject, and just what it means to do privacy testing is still being worked out. You can’t have privacy without security, but there are many other things to consider around privacy besides keeping data secure. To help narrow which privacy behaviors testers should consider, we provide an approach for test teams as well as some basic tools they can use to understand the specific privacy scenarios relevant to their application.

The definitive source for all things privacy in the SDL is our internal privacy guidelines for developing products and services (the public version is posted here). This document considers a wide range of privacy scenarios, from storing customer information in the enterprise to privacy considerations around developing and publishing Web sites. It includes quite a bit of detail – something our privacy Subject Matter Experts (SMEs) depend on to do thoughtful and thorough privacy reviews. While some privacy testers may want to understand every nuance in this material, many would prefer something more focused so they can concentrate on what they do best: breaking software. To streamline the development of privacy “test” scenarios, we distilled the guidance and identified these three key steps for testers:

Step 1: Understand what data is collected and how it is used.

Test engineers examine documentation (threat models, specs, design documents, and so forth), meet with feature experts, and review source code to determine what data is being collected, stored, and/or transferred by their application. This “data” includes all input, local storage (temporary or persistent), and remote storage that the application can access. Once they know what data is in play, they must classify it by type (anonymous, PII, sensitive PII), context (local storage vs. transfer off the system), and visibility (e.g., hidden metadata). If necessary, the tester can then validate these classifications with a privacy SME to determine what user information is potentially at risk, and how that risk is mitigated by the application.
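To make this concrete, here is a minimal sketch of the kind of data inventory a tester might build in Step 1. The field names, the `DataItem` structure, and the triage rule are all invented for illustration; they are not part of any Microsoft tool or the guidelines themselves.

```python
# Hypothetical data inventory for Step 1: classify each piece of data the
# application touches by type, context, and visibility, then flag the items
# most likely to need a privacy SME's attention.
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    ANONYMOUS = "anonymous"
    PII = "PII"
    SENSITIVE_PII = "sensitive PII"

class Context(Enum):
    LOCAL_STORAGE = "local storage"
    TRANSFER = "transfer off the system"

@dataclass
class DataItem:
    name: str
    data_type: DataType
    context: Context
    hidden: bool = False  # e.g., metadata the user cannot easily see

def items_needing_sme_review(inventory):
    """Flag items most likely to put user information at risk:
    any PII that leaves the system, or any hidden data."""
    return [i for i in inventory
            if (i.data_type is not DataType.ANONYMOUS
                and i.context is Context.TRANSFER) or i.hidden]

inventory = [
    DataItem("crash-report OS version", DataType.ANONYMOUS, Context.TRANSFER),
    DataItem("user email address", DataType.PII, Context.TRANSFER),
    DataItem("document author metadata", DataType.PII,
             Context.LOCAL_STORAGE, hidden=True),
]
for item in items_needing_sme_review(inventory):
    print(item.name)
```

The triage rule here is deliberately simple; a real classification discussion with a SME would weigh many more factors.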

When testers know what the data is and exactly how it flows through the system being tested, they can identify the privacy-impacting scenarios of their application. Based on our privacy guidelines, we created a “Privacy Bug Bar” to describe potential privacy bugs and their severity.
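A bug bar is essentially a lookup from scenario to severity. The sketch below shows the idea; the scenarios and severity ratings are invented for the example, since the real bug bar is derived from the internal privacy guidelines.

```python
# Illustrative "bug bar" lookup: map a (data classification, behavior) pair
# to a predefined severity. Entries here are made up for the example.
BUG_BAR = {
    ("sensitive PII", "transferred without consent"): "Critical",
    ("PII", "transferred without consent"): "Important",
    ("PII", "stored unprotected locally"): "Important",
    ("anonymous", "transferred without disclosure"): "Moderate",
}

def rate_bug(data_class, behavior):
    # Anything not covered by the bar goes to a privacy SME for triage.
    return BUG_BAR.get((data_class, behavior), "needs SME triage")

print(rate_bug("PII", "transferred without consent"))
print(rate_bug("PII", "some novel behavior"))
```

The value of a predefined bar is consistency: two testers filing the same class of bug should arrive at the same severity without a debate.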

A tool we are prototyping for select internal groups is an interactive form that lets test engineers identify privacy-impacting behaviors and receive a summary of the applicable privacy scenarios (from the privacy guidelines) along with predefined bug bar information for those scenarios. This tool is expected to save test engineers time in assessing privacy issues by giving them only the privacy scenarios relevant to their project.

Step 2: Review the application’s privacy statement.

During development, privacy SMEs work with teams to create privacy statements for projects with privacy-impacting behaviors. An effective privacy statement informs users how the behavior of the product, service, or Web site impacts their privacy, and what controls are available to change those behaviors. Part of the tester’s responsibility is to ensure the published privacy statement accurately represents what their application does.

Step 3: Verify data use.

Armed with the knowledge of relevant privacy behaviors from steps 1 and 2, testers can now enhance their test cases by including privacy-aware scenarios throughout their functional testing. If their testing discovers a violation of the guidelines, the predefined bug bar will help them rate its severity.

One of our biggest concerns in the privacy space is off-system communication, or what we call “phoning home.” An example of phoning home is software that goes online to check for newer versions or security updates. Applications often rely on the ability to communicate across networks, but as data moves further away from the local system, the risk of data exposure increases. When applications phone home, care needs to be taken to ensure that this communication happens only with the customer’s permission, that customers know what data is being sent and how it will be used upon reaching its destination, and that the data is sent securely. Currently, many test teams rely on very basic network monitoring to detect when their application is phoning home. This is a manual, time-consuming process that we are working to automate.
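The basic monitoring idea can be sketched in a few lines: intercept outbound connection attempts while the feature under test runs, and log every remote endpoint touched. Real teams typically use a proxy or packet capture rather than this in-process shim, and `exercise_feature()` below is a made-up stand-in for driving the application, so treat this only as an illustration of the technique.

```python
# Minimal sketch of in-process network monitoring: wrap socket.connect for
# the duration of a test run and record every endpoint the code tries to
# reach. Illustrative only; production monitoring would capture traffic
# outside the process.
import socket

outbound_log = []
_real_connect = socket.socket.connect

def logging_connect(self, address):
    outbound_log.append(address)          # record the endpoint first
    return _real_connect(self, address)   # then connect as usual

socket.socket.connect = logging_connect

def exercise_feature():
    # Hypothetical stand-in for the feature under test; a real update
    # check would open a connection to its service here.
    try:
        socket.create_connection(("localhost", 9), timeout=2)
    except OSError:
        pass  # connection refused is fine for this sketch

exercise_feature()
socket.socket.connect = _real_connect     # always restore the real method

# Every endpoint logged here must be covered by the privacy statement
# and contacted only with the customer's permission.
for address in outbound_log:
    print(f"phoned: {address[0]}:{address[1]}")
```

Even this crude log answers the first question a privacy tester must ask: does the application talk to anything off-system that the team did not expect?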

With this approach and our ongoing commitment to automation, we are working to embed privacy into the testing process so that testers understand the relevant privacy scenarios for the software they are testing and enhance their ability to discover problems before the software ships.

Privacy is important to users, and it is a major component of building trustworthy software. Testers play a key role in meeting the privacy demands that the SDL places on Microsoft products and services.

About the Author
SDL Team

Trustworthy Computing, Microsoft

Join the conversation

  1. asteingruebl


    Trying this for the second time – looks like it ate my first comment.

    I’m interested in your take on the slashdot article that came out this week about information leakage from some big websites.

    I wrote about these same sorts of problems on my blog and I postulate that it’s pretty much impossible to prevent this sort of information leakage even though it is counter to what you may want your privacy policy to be.

    Interested in how you classify these sorts of information leakage defects.  

  2. sdl

    WRT the Slashdot article… from a privacy perspective we encourage teams to use ambiguous error messaging on log-in failure. We feel that knowledge that an account exists for a site usually has limited risk, so we work to mitigate this risk in the design and development phases for new versions vs. bugging and changing login experiences of existing products.

    Kim Cameron is currently investigating the issue of online identity and authorization, and Microsoft is also looking to mitigate several of these issues through technologies such as CardSpace.

Comments are closed.