No, it’s not a post on why Adam should never volunteer to do a 12-part series on threat modeling, but rather, why inventing your own mitigations is hard, and why we suggest treading carefully if you need to go there.
Let me first explain what I mean by mitigations, because apparently there’s some confusion. We have folks here at Microsoft who call things like the /GS compiler flag “mitigations.” When I talk about mitigations in a threat modeling context, I mean things that prevent an attack from working. For example, encryption mitigates information disclosure threats. If you use a strong cryptosystem, and you keep your keys secret, attackers can’t read your data.
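To make that concrete, here’s a minimal sketch of the textbook one-time pad in Python, using only the standard library. It’s chosen purely because its security argument is simple: if the key is truly random, as long as the message, used exactly once, and kept secret, the ciphertext discloses nothing but the length. The function names (`otp_encrypt`, `otp_decrypt`) are mine, not from any particular library, and this is an illustration of the mitigation idea, not something to re-implement in production; use a vetted cryptographic library for real systems.

```python
import secrets

def otp_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time pad: XOR each plaintext byte with a key byte.
    # The key must be truly random, at least as long as the
    # message, used exactly once, and kept secret.
    assert len(key) >= len(plaintext), "key must cover the message"
    return bytes(k ^ p for k, p in zip(key, plaintext))

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return otp_encrypt(key, ciphertext)

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # fresh, cryptographically random key
ciphertext = otp_encrypt(key, message)

# Without the key, an attacker learns nothing but the message length;
# with it, the plaintext comes right back.
assert otp_decrypt(key, ciphertext) == message
```

The mitigation works only while the assumptions hold — reuse the key once and the information disclosure threat comes right back, which previews the point of the rest of this post.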
Next, why would you need to go there? Often you don’t. The problems we face in building software projects are often similar to the problems that other software projects have faced, and we can learn from their successes and mistakes. There’s been a fair amount of good work done on software patterns, and we even have design principles that we can use. But sometimes the work being done is really new and different, and there’s a need to create innovative mitigations.
The security experts in the audience are cringing at the very idea. They know that this often ends in pain. I’d like to explain why.
Designing security mitigations is a skill. If I’m trying to design a database, my personal lack of database skills will be obvious: it will be slow, it might lose data, and we’ll discover there’s a problem very quickly. In contrast, if I’m designing a cryptosystem, I might not be able to break my own design. This is a common situation for amateur cryptographers: they build a system that they can’t break, but when an expert looks at it, it’s no more reliable than a fish in a blender.
Custom mitigations always call for expert analysis by someone who can draw on years of experience with what goes wrong. Ideally, that expert works for you, and looks at the idea long before it’s implemented… never mind shipped.