Security is Mathematics

In a recent editorial in Wired News, Bruce Schneier commented on the twisted mind of security professionals; that is, the way that we look at the world, always questioning hidden assumptions -- like the assumption that someone who buys an ant farm will mail in the included card asking to have a tube of ants delivered to his own address, rather than someone else's address. Schneier suggests that this "particular way of looking at the world" is very difficult to train -- far more difficult than the domain expertise relevant to security. I respectfully differ: In my opinion, this mindset is not particular to security professionals; and universities have been successfully training people to hold this mindset for centuries.

In the fall of 1995, in my second year as an undergraduate student at Simon Fraser University, I took a course numbered 'Math 242', with the title "Introduction to Analysis". This was (and still is) a required course for mathematics undergraduates, and for very good reason; it is often described as "the course which decides if you're going to get a degree in Mathematics", and is the first undergraduate course which takes mathematical rigor seriously. Remember how, in first-year calculus, the topics of sequences, series, convergence, continuity of functions, and limits were glossed over? This is the course where you learn to prove everything you thought you already knew.
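To give a sense of what "taking rigor seriously" means -- the example is mine, not an excerpt from the course -- here is the epsilon-delta definition of a limit which first-year calculus typically hand-waves:

    \lim_{x \to a} f(x) = L
        \iff
    \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
        0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

In analysis, every statement about convergence or continuity has to be pushed all the way down to quantifiers like these; there is nowhere for an unexamined assumption to hide.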

In the semester I took this course, the average grade on the first mid-term examination was 29%. Three students (myself included), out of a class of about 40, scored higher than 50%. I don't know the exact numbers for other semesters, but my understanding is that this grade distribution wasn't particularly unusual.

Why was the average grade so low? Because the entire mid-term examination consisted of writing proofs; and a proof isn't correct unless it considers all possible cases. Forgot to prove that a limit exists before computing what it must be? Your proof is wrong. Assumed that your continuous function was uniformly continuous? Your proof is wrong. Jumped from having proven that a function is continuous to assuming that it is differentiable? Your proof is wrong. Made even the slightest unwarranted assumption, even if the statement you thought you had proved happened to be true? Sorry, your proof is wrong.
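Two standard counterexamples -- again my illustrations, not questions from that exam -- show why the second and third of those assumptions are unwarranted:

    f(x) = \frac{1}{x} \text{ on } (0,1): \text{ continuous, but not uniformly continuous}

    g(x) = |x| \text{ on } \mathbb{R}: \text{ continuous everywhere, but not differentiable at } x = 0

If your proof silently assumed uniform continuity or differentiability, functions like these are exactly where it falls apart.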

This is what Schneier calls the "security mindset" -- and all mathematicians have it. In the first chapter of my doctoral thesis, I devoted a page to proving a lemma concerning the distribution of primes (namely, that between x and x * (1 + 2 / log(x)) there are at least x / log(x)^2 primes, i.e., at least half of the "expected" number). I didn't do this merely because I liked the notion of citing a paper concerning the distribution of zeroes of the Riemann zeta function in a thesis about string matching (although I admit that I found the juxtaposition appealing); rather, I did it because I couldn't prove an error bound on my randomized algorithm without this lemma. Most computer scientists would have waved their hands and made the common assumption that prime numbers "behave randomly"; but with my mathematical training, I wanted a proof which didn't rely on extraneous assumptions.
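For readers wondering where "at least half of the 'expected' number" comes from, the heuristic behind it is short (this back-of-the-envelope version is mine; the lemma in the thesis makes it rigorous). The Prime Number Theorem says that the density of primes near x is roughly 1 / log(x), and the interval in question has length

    x \left(1 + \frac{2}{\log x}\right) - x = \frac{2x}{\log x},

so the "expected" number of primes in it is about

    \frac{2x}{\log x} \cdot \frac{1}{\log x} = \frac{2x}{(\log x)^2},

of which the lemma guarantees at least x / (log x)^2, i.e., half. The work, of course, lies in turning that "roughly" into a bound which provably holds.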

Knuth is famous for the remark "Beware of bugs in the above code; I have only proved it correct, not tried it", and the implicit statement that a proof-of-correctness is not adequate to ensure that code will operate correctly is one I absolutely agree with; however, it is important to consider the nature of bugs which evade the eye of a proof-writer. These bugs -- and, I posit, the potential bugs which Knuth was warning against -- tend to be errors in transmitting ideas from brain to keyboard: Missing a semicolon or parenthesis, for example, and thereby rendering the code uncompilable; or mixing up two variable names, and thereby causing the code to never function as specified. These bugs are easily found by quite minimal testing; so while neither testing nor proving is particularly effective alone, in combination they are highly effective.
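As a toy illustration of the kind of brain-to-keyboard bug I have in mind -- a contrived example of my own, not one of Knuth's -- consider a binary search. The on-paper proof is easy; but typing "lo = mid + 1" where "hi = mid" was intended is exactly the sort of error a proof never sees, and exactly the sort which a single test run catches:

    #include <stddef.h>

    /*
     * Return the index of key in the sorted array a[0..n-1], or -1 if
     * it is not present.
     */
    static int
    search(const int *a, size_t n, int key)
    {
        size_t lo = 0;
        size_t hi = n;

        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */

            if (a[mid] < key)
                lo = mid + 1;    /* key, if present, lies above mid */
            else
                hi = mid;        /* key, if present, lies at or below mid */
        }

        return ((lo < n && a[lo] == key) ? (int)lo : -1);
    }

Neither testing nor proving alone would convince me this is right: the proof rules out the subtle off-by-one reasoning errors, and the test run rules out the typos.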

More important than this, however, is that the sorts of edge cases which mathematicians are trained to think about when writing a proof are exactly the sorts which cause most security issues. Very few security problems "in the wild" are the result of bugs which are tripped over all the time -- such bugs don't survive long enough to cause problems for security. Rather, security issues arise when an unanticipated rare occurrence -- say, an exceptionally large input, a file which is corrupted, or a network connection which is closed at exactly the wrong time -- takes place. For this reason, when I write security-critical code I generally construct a proof as I go along; I don't go to the extent of writing down said proof, but by thinking about how I would prove that the code is correct, I force myself to think about all of the edge cases which might be potentially hazardous.
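To make that concrete -- this is a sketch of my own, not code from any particular project -- here is the sort of inline "proof" I mean, applied to a routine which concatenates two untrusted buffers. The comments record precisely the edge cases (integer overflow, enormous inputs, failed allocation) which a hand-waved version would skip over:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Concatenate two untrusted buffers into a newly allocated buffer.
     * Returns NULL on overflow or allocation failure.
     */
    static unsigned char *
    concat(const unsigned char *a, size_t alen,
        const unsigned char *b, size_t blen, size_t *outlen)
    {
        unsigned char *buf;

        /* Claim: alen + blen cannot overflow.  Proof: we check it here. */
        if (blen > SIZE_MAX - alen)
            return (NULL);

        /*
         * Claim: buf is large enough for everything we write into it,
         * or we bail out.  (A complete version would also decide what
         * to do when alen + blen == 0, since malloc(0) may return NULL;
         * here that case is treated as a failure.)
         */
        if ((buf = malloc(alen + blen)) == NULL)
            return (NULL);

        /*
         * Claim: both copies stay within the buffer.  Proof: buf holds
         * alen + blen bytes; the first copy writes bytes [0, alen) and
         * the second writes bytes [alen, alen + blen).
         */
        memcpy(buf, a, alen);
        memcpy(buf + alen, b, blen);

        *outlen = alen + blen;
        return (buf);
    }

None of these claims is deep; the value lies in being forced to write down why each step cannot go wrong, because that is exactly the moment when the forgotten case surfaces.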

Schneier is right that security requires a strange mindset; and he's right that computer science departments aren't good places to teach this mindset. But he's wrong in thinking that it can't be taught: If you want someone to understand security, just send him to a university mathematics department for four years.

Posted at 2008-03-21 12:10
