Rob Hansen’s Crypto faq

This page uses MathML for its (brief) mathematical content. At present, Apple’s Safari browser does not appear to have good MathML support. Windows and Mac OS X users may have better luck with Firefox. Mac OS X users may also want to try Camino.

The page begins beneath the mandatory legal disclaimers. I apologize for their length, but we live in a lawsuit–happy culture.

Version: 10.2009

Copyright 2003–2009, Robert J. Hansen.

You are invited to make use of this document under terms of the Creative Commons license linked to by the Creative Commons image to the right. There are two additional rights I am willing to grant you, subject to your agreeing to my terms. You may:

  1. Translate this document into electronic formats other than xhtml 1.1, as long as the following conditions are met:
    1. The new document format is defined by a freely and publicly available standard which has been approved by a reputable standards body
    2. The translated document conforms strictly to the standard you have chosen
  2. Excerpt up to 100 words from this faq, as long as the following conditions are met:
    1. The excerpt is credited appropriately
    2. A link back to this page is provided

No other alterations are permitted.

Many cryptographic algorithms are trademarked. My use of such marks is for educational use only, and is not meant as a challenge to those marks.

While I have taken reasonable steps to ensure the accuracy of my statements, I am human and have been known to make errors. If you find one, please email me. I disclaim any liability for errors.

The citation of other authors should not be construed as my endorsement of their entire corpus. I encourage you to read anything I cite, but make sure to read critically. You’re the gatekeeper of your mind: be liberal in what you allow to visit, but be extraordinarily selective about what you allow to take up residence.

  1. Who’s writing this?
  2. Can intelligence agencies break modern, peer–reviewed crypto?
    1. What’s p=np? Why is it important? Has the nsa proved it?
    2. But didn’t the nsa prove that primes is in p?
    3. What’s thermodynamics? Can the nsa get around that?
    4. What are the undecidable problems? Why isn’t there any research into using those as provably–secure cryptographic systems?
    5. What are the quantum limits of computation? Will it really shred rsa like people claim?
    6. So what’s the upshot?
  3. What happened to sha–1?
  4. What happened to dsa?
  5. Why don’t you like tiger192?
  6. Why don’t you like Camellia?
  7. What would you like to see added to OpenPGP?
  8. What’s the best algorithm?
  9. What size of key should I use?
  10. Is a 2048–bit key equal to a 128–bit key?
  11. But I use aes256! Don’t I need a 16384–bit key to protect it?
  12. What public–key algorithm should I use? In what length?
  13. What symmetric–key algorithm should I use? In what length?
  14. What software do you use?
  15. Why don’t you use GnuPG 2.x?
  16. Why don’t you recommend ckt builds?
  17. Why shouldn’t I use idea?
  18. You know spooks, right?
  19. What are they like?
  20. Credits

Who’s writing this?

My name is Rob Hansen. I share a name with an Alaskan serial killer, a traitor to the country, a professional basketball player (we even graduated from the same university — oh, how I look forward to those alumni get–togethers!), and another security geek.

For these reasons, I always go by Robert J. Hansen professionally. In private, I’m just Rob.

I’m a software engineer living in the Washington, D.C. metro area, working for one of the area’s many government contractors. Prior to that I was a doctoral candidate in computer science at the University of Iowa, where my Ph.D. focus was on secure software engineering. I’m a thesis away from my doctoral degree, which is very common in the field: for working software engineers, a Ph.D. hurts your career more than it helps. I hold a Master’s degree in computer science, with a focus in secure software engineering. I have taught computer science, mathematics and software engineering at the undergraduate and graduate levels, and I like to volunteer with elementary schools to bring interesting mathematics to an often stultifying elementary–school curriculum.

I’ve been involved with crypto ever since 1992, when I was a senior in high school playing around with pgp 2.6. The next year while a freshman at the University of Houston, my roommate was an nsa cryptanalyst. That was really when my love affair with crypto began.

I’m not going to talk about who I’ve worked for or what my professional accomplishments have been. If you want to hire me, then I’ll provide references and a cv. Otherwise, people should evaluate what I say on the evidence I present and not on what qualifications I claim to possess. Out here on the net, anyone can claim to be anything.

Can intelligence agencies break modern, peer–reviewed crypto?

Beats me. Have you tried asking them?

In all seriousness, this question is very difficult to answer: there are no clear–cut answers. It’s difficult even to talk about, because talking about it accurately requires an astonishing amount of math!

I’m going to try to make this readable, but there’ll be a price to be paid. I’ll lose much of the precision and clarity which mathematicians and cryptographers have worked so hard to establish. Mathematicians and cryptographers will wince at how much I’m glossing over, handwaving, or outright not talking about.

If you really want a good, solid answer, you’ll need an undergraduate math degree at the minimum. If you’re comfortable with the thornier issues going unmentioned, though, read on.

Or, as Fritjof Capra wrote in The Tao of Physics,

“Although I have suppressed all the mathematics and simplified the analysis considerably, the following discussion may appear to be rather dry and technical. It should perhaps be taken as ‘yogic’ exercise which — like many exercises in the spiritual training of the Eastern traditions — may not be much fun, but may lead to a profound and beautiful insight into the essential nature of things.”

Some problems are really, really hard.

Intelligence agencies have a lot of experience at breaking ciphers, yes. However, there’s been some evidence lately that the civilian cryptanalytic world has largely caught up. For this reason, many respected cryptographers believe the nsa is not capable of breaking modern, peer–reviewed ciphers.

The reasoning goes something like this. According to the best minds in the civilian cryptanalytic world, we’re decades — for some ciphers, even more — away from being able to break these ciphers at will. As an example, consider rsa. If the only way to break it is to figure out how to factor very large composite numbers, then we might very well be safe. We’ve tried to figure out how to factor large composites ever since Euclid’s day, and we’ve made embarrassingly little progress against it over the last few thousand years. It might take us a few hundred more years to solve it.

The central flaw with this reasoning is that the progress of mathematical development is never that well–ordered.

What happens, for instance, if some smart Jane discovers that you can break rsa without needing to factor large semiprimes? Then we could end–run around the entire factorization problem.

And that’s not even getting into how difficult it is to make projections about what we will or will not discover to be mathematical truth. For example, in January of 1994 I was taught that Fermat’s Last Theorem might be forever beyond the reach of mathematics. Within a year, Andrew Wiles proved it. Just because a bunch of mathematicians think something is a hard problem is absolutely no guarantee some smart Johnny won’t come along tomorrow and solve it.

So: if you want my own opinion on it, here it is. I don’t think the nsa is more than a few years ahead of the civilian cryptanalytic world. However, given how much can change in a few years, I’m not making any predictions on precisely what the nsa has or hasn’t discovered with their small head start.

Stay tuned to cnn. If rsa breaks, trust me: they’ll cover it.

What’s p=np? Why is it important? Has the nsa proved it?

Depending on whom you ask, p=np is:

  1. The most important question in cryptography
  2. The most important question in computer science
  3. One of the most important questions in mathematics

Doctors categorize diseases by what they affect. Biologists categorize life forms by what they’re similar to. Computer scientists categorize problems by how long it takes to find and/or verify an answer.

An answer which can be calculated in a reasonable amount of time is said to belong to “complexity class p”. p–problems have one thing in common: they’re boring. Really, really boring. We know how to solve them, and we know that (speaking generally) they don’t present any real challenge — and so we’d rather spend our time working on harder problems.

The next harder class of problems are those which can be verified in a reasonable amount of time. If I give you the answer to the problem, you should be able to confirm that the answer is correct. These problems are called np problems.

Now, a little bit of thinking should tell you that every p problem is also an np problem. After all, if we can solve a problem quickly, we can also verify a solution quickly — to verify it, just solve the problem yourself. The set of all p problems is contained within (or, as a mathematician would say, “is a subset of”) the set of all np problems.
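
If you’d like to see that asymmetry in code, here’s a toy sketch (subset sum, a classic np problem; the code and names are mine, purely illustrative): checking a proposed answer is one cheap pass, while finding an answer from scratch means grinding through every subset.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    # Verification: one cheap pass over the certificate (polynomial time).
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve(numbers, target):
    # Brute-force search: tries all 2**n subsets (exponential time).
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers = [3, 34, 4, 12, 5, 2]
answer = solve(numbers, 9)                  # the slow direction: finding
print(answer, verify(numbers, 9, answer))   # [4, 5] True
```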

So if we know p is contained within np, the next interesting question becomes — are there any problems in np that do not exist in p?

It might help you understand this if you get out a sheet of paper. Draw a circle, and mark it as p. That circle represents all the problems we can solve quickly.

Draw another circle around the first circle. Mark this one as np. This circle represents all the problems we can verify quickly.

Now, imagine that the borders of the two circles are getting closer and closer together. At some point, the two borders will touch. p will still be contained within np, but there will be absolutely nothing in np that is not contained within p.

If this were to happen, we would say the two sets are equal: p would be equal to np. But if this doesn’t happen, then p is smaller than np.

My brain hurts!

Do what we in the math and computer science departments do when our brains start to hurt. Drink. Once you’ve recovered from the hangover, come back and continue reading.

Now that you’ve seen the difference between p and np problems, let’s recall again what each means.

  1. p problems can be solved quickly.
  2. np problems can be verified quickly, once you’re given a solution.

Encryption is definitely the former. We can choose a cryptographic key and run the algorithm and it finishes very, very quickly. Decryption is definitely the latter. If you tell me what key you used, I can very quickly run the algorithm and tell if your answer is right or wrong… but I can’t easily find out what the proper key is, without your giving it to me.
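
Here’s the same asymmetry as a toy sketch, using a one–byte xor “cipher” (utterly insecure, purely illustrative, and entirely my own): checking a claimed key is a single cheap operation, while recovering an unknown key means walking the entire keyspace.

```python
def apply_key(key: int, data: bytes) -> bytes:
    # Toy one-byte XOR "cipher": encryption and decryption are the same op.
    return bytes(b ^ key for b in data)

plaintext  = b"attack at dawn"
ciphertext = apply_key(0x5A, plaintext)

# Verifying a claimed key: one cheap operation (the np-style check).
assert apply_key(0x5A, ciphertext) == plaintext

# Finding the key without being told it: exhaust the keyspace.
found = [k for k in range(256) if apply_key(k, ciphertext) == plaintext]
print(found)  # [90], i.e. 0x5A
```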

So what would happen, then, if the two classes of difficulty were equal to each other?

One interpretation would be “well, all those easy problems that we’ve been solving for thousands of years now, they’re really quite difficult ones and we were fooling ourselves.” This is a pretty outrageous claim to make — essentially it would mean “nobody can ever balance their checkbook” — so I’ll ignore it here.

The other interpretation would be “all those very thorny problems we’ve been building our cryptographic systems on? They’re actually very easy problems to solve.” While this, too, is an outrageous claim, it has a virtue the first claim does not: this claim doesn’t insult our intelligence.

For that reason, we conclude that if p is equal to np, then pretty much all of crypto collapses.

Hopefully, that explains why all of crypto revolves around the p=np question!

Not yet. Does the nsa have the answer?

Beats me. Have you asked them?

The nsa has a lot of extraordinarily talented mathematicians, yes, but even their resources pale in comparison to academia. There aren’t all that many people working on crypto in academia, but there are literally tens of thousands of computer science graduate students in this country who would kill for the chance to conclusively prove p and np either equal or unequal.

You can posit the nsa having ten thousand of the nation’s best mathematicians (even if that means that over a third of their employees have the letters Ph.D. after their name). It will still be utterly insignificant compared to the amount of brainpower academia has thrown against the problem.

This problem has been unsolved for almost two hundred years. Countess Ada Lovelace was the first one to explore this problem, albeit in a stumbling way. She, the founder of computer science, made no headway against it. Turing, von Neumann, Strachey, Wilkes, all the way through Abelson, Sipser and Fleck have made no headway against it.

It has turned brilliant young graduate students into burned–out alcoholic wrecks.

It may very well remain unsolved for all time. It may belong to a class of questions known as the undecidables.

Or it may be solved tomorrow. You never know. All that you can be fairly certain of is this — if it ever does get solved, you’ll know it, because the discoverer will be trumpeting it from every rooftop.

But didn’t the nsa prove that primes is in p?

No.

The question of whether a given number is prime or composite is called the primes problem. It was solved a few years ago by some extraordinarily bright young mathematicians in India who took an unorthodox and entirely original approach to the problem. It’s brilliant mathematical work, and they certainly qualify as smart Johnnies, all of them.

However, this wasn’t work done by the nsa, and it doesn’t help you factor. All that it tells you is whether a number is prime or composite, and does so with one hundred percent accuracy. That’s all; nothing more. It gives you no information about what factors go into it if it’s composite.
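
Their algorithm is rarely used in practice; here’s a sketch of the same point using a different, shorter test (a deterministic Miller–Rabin, which with these witness bases is correct for any 64–bit number). Notice that the verdict arrives with no factor attached.

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin: answers prime/composite for any 64-bit n,
    but produces no factor when the answer is 'composite'."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:              # these witnesses suffice below ~3.3e24
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

p = 4294967291                 # a large prime (2**32 - 5)
print(is_prime(p))             # True
print(is_prime(p * p))         # False -- composite, yet no factor is revealed
```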

What’s thermodynamics? Can the nsa get around that?

Thermodynamics is a branch of physics that concerns itself with heat. Or maybe it concerns itself with the ultimate fate of the universe. Or maybe it concerns itself with how much energy it takes to get stuff done. All answers are accurate.

Entropy is a measure of the statistical disorder of a system. In physics, disorder manifests itself as heat. Something that’s hot is in a much, much more disordered state than something that’s cold. In computer science, disorder manifests itself as…

… heat.

This is something that stunned Claude Shannon when he discovered it. He was trying to figure out a way to measure the information content of telephone lines, and the equations he kept on discovering looked very familiar. Shannon eventually called it “entropy”, just because the equations were the same as the physics equations for entropy. Shannon’s discovery was that information and “entropy” were opposite sides of the same coin: an increase in one necessarily involved a decrease in the other.

Whether these two entropies represent the same thing is a subject of immense debate within computer science. What nobody disagrees on, though, are the real–world implications: that every single time you discard information, you have to pay a cost in heat. Period. End of sentence.

This number is very, very small, but it’s not zero. Every single time you lose a bit of information, you pay kT ln 2 joules of energy, where k is Boltzmann’s constant and T is the temperature of the system. That’s how much energy has to leak from the system with every single bit of information that’s lost.

This is an incredibly small amount — about 10⁻²³ joules per bitflip. By comparison, a car battery puts out about 10²⁶ times that amount. That’s a huge difference, just mind–blowingly huge. Most people think we can just ignore the Landauer Bound… but when it comes to crypto, that’s just folly.

Assume a 128–bit cipher. Each time you want to try a new key, you’re going to have to discard about 64 bits.

(You could try numbers in order, thus discarding only one bit per key; but then your brute–forcer could be countered by your opponent choosing a key close to the end of the keyspace. Just like good crypto assumes the attacker knows everything about the defender, good cryptanalysis assumes the defender knows everything about the attacker.)

64 is close enough to 100 for our purposes — we want some quick back–of–the–napkin estimates, nothing more — so let’s write down: “each key = 10² bits of information erased.”

Now, to break a 128–bit cipher by brute force requires, on average, 2¹²⁷ attempts. That’s close to 10³⁸, so let’s write that down, too.

Multiply the two numbers together to get the total number of bits of information you’ve discarded. To multiply together two numbers written in scientific notation, you add together their exponents. 10² × 10³⁸ = 10⁴⁰.

Finally, we have to multiply our total number of discarded bits by the price we have to pay for each of them. Just like before, multiplying scientific–notation numbers is addition… except this time, one of the exponents is negative, so we can think of it like subtraction. 10⁴⁰ × 10⁻²³ = 10¹⁷.

That gives us an absolute lower bound on the amount of energy we would have to lose while brute–forcing a 128–bit key, 10¹⁷ joules… but that’s just a number. It doesn’t mean much to us, does it? So let’s put it in terms we can understand.

A one–megaton nuclear weapon — the citykillers which terrified a generation during the Cold War, the devices so terrible they were correctly called “portable concentration camps” — they work by releasing a lot of heat. A Bomb, a citykiller, something that can turn an entire city into radioactive ash, releases 10¹⁵ joules of heat.

If multiplying two scientific–notation numbers is addition, then dividing them is subtraction. 10¹⁷ ÷ 10¹⁵ = 10². The first number is how much heat we’re creating. The second is how much heat is liberated by a citykiller. Our answer? We’d need 10² citykillers.

In other words, it would take one hundred strategic nuclear warheads just to power the computer to break a 128–bit cipher by brute force.
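
If you’d like to replay the napkin arithmetic, the whole chain fits in a few lines (the round numbers are the faq’s own, not precise physical values):

```python
bits_per_key   = 1e2    # ~64 bits erased per trial key, rounded up to 10**2
keys_to_try    = 1e38   # ~2**127 trial keys on average
joules_per_bit = 1e-23  # the Landauer cost figure used above
megaton        = 1e15   # joules released by a one-megaton weapon

total_joules = bits_per_key * keys_to_try * joules_per_bit   # 10**17 J
citykillers  = total_joules / megaton                        # 10**2
print(round(citykillers))   # 100
```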

Let me say it clear and cold: anyone who says the nsa has the computing power to brute–force a 128–bit cipher is living in a fantasy.

The math just doesn’t work.

Thermodynamics puts limits on how much energy we have to spend. Quantum mechanics puts limits on how fast we can operate. No computer — conventional or quantum–mechanical — can set or clear a bit in less time than h/4E seconds, where h is a fundamental constant of the universe (Planck’s constant) and E is how much energy we’re devoting to the bitflip. The more energy we put in, the faster it can go.

We want to use the absolute minimum energy required, because we’ve seen that it pays to be economical. If we want our computer to get the answers twice as fast, we’d have to detonate two hundred citykillers instead of just one hundred. At some point we’d make the earth totally uninhabitable, so… let’s be economical for now.

Let’s also assume we have a computer that can operate on a trillion bits at once. The quantum–mechanical limit on how fast a computer can operate can be modified by working on multiple bits at once, after all.

We know how many bits have to be manipulated: around 10⁴⁰. We know that a trillion is 10¹². So let’s divide it out: if in a single pass our supercomputer can process a trillion bits, then we divide it out and get 10⁴⁰ ÷ 10¹² = 10²⁸. We have to do that many passes through our computer in order to try every key. So, knowing how many passes we have to make, and how much time each pass must take, we do another simple multiplication and get 10²⁸ × 10⁻¹² = 10¹⁶ seconds… or about a billion years.

Let’s recap: using computers which operate at the peak efficiency allowed by the laws of physics, it would take about one hundred nuclear warheads and a billion years to brute–force a 128–bit cipher.
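
The timing arithmetic works the same way (again, these are the faq’s round figures; note that 10¹⁶ seconds works out to a few hundred million years, which the text rounds to “about a billion”):

```python
bits_total    = 1e40    # total bits flipped across the whole brute-force run
bits_per_pass = 1e12    # a trillion-bit-wide machine
secs_per_pass = 1e-12   # assumed time per pass at the quantum limit

passes  = bits_total / bits_per_pass    # 10**28 passes
seconds = passes * secs_per_pass        # 10**16 seconds
years   = seconds / (3600 * 24 * 365)
print(f"{years:.1e}")   # 3.2e+08
```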

I have faith that the nsa has at least one physicist on their payroll who knows thermodynamics and the quantum–mechanical limits of computation. I have faith that they’re not going to waste their time brute–forcing 128–bit ciphers when they know full well it’s a fool’s errand.

Honestly, they’re much smarter than to even try.

What are the undecidable problems? Why isn’t there any research into using those as provably–secure cryptographic systems?

Now that we’ve relaxed our brains a little bit with some strong whiskey and simple scientific–notation math, let’s dive back into computational theory. Remember all that talk about p and np?

Well, there are complexity classes even beyond those two. Lots more of them, in fact. It would take me too long to list them all, but the upshot of it is the great–granddaddy of all complexity classes is called undecidable. It’s called that for a simple reason: any problem in this class simply cannot be decisively answered.

Do answers exist to them? Sure! No! Umm — maybe?

Basically, it depends on what you mean by “answers”. If you mean “is it solvable”, the answer is simple: absolutely not. But that’s not the same thing as saying we can’t learn some very interesting things by studying them. If it’s insight you want, the undecidables offer it in abundance. If it’s answers you want, you need to look elsewhere.

This is what makes them such appealing ideas for cryptographic algorithms!

In fact, the odds are good you already know of one undecidable cryptosystem. It’s called the one–time pad.

Think about it. If I give you a message encrypted with a one–time pad and then give you the key which I claim decrypts it, do you have any way to prove that I’m right? Do you have any way to prove that I’m wrong? All keys are equally likely, all decryptions are equally likely, and there is absolutely no mathematical way to tell when you get the right key as opposed to any of the umpteen billions of wrong ones.
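
Here’s that equivocation as a sketch (the messages are mine, purely illustrative): for any ciphertext and any candidate plaintext of the same length, there is a key that “decrypts” one into the other, and nothing marks the real key as special.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"            # what was really sent
pad     = os.urandom(len(message))     # the genuine one-time pad
cipher  = xor(message, pad)

# An interrogator demands the key. Hand over a fake one instead:
candidate = b"retreat at ten"          # same length as the real message
fake_pad  = xor(cipher, candidate)

print(xor(cipher, fake_pad))   # b'retreat at ten' -- looks just as valid
print(xor(cipher, pad))        # b'attack at dawn' -- indistinguishable
```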

This is an undecidable cryptosystem.

It is the only undecidable cryptosystem that exists.

Seriously. One of the weird–but–true things in computer science is there’s really only one problem in undecidable, and trust me, is it ever a doozy — but you can phrase that problem in literally billions of different ways. All the different ways of phrasing it are equivalent to each other. The upshot of it is that all undecidable cryptosystems are equivalent to each other, since they’re all built off the same problem… so when we say the one–time pad is the only undecidable cryptosystem out there, we mean it.

Many other people on the net have given good, lucid reasons why the one–time pad is a terrible cryptosystem. (To summarize them: things that are perfect in theory are usually perfectly awful in practice.) I won’t repeat their explanations here. Hopefully, you can understand from this that just kicking up into a higher difficulty class doesn’t necessarily mean you’re gaining anything.

What about the lesser classes?

What, you mean like pspace, exptime and the lot? Well, then we run into a major problem: we have to give up the ability to recognize a correct decryption. That makes crypto rooted in these complexity classes… well, mostly useless.

Do you remember how we said that complexity class np consisted of any problem which could be verified in a reasonable length of time? Well, here’s your encrypted message, and here’s your key which will decrypt it. But by decrypting it and recovering the original message, you’ve been able to verify that the key you were given was a solution to the problem. So our uber–hard exptime cryptosystem… is really in complexity class np.

The only way you can make a cryptosystem based on a complexity class past np is to give up the ability to tell whether you’ve successfully decrypted the message.

Hmm… sounds a lot like the one–time pad, doesn’t it?

There are no useful cryptosystems in the no–man’s land between np and undecidable. Once you leave np, your next stop is the one–time pad.

What are the quantum limits of computation? Will it really shred rsa like people claim?

rsa is in trouble from quantum computers only if very, very large quantum computers become possible. Theoretically there’s nothing holding them back, but as far as the engineering goes we might be decades away still.

Where conventional computers work on bits, quantum computers work on quantum bits. We usually shorten it to “qubits”, and pronounce it much like the 1980s video game Q–bert. We have an algorithm for quantum computers which can factor large composites without breaking a sweat. The only problem is, it requires twice as many qubits as there are bits in the composite.

Right now our largest quantum computer has fifteen qubits. This is enough to factor a seven–bit rsa modulus. If you’re using a key that small, you might be in trouble. Given most people use keys with moduli thousands of bits long, I don’t think we need to start running for the exits just yet.

The jury is still out on whether quantum computers can be used against the Discrete Logarithm Problem. It’s believed that they can be, but to the best of my knowledge no–one has made a real–world demonstration.

However, in a demonstration of the perversity of the quantum world, quantum computers fare poorly against symmetric–key systems. We have to use a different algorithm, and while it’s still an enormous improvement over conventional computing, enormous in this case simply isn’t enormous enough. We can slash the effective keyspace by a ridiculous factor: we can turn a 256–bit key into a 128–bit key, a 128–bit key into a 64–bit key, and so on and so on.

So what’s the upshot?

  1. The nsa is probably not that far ahead of us in cryptanalysis.
  2. They’re not able to brute–force modern ciphers.
  3. Everything has limits. Even math and quantum mechanics.
  4. Modern crypto is designed with those limits in mind.

… Please don’t fool yourself into thinking you have nothing to worry about. Keep in mind that so far I’ve talked only about cryptanalysis and brute force. There are plenty of ways to skin a cat, and there are plenty of ways to compromise a cryptosystem. All that I want to convey here is how phenomenally unlikely it is that anyone is using classical cryptanalysis or brute force on your traffic.

If, from this, you conclude that you’re safe… well, you’re almost definitely not. It doesn’t matter how smart you are: there’s someone out there who’s smarter. It doesn’t matter how careful you are; you’re human and you’ll slip sooner or later, and someone who’s prepared can take full advantage of that.

What happened to sha–1?

Short version? It fell down, went boom.

Longer version? Some very bright people at Shandong University in China discovered an attack against it. The attack is not yet practical, except maybe for extraordinarily large and well–funded groups who don’t mind throwing huge truckloads of money away just to forge a single message.

However, it happens without fail that as soon as someone announces an academic break against an algorithm, someone in the crypto community will say “you’re scaremongering! Until I see a practical attack, I’m not going to change!” (See, e.g., the section on ckt builds.)

If you’re thinking this, I’d like to propose a thought experiment. It involves you watching me load a magazine of ammunition into a pistol. You watch me chamber a round, flip the safety off, and fire a shot through your favorite vase. I then put the gun to your head. Do you feel threatened? Why, or why not?

“Well, of course I do!” you might answer. “I don’t know if you’re going to pull the trigger. I don’t know if the gun is going to go off. I don’t know what you want. I don’t know why you’re randomly shooting my favorite ceramics. Yes, I feel threatened!”

The same argument applies here. We’ve seen, from Shandong University and other researchers, ample evidence that the gun is loaded and can fire. It is a matter of time until someone comes along and decides to pull the trigger for real. It is best if we all stop using sha–1 right now, before the Shandong University attack has enough time to turn into something practical.

In 1997, Hans Dobbertin proved the popular hash algorithm md5 was flawed. People continued using it regardless, on the grounds that attacks against it weren’t “practical”. Today, we can create collisions in md5 in under one second… and we have tons of embedded systems in security–critical infrastructure, installed after 1997, which still use md5.

How does that make you feel about the people who say that no attack is meaningful until a practical attack is demonstrated?

Please stop using sha–1 right now.

What happened to dsa?

Depends. Which dsa do you mean?

First, the short answer: nothing, really. The longer answer is… well… longer.

dsa, like any other algorithm, is pretty much useless by itself. It’s always used as part of a larger system. Sometimes, the choices made by that larger system can compromise the security of an algorithm.

ietf rfc 4880 very slightly compromised the security of dsa signatures by not putting in a hash function firewall. A hash function firewall is a mechanism which prevents someone from being able to lie about what hash function they used to create the signature.

If there’s a single weak hash function anywhere in the system, an attacker could get a good signature from you, and then construct a new message which — when using the weak hash function — would hash out to the same value. They then present your good signature on a forged message, and lie about what algorithm you used to sign it with.
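
Here’s a toy sketch of that attack (every name and algorithm below is mine, not from any real OpenPGP implementation; the “weak” hash is deliberately absurd, and an hmac stands in for a real public–key signature). Because the signature covers only the digest, an attacker who can hit the same digest with a weaker hash can replay the signature and lie about the algorithm:

```python
import hashlib, hmac

SECRET = b"signer-private-key"   # toy stand-in for a real signing key

def good_hash(m: bytes) -> bytes:
    # Stands in for a strong hash, truncated so the toy attack stays cheap.
    return hashlib.sha256(m).digest()[:2]

def weak_hash(m: bytes) -> bytes:
    # A deliberately broken hash: just the byte-sum, mod 2**16.
    return (sum(m) % 65536).to_bytes(2, "big")

HASHES = {"good": good_hash, "weak": weak_hash}

def sign(m: bytes, alg: str) -> bytes:
    # NO hash firewall: the signature covers the digest, but not which
    # algorithm produced it.
    return hmac.new(SECRET, HASHES[alg](m), "sha256").digest()

def verify(m: bytes, sig: bytes, claimed_alg: str) -> bool:
    return hmac.compare_digest(sig, sign(m, claimed_alg))

real = b"I owe Rob five dollars."
sig  = sign(real, "good")                  # the victim's genuine signature

# Attacker: craft a message whose WEAK hash equals the good hash of `real`,
# then replay the genuine signature while lying about the algorithm used.
target = int.from_bytes(good_hash(real), "big")
base   = b"I owe Rob five thousand dollars."
pad    = (target - sum(base)) % 65536
forged = base + b"\xff" * (pad // 255) + bytes([pad % 255])

print(verify(real,   sig, "good"))   # True  -- the honest case
print(verify(forged, sig, "weak"))   # True  -- forgery accepted
```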

For that reason, some reputable people in the OpenPGP community are skeptical of the long–term viability of dsa signatures, given the recent attacks against sha–1.

However, I am generally of the opinion these concerns are overstated. If you’re electronically signing a thirty–year mortgage, then yes, I’d recommend not using dsa, but those sorts of instances are few and far between. Regular users can feel confident in the security of dsa.

Why don’t you like tiger192?

When people ask me why I hate tiger192, they’re usually referring to my emphatic recommendation that people not use it in OpenPGP. So let me say it very clearly here: just because I recommend people not use it in OpenPGP, it does not follow that I do not like tiger192.

I like tiger192. Ross Anderson and Eli Biham are very competent cryptographers, and their hash algorithm is plenty worthy of study. It’s a shame that it hasn’t caught on more broadly.

However, “I like it!” is not sufficient reason to include an algorithm in an ietf standard. tiger192 is not part of the OpenPGP suite of algorithms; it has never been part of the OpenPGP suite of algorithms; its only normative reference comes from a brief mention in ietf rfc 3156.

There’s an old software engineering maxim: “nobody’s talented enough to put a bug in a line of code they don’t write.” We introduce bugs at the same time we introduce lines of code. When we reduce lines of code, we almost always reduce bugs.

If tiger192 brought some new capability to the table, something the other hash algorithms couldn’t provide, then I’d be one of the first people demanding its inclusion in the standard. It doesn’t, though.

tiger192’s base of users is extraordinarily small. I would estimate it at under half a percent. This half–percent of users makes signatures that 99.5% of OpenPGP users cannot verify. This is an interoperability nightmare.

So — my reasons for emphatically recommending against tiger192 are:

  1. It adds complexity to software
  2. Complexity is the enemy of security
  3. It harms interoperability
  4. It doesn’t bring any new capabilities to OpenPGP

When people argue in favor of tiger192, they typically argue in its favor as:

  1. But I like it!

Great, I like it, too. But liking it just isn’t enough.

Why don’t you like Camellia?

I encourage people not to use Camellia, but that doesn’t mean I think it’s a bad algorithm; I only think it’s a poorly supported one. Although Camellia is part of the OpenPGP standard, the overwhelming majority of OpenPGP implementations do not include it. Encrypting your data with algorithms your recipients can’t use strikes me as unwise.

If you have a real need for Camellia — if you’re working for an agency that requires Camellia as an encryption algorithm, for instance — then use Camellia with confidence.

What would you like to see added to OpenPGP?

whirlpool. Unlike tiger192, it brings new capabilities to the table.

Without exception, every hash in the OpenPGP suite right now is built around the Merkle–Damgård construction. Given how many algorithms in this family have been successfully attacked, it’s worth wondering whether there’s some deep, fundamental flaw in the basic ideas behind the Merkle–Damgård construction.

whirlpool is a 512–bit hash algorithm built around a Miyaguchi–Preneel construction. I think it would be a good idea for OpenPGP to end the Merkle–Damgård monoculture and get some variety in, just to offset the (small) risk of some catastrophic future attack against Merkle–Damgård.
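The structural difference between the two constructions can be sketched in a few lines of Python. The “compression function” and “block cipher” below are deliberately insecure stand-ins I invented purely for illustration; only the chaining structure is the point.

```python
# Toy sketch of the two hash constructions. The primitives here are
# insecure placeholders; only the chaining structure matters.

BLOCK = 8  # toy block size, in bytes
MASK64 = (1 << 64) - 1

def pad(msg: bytes) -> bytes:
    """Merkle-Damgard strengthening: append 0x80, zero-fill to a
    block boundary, then append the original message length."""
    length = len(msg).to_bytes(BLOCK, "big")
    msg += b"\x80"
    msg += b"\x00" * (-(len(msg) + BLOCK) % BLOCK)
    return msg + length

def toy_compress(state: int, block: bytes) -> int:
    """Stand-in compression function (not secure)."""
    for b in block:
        state = ((state * 31) ^ b) & MASK64
    return state

def merkle_damgard(msg: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    """Chain the compression function over the padded blocks."""
    state = iv
    padded = pad(msg)
    for i in range(0, len(padded), BLOCK):
        state = toy_compress(state, padded[i:i + BLOCK])
    return state

def toy_encrypt(key: int, block: int) -> int:
    """Stand-in 'block cipher' (not secure)."""
    return ((block ^ key) * 0x9E3779B97F4A7C15) & MASK64

def miyaguchi_preneel(msg: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    """Compression step built from a block cipher, keyed by the
    chaining value: h_i = E(h_{i-1}, m_i) xor m_i xor h_{i-1}."""
    state = iv
    padded = pad(msg)
    for i in range(0, len(padded), BLOCK):
        m = int.from_bytes(padded[i:i + BLOCK], "big")
        state = toy_encrypt(state, m) ^ m ^ state
    return state
```

whirlpool fills the block–cipher role with a dedicated aes–like cipher, which is where the structural diversity comes from.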

What’s the best algorithm?

I hope by this time you’ve come to realize that cryptography is an intensely mathematical art and that it’s easy to get lost in a maze of twisty little theorems, all alike. If you think you’ve done well so far, be careful patting yourself on the back. Earlier on, when I said “[m]athematicians and cryptographers will wince at how much I’m glossing over, handwaving, or outright not talking about,” well — I meant it.

First, let’s start with the word “best.” Best by whose metric? Best against linear cryptanalysis, best against differential cryptanalysis, best against impossible–differential cryptanalysis? Best in terms of fewest clock cycles per encrypted block? Best in terms of lowest memory footprint, fastest s– and p–box setup and teardown? Best in terms of overdesign? Best in terms of mathematical elegance? Best in terms of…?

As you can tell, there are many, many things you could be referring to when you say “best.” Using words like “best” to describe cryptographic algorithms leads me to think you don’t know what you mean.

As such, this question really has no useful answer. You have not defined what it means to be “best,” or even how “bestness” should be quantified among the many competing criteria.

What size of key should I use?

The only real recommendation I have is to stick with the defaults. I don’t know your needs. People who are just trying to protect their email against random eyes have different needs from people who are guarding nuclear weapon launch codes.

My standard advice is “unless you know what you’re doing and why, stick with the defaults.” It is very likely that the people who wrote your crypto application already chose very sensible defaults.

That said, if you want to know particulars, here are my opinions with respect to OpenPGP keys:

Most people will be very well–served with 2048–bit keys. Either dsa2 or rsa will work just fine. There is no real reason to believe one algorithm is superior to the other.

Beware of falling prey to the kind of shortsighted thinking that says only key size matters. It is far better to use a reasonable keysize and keep your healthy skepticism than to become blinded by the hype of a 4096–bit key.

Is a 2048–bit key equal to a 128–bit key?

Depends on who’s providing the numbers. All of these “equivalency” charts you see on the internet are really speculation. Worse than that, they’re speculation tied to a particular point in time, anticipating only that era’s technology and mathematical developments. If you look at Ron Rivest’s original estimates about rsa key lengths from the 1970s, they seem hopelessly naïve. When you look at today’s estimates, please keep in mind that these, too, are just as hopelessly naïve… we just don’t yet know precisely how naïve.

That said: according to the latest nist report, a 1024–bit key is roughly equal to an 80–bit symmetric key; a 2048–bit key is roughly equal to a 112–bit symmetric key; and a 3072–bit key is roughly equal to a 128–bit symmetric key. nist doesn’t list 4096–bit keys directly, but by interpolation they fall somewhere between 128 and 192 bits.

But I use aes256! Don’t I need a 16384–bit key to protect it?

Some people advocate that the symmetric and asymmetric components of a cryptosystem be balanced. This is balderdash. While it’s true that any attacker will go after the weaker of the two systems, it simply doesn’t follow that the two systems should be equal in strength. All that follows is you should make sure the weaker of your two systems is still stronger than you need.

Yes, it really is that simple.
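The weakest-link argument can be made concrete with a few lines of Python. The equivalence figures are the nist numbers quoted earlier; the table and function names are my own invention for illustration.

```python
# Hypothetical helper illustrating the weakest-link argument.
# The equivalences are the nist figures quoted earlier; anything
# nist doesn't list directly is simply absent from the table.
NIST_SYMMETRIC_EQUIVALENT = {1024: 80, 2048: 112, 3072: 128}

def effective_strength(symmetric_bits: int, asymmetric_key_bits: int) -> int:
    """An attacker targets the weaker component, so the system is only
    as strong as the weaker of the two, not their average or sum."""
    return min(symmetric_bits, NIST_SYMMETRIC_EQUIVALENT[asymmetric_key_bits])

# aes256 with a 2048-bit asymmetric key: roughly 112 bits of security.
# The question is not "are the halves balanced?" but "is 112 bits
# more than I actually need?" (for most people: yes).
print(effective_strength(256, 2048))  # prints 112
```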

What public–key algorithm should I use? In what length?

I recommend using your cryptosystem’s defaults for encryption algorithms and for key lengths. Unless you know what you’re doing and why, don’t mess with it.

What symmetric–key algorithm should I use? In what length?

I recommend using your cryptosystem’s defaults for encryption algorithms and for key lengths. Unless you know what you’re doing and why, don’t mess with it.

What software do you use?

I use Mac OS X, Windows XP, Windows Vista, FreeBSD and Ubuntu on a fairly regular basis, and I use the same software on all of them: Thunderbird, Enigmail and GnuPG.

Why don’t you use GnuPG 2.x?

GnuPG 2.x is, if you ask me, badly misnamed. It adds a lot of capabilities, sure, but the capabilities it adds are ones that don’t matter to the majority of GnuPG users. Maybe “GnuPG 1.4 plus some other things you won’t use” — but not “GnuPG 2.x.”

GnuPG 2.x does not offer any new capabilities to OpenPGP users. The new features are all related to supporting another email encryption standard called s/mime.

Thunderbird has excellent s/mime support built in. Therefore, I don’t need any of GnuPG 2.x’s new features. Why should I switch?

There’s an old adage in software engineering: “nobody is talented enough to put a bug in a line of code they don’t write.” Or, put another way, every unnecessary line of code is a bug vector. If I don’t need GnuPG 2.x’s features, why should I open myself up to the bug vectors?

Why don’t you recommend ckt builds?

I strongly advise against Imad Faiad’s ckt builds for both legal and engineering reasons. From a legal standpoint, it’s a clear violation of copyright law to download a ckt build or to make one available for download. It’s a clear violation of the Network Associates, Inc., 6.5.8 license agreement to apply the ckt patches to 6.5.8 source you already possess. Given we live in a lawsuit–happy culture, I recommend people not download the ckt builds.

From a software engineering standpoint, several people of repute within the pgp community do not have much respect for the software engineering of the ckt builds. Nor do I, given that a couple of well–publicized showstopper bugs in nai’s 6.5.8 have not been fixed even as late as ckt–8. This causes me to harbor serious doubts about Imad’s commitment to secure software engineering practice.

I and others also have concerns about Imad’s knowledge of cryptography and cryptanalysis. As far back as 1996, the cryptanalytic community was strongly urging people to move away from md5. By 1999, nai recognized that md5 needed to be deprecated and supported only for legacy keys. This was also the attitude of the GnuPG crew for the entire duration of the GnuPG project’s existence. Yet as recently as 2002, Imad was arguing that md5 was secure and that talk of its weakness was hype and fearmongering. When the Dobbertin attack and its consequences were explained to him, his response was to challenge his critics: “well, if you say it’s insecure, let’s see you break it!” That response is neither professional nor constructive.

The ckt builds have also not been peer reviewed to anywhere near the extent of nai’s 6.5.8 or pgp Corporation’s 8.x and 9.x. Due to the legal problems associated with ckt, many programmers and cryppies — myself included — believe that studying the ckt codebase and/or publishing weaknesses would open the door to copyright infringement lawsuits of one form or another. It’s hard enough being a cryptographer nowadays without running afoul of the dmca; most professional cryptographers and cryptographic engineers don’t need the additional headache of downloading the ckt source in violation of copyright law.

Why shouldn’t I use idea?

As a matter of personal ethics, I refuse to support software patents. The idea algorithm is patented by Ascom–Tech A.G., and they have expressed their willingness to enforce it. Now, since I’ve paid for a licensed copy of pgp I have a license to use idea for any purpose I want… but still. There are better ciphers out there, ciphers unencumbered by patents. Let’s use them.

You know spooks, right?

It’s been my good fortune to know some members of the various intelligence communities. Many people know so–called “spooks”. My uncle served in military intelligence; does that make him a spook?

I have other friends who served in counterintelligence. Are they spooks?

What about police officers? Sheriff’s deputies? Social workers? What about the neighborhood watch? What about teachers? What about the neighborhood gossip?

All of these are examples of people who make their living by collecting and distributing intelligence on people’s activities. Cops might monitor a street gang. A sheriff’s deputy might keep his eye on someone suspected of being a domestic abuser. Social workers keep their eyes open for unsafe and/or dangerous home situations. The neighborhood watch is alert for the presence of people who just “don’t belong” in the area. Teachers keep track of which students are doing well and which ones are in trouble.

Most of us are intelligence agents of one kind or another; intelligence operations are woven into everyday life. When you Google a blind date’s name, you’re conducting an intelligence operation the cia of twenty years ago would have wept to be able to run so easily.

So, do I know spooks? No. I know people.

Yes, I know a lot of intelligence agents. Some of them work for the government. Most of them don’t.

What are they like?

Some are happily married. Some are unhappily married. Some are happily alone. Some are unhappily alone. Some are Republicans. Some are Democrats. Some are anarchists. Some of them have idealistic beliefs in law and order. Some of them are hardened cynics.

Am I talking about the nsa, or am I talking about my friends who are cops, judges, social workers, teachers and gossips?

All of the above. They’re all the same. They’re people.

But if you want to know about the nsa in particular, for some reason they overwhelmingly tend to be from middle–of–nowhere Minnesota and talk like they just stepped out of the movie “Fargo”. Strange, but true.


The following people have all contributed to the faq by things ranging from typo–spotting to giving me a heads–up on a really cool academic paper. They are presented in no particular order. Their inclusion in this list should not be interpreted as their endorsement of my writing.