Wired Magazine's "blog network" ran a story early this month about encrypted webmail provider Hushmail. The company's marketing leans very heavily on "your emails are safe and nobody can read them" rhetoric, going so far as to say:
not even a Hushmail employee with access to our servers can read your encrypted e-mail
As the Wired weblog article points out, though, this is apparently nothing more than exaggeration and outright falsehood. Despite the fact that even Hushmail supposedly cannot read your emails, the company reportedly turned over 12 CDs full of saved email correspondence from three Hushmail accounts to Canadian officials, complying with a court order. There are some reasonable caveats to the statement that Hushmail (the company) cannot read emails sent via Hushmail (the software and service), and I'll get to those in a moment. First, let's talk about relevance.
Some of you, reading that Wired article, may think "Well that doesn't apply to me. I'm not a black market steroid dealer." On the other hand, at the time the court order was issued, the targets of this surveillance weren't known steroid dealers, either. They were just suspected steroid dealers. Innocent people get investigated all the time; the hope is that they're determined to be innocent before the investigation ruins their lives.
Let's assume the system always works in that regard — not just that a trial in a court of law would find you innocent, but even that law enforcement officers determine your innocence early enough that it never gets to court, even early enough that the school where you teach third grade children never finds out you were naked at Woodstock. Let's look at the privacy angle.
Regardless of what an investigation determines about your guilt or innocence, if the investigators get access to nominally encrypted emails, your privacy is breached. It's too late to undo that damage. Even if they're investigating the wrong person, and even if the information in your emails not only doesn't prove you're guilty but isn't even useful in proving you're innocent, if your email correspondence was recorded on some of those twelve CDs, someone read it. Not only that, but for someone in government to read it, someone at Hushmail had to recover it — which means it is recoverable. If it can be recovered once, it can obviously be recovered again.
So much for privacy.
Now for the caveats. Hushmail's claims about how not even Hushmail employees can read your email assume you are using its Java-based client. The suspected steroid dealers in question were apparently using the server-side encryption option. Apparently, a lot of people use this option because they do not like the hassle of downloading and installing a JVM and using Hushmail's Java-based interface. I don't really blame them for that, but using the server-side encryption system punches a great big hole in your assumed privacy. Sending the email to the server over an SSL-encrypted connection ensures only that Hushmail receives the text of your email without an outsider being able to eavesdrop. It does nothing to prevent an insider at Hushmail from reading the text of the email — and apparently Hushmail stores the emails in a readable form after they have been encrypted and sent on to the destination.
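To make the distinction concrete, here is a toy sketch in Python (standard library only, all names invented, and the "cipher" deliberately simplistic; real mail encryption should use vetted OpenPGP software such as GnuPG). When encryption happens on the client, the server only ever stores ciphertext it cannot read; with server-side encryption, the server handles the plaintext first.

```python
# Toy illustration of the client-side vs. server-side distinction.
# All names are invented, and the SHA-256 counter-mode "stream cipher"
# below is a teaching toy, not something to use for real mail.
import hashlib

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # The passphrase stays on the client; the server never sees it.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def keystream(key: bytes, length: int) -> bytes:
    # Pseudo-random keystream: SHA-256 over key plus a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(passphrase: bytes, data: bytes, salt: bytes) -> bytes:
    # XOR stream ciphers are symmetric: the same call encrypts and decrypts.
    key = derive_key(passphrase, salt)
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

message = b"meet me at noon"
salt = b"per-message-salt"
ciphertext = xor_crypt(b"client-only passphrase", message, salt)

# A server-side encryption provider handles `message` before encrypting it;
# an end-to-end client only ever uploads `ciphertext`.
assert ciphertext != message
assert xor_crypt(b"client-only passphrase", ciphertext, salt) == message
```

The point of the sketch is only where the key lives: a provider that derives or holds the key on its own servers can always recover the plaintext, no matter how strong the SSL connection in front of it is.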
One might think this just means that you shouldn't use the server-side encryption option if you actually care about privacy for your emails. This is true, as far as it goes, except for that word "just". It goes beyond that.
Unlike open source OpenPGP software like GnuPG, Hushmail's Java client is not subject to public scrutiny. Aside from the obvious problems of encryption software that doesn't trust its users, there's also the simple fact that you do not really know it is doing what you expect, nor do you have any reasonable way to find out. Source code that is open to review but cannot be compiled and used directly, as with Hushmail's Java client, provides nothing more than an illusion of openness: the code you can read is not necessarily the code you run. In addition, keeping software up to date with security patches is important, but the patching mechanism can itself become an attack vector, at least in the case of closed source software designed to provide security and privacy. After all, if a closed source encryption software vendor decides it needs the ability to recover plain text from emails encrypted by the client software, it can always just push out a "security patch" that allows it to harvest private encryption keys. The end user need never know. Worse yet, Hushmail's Java interface uses an applet rather than a locally installed application, which means one never really knows that the Java applet being executed this time is the same one that was used last time, because it is downloaded as compiled bytecode all over again every time it is used.
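If code is fetched anew on every use, one partial safeguard (purely hypothetical here; I know of no such mechanism in Hushmail's interface) is hash pinning: record a cryptographic fingerprint of the code the first time you decide to trust it, and refuse to run anything that does not match. A minimal sketch:

```python
# Sketch of "hash pinning" for code that is re-downloaded on every use.
# The applet bytes here are stand-in strings; a real check would fetch
# the actual bytecode over the network before hashing it.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_pinned(data: bytes, pinned_hash: str) -> bool:
    # True only if the downloaded code is byte-for-byte what we pinned.
    return fingerprint(data) == pinned_hash

applet_v1 = b"bytecode fetched on first trusted download"
pin = fingerprint(applet_v1)          # record this once, out of band

assert verify_pinned(applet_v1, pin)                        # unchanged: run it
assert not verify_pinned(b"silently modified bytecode", pin)  # changed: refuse
```

This only tells you the code has not changed since you pinned it, of course; it says nothing about whether the original was trustworthy in the first place.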
In IT security as in any other field, for maximum effectiveness one needs to let an expert do much of the work. There just isn't enough time in the day for everyone to do everything oneself, nor even to learn enough to be able to do everything oneself if one had more time in the day to do it. It is for this reason that I use a lot of encryption software written by other people, rather than writing it all myself, and I do not expect any of you to behave any differently when choosing encryption solutions.
There are things you can, and should, do yourself if at all possible. What those things are may vary from case to case, based on your specific privacy needs. Sorting out what you need to do for yourself depends on the ability to correctly judge where you reach a point of diminishing returns. If you do not want people you don't even know to be able to trivially decrypt your emails because they pass through those people's servers, however, it behooves you to employ encryption systems that do not rely solely on the good will of those people to maintain your privacy.
You may not realize it yet, but you can gain many of the benefits of doing things yourself without actually doing them. The simple fact that you can (at least in theory) do something yourself is often helpful in providing some of the benefit of actually doing it. Even more of that benefit can often be gained by dint of the probability that someone else is doing it, even if you have no contact with any such people and cannot name them. The following is a list of three examples. The first is an example of what you could easily do yourself. The second is an example of something you can benefit from someone else doing. The third is an example of something you can benefit from being able to do, in theory, even if nobody does it.
- Use encryption software that gives you direct control over end-to-end encryption privacy, without a middle man doing all the "hard" stuff for you. For instance, use GnuPG and your local mail user agent software to encrypt and send emails without any reliance on a third-party encryption "service" provider like Hushmail.
- Enjoy the security benefits of a community of thousands of developers who examine the source code and operation of the encryption software you use for signs it misbehaves in some way. With any popular open source software (such as GnuPG), there are enough people poring over the source code on any given day that you can be pretty sure no intentional security holes will survive, and anything accidental is likely to be caught by a "good guy" before a "bad guy" can find it. Security patches will tend more often to be effective and quick.
- Ensure you have access to source code yourself. If for some reason you have to use a closed source application for your mail user agent, and if you are buying software for use by your company, you can often arrange to license access to the source code and compile the application yourself — so long as you do not try to modify it or redistribute any part of it. This provides you the ability to personally do the same thing that the open source community at large does with open source software. Even if you do not actually intend to pore over source code looking for intentional back doors, getting access to the source code in this manner "just in case" provides some fairly strong assurance that the providing vendor is not trying to hide anything from you. Of course, as with the hypothetical client patch issue above, this doesn't help if you do not have the option of downloading patch source and recompiling the application yourself rather than just applying binary patches.
These three examples display decreasing levels of certainty, in some respects. In the first example, you know for a fact that anything you understand well enough to be able to judge for yourself will be secure. In the second example, the likelihood of something slipping by people who know enough to be able to accurately judge the software is not as certainly low, but it is generally so low as to be almost indistinguishable from being an expert in the field and doing all the work yourself. In the last example, you're really just hoping the vendor is very good at what it does and, if not necessarily honest, at least afraid you'll figure it out if it isn't. All three are better than the option of just sending your emails to some server provided by a service vendor and hoping they aren't reading your emails before encrypting them, or even downloading the vendor's (closed source) client software and simply trusting that it does what they say it does.
There is an exception to that general downward trend in assurance of security for those examples, of course. The exception exists between the first and second, in certain circumstances:
The second example is based on the idea that you can achieve the same effect as doing the work yourself by having others do so as proxies. Any one other person may not be trustworthy, and thus each individual proxy for your effort is not as good at providing the assurance you want as you, yourself, could be. The number of proxies, however, at least balances this out: a thousand people you do not know, and who in general do not have inherent motivations to pretend a vulnerability does not exist, can probably be trusted in the aggregate where they are not necessarily trustworthy as individuals. Such a widely distributed system of peer review provides a built-in set of checks and balances. While the trustworthiness of the crowd may not be literally 100%, the difference is so far to the right of the decimal point that it is functionally indistinguishable for most (if not all) purposes.
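That intuition can be put in rough numbers with a deliberately crude model: assume each of N reviewers independently spots a given flaw with some small probability p. The chance that at least one of them catches it is then 1 - (1 - p)^N, which climbs toward certainty surprisingly fast.

```python
# Crude model of aggregate peer review. The independence assumption is
# unrealistic (reviewers share blind spots), but it illustrates why a
# large crowd of fallible reviewers can rival a single expert.
def chance_flaw_caught(p: float, n: int) -> float:
    # Probability that at least one of n independent reviewers,
    # each with per-reviewer detection probability p, spots the flaw.
    return 1.0 - (1.0 - p) ** n

# One reviewer with a 1% chance of noticing a flaw is a long shot...
print(chance_flaw_caught(0.01, 1))      # 0.01 (roughly)
# ...but a thousand such reviewers almost certainly catch it.
print(chance_flaw_caught(0.01, 1000))   # about 0.99996
```

The real numbers are unknowable, of course; the point is only the shape of the curve.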
There's more to it than that, however: a popular open source project is probably more secure than any equivalent piece of software you write yourself will ever be. The reason for this ties in to the matter of having enough time to learn how to do everything yourself. Among all those open source developers working on (or even just examining) an open source encryption tool, many of them will know things you do not. All that varied expertise is lent to the task of checking out the software to make sure it works as advertised. Regardless of any matters of trust in the intentions of the developers, you personally quite simply lack the collected expertise of the people working on a popular open source project.
Of course, there is one way to beat even that level of protection:
Run a popular open source project yourself. Start it from scratch and build up the software and the community around it. That way, you're effectively doing it yourself — without giving up the collected expertise of a strong open source developer community. There's only so much time in your day for devoting to open source project development, so if you go this route make sure you choose your projects well.
For maintaining reasonable assurance of privacy, of course, you do not need to go that far. You definitely need to have some hand in controlling the process yourself, though — or you could find out that some private email service vendor is really only providing the illusion of privacy. If you didn't care about that, you wouldn't be trying to encrypt your email in the first place.
(NOTE: As I believe is the case for Ryan Singel, the author of the Wired weblog article, this incident actually increased my respect for Hushmail. I had little doubt before these events that services like Hushmail would, as a rule, have the ability and will to hand over whatever was demanded by a court order. What I did not expect was the frank openness with which Hushmail executives discuss the company's policies with regard to law enforcement cooperation. Such openness provides the customer with a very clear idea of exactly how far one's trust in the company's dedication to one's privacy should go. That kind of clarity is very hard to come by. Of course, Hushmail could be bought out tomorrow and change all its policies — which would mean all bets are off. There's still no such thing as a trusted brand.)
Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.