
Why not use OpenPGP for Web authentication?

TLS is the default solution for strong encryption on the Web, but perhaps it should not be. Why has no alternative arisen to challenge it, despite all of the problems of TLS encryption?

The TLS (also known by its predecessor's name, SSL) encryption protocol is, in effect, the standard way to provide strongly encrypted Web authentication. While other options for encrypted authentication are available, such as HTTP digest authentication, they tend to use weaker schemes. The biggest problem with HTTP digest authentication, for instance, is that it provides no mechanism for server identity verification, which makes it particularly vulnerable to man-in-the-middle attacks.
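
For reference, the digest computation itself is simple. The sketch below is a minimal illustration of the RFC 2617 scheme, with placeholder credentials and the qop extension omitted for brevity: the client proves knowledge of the password, but nothing in the exchange ever authenticates the server.

```python
import hashlib
import secrets

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Illustrative values; a real server supplies realm and nonce in its
# 401 WWW-Authenticate challenge.
username, password = "alice", "secret"
realm, nonce = "example", secrets.token_hex(16)
method, uri = "GET", "/protected"

# RFC 2617 digest (qop omitted): the client proves it knows the password
# without sending it, but at no point does the server prove anything
# about itself -- a man in the middle can simply relay the challenge.
ha1 = md5_hex(f"{username}:{realm}:{password}")
ha2 = md5_hex(f"{method}:{uri}")
response = md5_hex(f"{ha1}:{nonce}:{ha2}")
print("Authorization: Digest response =", response)
```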

Unfortunately, the TLS protocol is designed specifically with certificate authority PKI in mind as its method of server verification. This certificate system is a technically involved process that relies on trusting a third-party certificate authority for server verification, akin in some respects to the web of trust model employed by OpenPGP public key encryption, but with more bureaucracy involved. The major point of departure from the web of trust, however, is where trust decisions are made: instead of relying on the judgment of people you personally choose to trust, you are expected to rely on self-serving commercial claims of authority to tell you whom you can and cannot trust.

This CA system is baked into TLS and the HTTPS protocol used by browsers, such that even if your intent is to make use of a distributed correlative verification system like that offered by Perspectives or a web of trust adaptation such as that offered by Monkeysphere, you still need to go through the rigmarole of setting up your system as if you intend to use the CA PKI instead. Self-signed certificates are, outside of the payment requirements, essentially no easier to set up than CA-signed certificates. The static IP address requirements of traditional TLS certification are not relieved by self-signing your certificates, either -- a technical limitation that does not apply to other public key encryption protocols like OpenPGP.

What's wrong with TLS?

The problems with the TLS-based PKI are many, and they affect many circumstances. One key problem is its unsuitability for simple, low-resource websites that require secure authentication but are not backed by a commercial model or a deep-pocketed hobbyist. Shared hosting, the cheap way to provide a "real" website, presents unpleasant barriers to TLS-protected authentication as well (most notably shared IP addresses). These difficulties in no way eliminate the need for authentication protected by server verification and strong encryption, of course.

This is in theory a solved problem. OpenPGP is an encryption protocol whose basic structure was designed by cypherpunk hero Phil Zimmermann about two decades ago. The use of a simple public key for half of the encryption/decryption process provides an extremely flexible basis for out-of-band verification of the key's association with a given individual, organization, or site. Publishing that key through any of a wide range of distribution mechanisms can be used to offer outside verification, whether via the web of trust model that serves as the default key verification process for traditional OpenPGP or via a distributed system like that provided by Perspectives.
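
As a concrete illustration -- a minimal sketch assuming GnuPG is installed, with the key file name and expected fingerprint purely hypothetical -- a client could import a site's published key and compare its fingerprint against one obtained through a separate channel: a printed invoice, a phone call, or a Perspectives-style notary.

```python
import os
import subprocess

# Hypothetical inputs: a key the site publishes over plain HTTP, and the
# same key's fingerprint obtained out of band.
KEY_FILE = "site-pubkey.asc"
EXPECTED_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"

# Import the published key into a scratch keyring, then ask GnuPG for
# its fingerprint in machine-readable (colon-delimited) form.
os.makedirs("./scratch-keyring", mode=0o700, exist_ok=True)
subprocess.run(["gpg", "--homedir", "./scratch-keyring", "--import", KEY_FILE],
               check=True)
out = subprocess.run(
    ["gpg", "--homedir", "./scratch-keyring", "--with-colons", "--fingerprint"],
    check=True, capture_output=True, text=True,
).stdout

# "fpr" records carry the fingerprint in the tenth colon-delimited field.
fingerprints = [line.split(":")[9] for line in out.splitlines()
                if line.startswith("fpr:")]
print("match" if EXPECTED_FPR in fingerprints
      else "MISMATCH -- do not trust this key")
```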

Theory and practice are rarely identical, and in practice this is not a solved problem at all. The fact of the matter is that TLS is so widely regarded as the only choice that there has been no serious movement toward developing alternatives -- even superior alternatives using already extant technologies. There is an obvious opportunity for an alternative that does not rely on corporate sponsorship for basic implementation to build visibility and mindshare without initially challenging the CA-based PKI for market share. All it would take is a pair of simple implementations of Web authentication -- a CGI implementation and a PHP implementation, each using an extant public key encryption protocol such as OpenPGP -- to cover essentially all standard shared hosting cases where TLS is an unreasonable burden. Given even moderate success, other implementations would soon follow: Ruby on Rails, Django, CMS plug-ins, and so on.
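
As a sketch of what the CGI case might look like -- not a design from this article, and with the field names, keyring path, and choice of Python all assumed for illustration -- the server could hand out a random challenge, have the user sign it with an OpenPGP key, and verify the signature with GnuPG against a keyring of enrolled users.

```python
#!/usr/bin/env python3
# Minimal CGI sketch of OpenPGP challenge-response authentication.
# Assumptions: GnuPG is installed on the host, enrolled users' public
# keys live in ./users.kbx, and the client POSTs urlencoded "challenge"
# and ASCII-armored detached "signature" fields. Challenge issuance,
# storage, and expiry are elided for brevity.
import secrets
import subprocess
import sys
import tempfile
from urllib.parse import parse_qs

fields = parse_qs(sys.stdin.read())

if "signature" not in fields:
    # Step 1: hand the client a fresh random challenge to sign.
    print("Content-Type: text/plain\r\n\r\n" + secrets.token_hex(32))
else:
    challenge = fields["challenge"][0].encode()
    signature = fields["signature"][0].encode()

    with tempfile.NamedTemporaryFile() as sig, tempfile.NamedTemporaryFile() as doc:
        sig.write(signature); sig.flush()
        doc.write(challenge); doc.flush()
        # Step 2: verify the detached signature against enrolled public keys.
        result = subprocess.run(
            ["gpg", "--no-default-keyring", "--keyring", "./users.kbx",
             "--verify", sig.name, doc.name],
            capture_output=True,
        )

    ok = result.returncode == 0
    print("Content-Type: text/plain\r\n\r\n" +
          ("welcome" if ok else "authentication failed"))
```

A PHP implementation would take the same general shape, shelling out to the same gpg binary.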

Unfortunately, encryption implementations are hard to get right. A truly simple implementation of an OpenPGP Web authentication scheme would depend on assumptions about the kind of software running on the server, and would not be particularly portable -- or would depend on OpenPGP implementations being available in whatever language the Web developer is using on the server side. Either is a difficult proposition when dealing with shared hosting, where the Web developer has little or no control over what basic software is installed on the system, while the encryption libraries for most server side languages commonly used on shared hosting rely on the same outside tools (often GnuPG) for parts of their OpenPGP support.

Let us not forget the difficulty of dealing with the client. Modern, feature-rich browsers have all come to support TLS encryption via the HTTPS URI scheme. They do not, however, include built-in support for Web authentication via OpenPGP. Browsers with robust extension systems may offer a relatively easy way to attach such functionality, at least in some cases, but the extensions would need to be written -- and writing them easily would then depend on languages or discrete software installed on the client system, an even less reliable basis for an authentication system than the availability of software on shared hosting servers.

Challenges for an OpenPGP alternative

Here, at last, we find out why there is no OpenPGP-based -- or similarly stand-alone public key encryption -- alternative to TLS. A grassroots effort, such as could be provided by the open source development community, could conceivably provide the entire infrastructure needed to kick off such a bottom-up challenge to the practical hegemony of CA-based PKI in Web encryption. It is difficult to foresee this occurring without some forward-thinking graduate student or major corporate sponsor getting things rolling, however, because of the sheer number of pieces of a new Web authentication infrastructure that would need to be built. Worse, even if such an open source system were built, it would not be unreasonable to lay even money that it would be released under a license (probably copyleft) that would actually discourage use of the code in some circumstances, thus limiting its initial adoption. This does not even include the need for a standardized, easy-to-use out-of-band key verification system, which is not technically required for the system to work but would need to arise before widespread popularity amongst the technically uninclined would be particularly likely.

Despite all this, such an approach seems to be the single most likely, easiest way to provide a more "democratic" alternative to TLS, initially for authentication at least -- and perhaps eventually for full-session encryption.

Note that while OpenPGP provided the basic example for this article, the SSH protocol would serve similarly well in this role.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

16 comments
Nasal_passage

The Monkeysphere project (http://web.monkeysphere.info) is working on this very problem. It has developed mechanisms for translating the traditional X.509 certificates you would use for HTTPS TLS connections, as well as your SSH host keys, into their corresponding OpenPGP representations. This is all done without the need to patch existing software. There is a Firefox plugin to check the validity of a website using out-of-band web-of-trust verification; the only thing it needs is for you to adopt it and help the project (Chrome extension authors in particular are needed, but there are plenty of other ways to help!)

Neon Samurai

My certs don't care what IP is behind them provided the domain name matches. With vhosts, the limitation seems to be on the Apache side; it won't let me specify separate certs for separate vhosts on the same IP. Is this a specification of the TLS/SSL standard or a rule put in place for security purposes? The suggestion that TLS is bound to IP was what prompted my question. It seems to be the resulting effect; I'm just curious what the actual cause is, since the certs and the client browser/FTP/email app don't care.

robo_dev

Of course the big issue is if and how Perspectives catches on. The last line of defense, as we all know, is the user, and unfortunately even simple attacks such as 'secure Phishing' using self-signed certs can go very far and do a lot of damage. The issue of proving an airtight end-to-end encrypted session between two parties will continue to be an endless cat-and-mouse game, so any line of defense that can be BOTH technically strong AND painless/brainless for the end user will be welcomed with open arms. Unfortunately, the only way to really secure a browser IS with robust add-ons. Company branded toolbars, for example, or tools like Perspectives, of course. The need to have these add-ons is both the problem and the solution...this brings up a whole raft of compatibility issues as a 'user using a web browser' can define at least a dozen combinations of browser and platform. No easy task.

apotheon

The article is kind of rambly; I apologize for that. It is an important topic that I have put off discussing in a security article for far too long, though.

apotheon

The Monkeysphere project was mentioned in the article. Of course, part of the problem with that approach is that it's only a half-measure. With Monkeysphere, you're still using TLS, which is a highly limited technology that has proven itself prone to exploit. TLS was designed specifically with the CA-run PKI verification architecture in mind, and as such the limitations imposed by use of that architecture are built into the protocol itself. Among these limitations are some that make it very difficult and/or expensive for a lot of Websites to employ encryption.

apotheon

edit: My initial explanation was a mess and didn't really address the matter effectively. I'll try again. The problem is not Apache; it's the fact that the TLS handshake occurs prior to operations at the application layer. The handshake happens after IP resolution but before virtual host identification, so the IP address is all the server has to go on for identifying a given host. The end result is that either a separate IP address has to be used for each host, or the same certificate has to be used for all hosts at the single IP address -- virtual or otherwise.
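
To make that ordering concrete, here is a minimal Python sketch. The hostname is a placeholder, and note that the server_hostname argument also sends SNI -- the later TLS extension that eventually eased this limitation -- but the basic point stands: the handshake completes before any application data identifies the virtual host.

```python
import socket
import ssl

# The TLS handshake completes before any HTTP request (and its Host:
# header) is sent, so the server must choose a certificate knowing only
# the IP address and port it was reached on.
ctx = ssl.create_default_context()
raw = socket.create_connection(("www.example.com", 443))
tls = ctx.wrap_socket(raw, server_hostname="www.example.com")  # handshake here

# Only now, inside the established TLS session, does the application
# layer name the virtual host it actually wants.
tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
print(tls.recv(4096).decode("utf-8", "replace"))
tls.close()
```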

Neon Samurai

I'd say that's step one. Most of the add-ons exist because we still default to HTTP and other clear-text protocols. The moment a site has any kind of input field or dynamic content, it should be pushing over HTTPS. Boohoo, SFTP is slower than FTP; get over it. And there is no reason that email should be in the clear between client and server or between server and server.

This leaves the issue of cert trust, though, which is the primary point of the article. I'm still trying to figure out an answer to that. Better PGP implementations may do it, but we're still reaching out to a third-party key server. At least it's not a protection racket like the CA-based structures we're using now (which are all but useless when they simply hand top-level certs to governments and server certs to any criminal that pays the fee and slaps up a now-CA-approved fraud site).

"Easy" is the key here, though. Heck, I'm no slouch and even I can't get my desktop email client to properly use my certs; all encrypted email must be forwarded to and read through my notebook. Install Enigmail; import certs; open encrypted email... lovely error: "can not open key; passphrase missing"... yet the stupid thing won't bleeding ask me for the passphrase like it does properly on the other machine... grrr (yes, the relevant askpass packages are in place). And if I can't get something this simple working consistently, I really can't expect end users to do better.

Neon Samurai

I've not considered certs linked to IP, since I can take my cert from one IP and drop it on another provided the domain name remains the same (e.g., moving a site to a new server does not mean buying new certs). I guess the initial IP connection falls within this, though. It focuses on the mechanics of the connection, where I've been focusing on the configuration of the site and certs:

1. browser asks for IP based on domain name
2. browser establishes connection to IP
3. browser provides HTTP daemon with desired domain through header
4. browser receives site content from HTTP daemon if vhost domain is present

My original thinking was that it was an intended implementation at the application layer to keep connections from being redirected within the same session. In that case, though, it really only takes a rogue modification of the HTTP server to break security, so that does weaken it as a theory.

apotheon

> The moment one has any kind of input field or dynamic content; they should be pushing over https.

As long as the current limitations of HTTPS/TLS remain in place, we will not have universal use of encrypted protocols. The protocol was, in essence, specifically designed to accommodate an exclusionary adoption model, allowing business concerns to act as gatekeepers for the protocol. While new approaches to cutting out those gatekeepers are arising (such as Perspectives and Monkeysphere), the limitations built into the protocol to support those gatekeepers have not gone away.

> Better PGP implementations may do it but we're still reaching out to a third party key server.

There are many possible ways to get out-of-band confirmation, and most of them do not rely on a commercial "authority", so they avoid the pitfalls of the CA PKI model.

> I can't get my desktop email client to properly use my certs

What client is that? I'm guessing Thunderbird, by your reference to Enigmail. I've never had any problem getting OpenPGP working with Mutt and GnuPG, and as soon as netpgp gets up to snuff for that usage I don't really expect to have any problems there either. I've never actually used OpenPGP encryption with Thunderbird, though, so I'm not sure what's up with implementation and configuration there.

apotheon

The key to the problem is pretty well illustrated by the confluence of two things:

1. Hostname resolution occurs at the application layer.
2. The T in TLS stands for "transport", as in "transport layer".

The OSI model shows where the network (IP), transport (TCP and TLS), and application (DNS) layers land.

apotheon

I'm just making things up here, but . . . A mechanism whereby public keys are associated with a domain name and its registrant, and the client can check the registrant and domain name pair against a separate whois check, coupled with a Perspectives-like or Monkeysphere-like verification process, would be pretty difficult to circumvent.
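
In the same admittedly speculative spirit, a rough sketch of what such a check might look like follows; the whois parsing, the notary list, and the fingerprint values are all invented purely for illustration.

```python
import subprocess

DOMAIN = "example.com"
# Fingerprint of the key the site presented, plus a handful of
# hypothetical Perspectives/Monkeysphere-style notaries reporting the
# fingerprint they see from their own vantage points.
PRESENTED_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"
NOTARY_REPORTS = {"notary-a.example": PRESENTED_FPR,
                  "notary-b.example": PRESENTED_FPR}

# Check 1: does the registrant named in whois match the name bound to
# the key (e.g., in its OpenPGP user ID)? Field names vary by registrar,
# so real parsing would be messier than this.
whois_output = subprocess.run(["whois", DOMAIN],
                              capture_output=True, text=True).stdout
registrant_lines = [l for l in whois_output.splitlines() if "Registrant" in l]

# Check 2: do independent observers agree on the fingerprint we saw?
notaries_agree = all(fpr == PRESENTED_FPR for fpr in NOTARY_REPORTS.values())

print("registrant info:", registrant_lines[:3])
print("notaries agree:", notaries_agree)
```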

Neon Samurai

I'm using "certs" to refer to public/private certs, be they PGP, SSL, or whatever. Somehow I got stuck on the idea that using PGP certs would require broadcasting the server's private cert; I'm blaming the confusion on sleep deprivation. Your clarification looks like it'd work perfectly, though: client connects with server public cert and sends client public cert, and encrypted communications ensue. The trick would be handling the trust, or otherwise ensuring that a fraudulent server can't just generate its own cert and feed it into a PGP server.

I may have to go through a Lenny VM and note all its changes. Good for my learning, but boo that I can't use it as a first step in hardening servers. I missed the 'not active upstream' in the Debian bugs too. The odd thing that threw me off was it being in Squeeze testing, then removed just before release, yet the Bastille holding in Unstable has since been bumped into Wheezy testing. Hm... no quick-to-run Bastille or replacement hardening script... disappointing, but I can't blame Debian for not maintaining what stalled upstream. (Now, as to why aircrack is not in place... but at least the Lenny package installs clean on Squeeze, so no functionality loss at least.) Well, at least I can stop waiting and get on with writing custom hardening scripts into my own buildpacks.

apotheon

I'm not sure what you mean about certificates. A public key for the Webserver would allow clients to send encrypted data to the server. As part of the initial handshake, using that cert to initiate it, an encrypted copy of the client's public key could be sent to the server, which would allow the server to send encrypted data to the client. Alternatively, public keys could be used to exchange a symmetric session key -- which is probably the way to go, given that public key encryption tends to be fairly processor-intensive, and less strong for a given key length.

There are NVIDIA binary drivers for FreeBSD -- but no ATI/AMD binary drivers. This results in problems with hardware acceleration using ATI/AMD adapters. I personally do not much care about the hardware acceleration at this point; I can get by with software acceleration or partial hardware acceleration, as long as I get to use the full resolution of my adapter and display. I'm not sure whether there are problems with NVIDIA Optimus in particular, though. I have heard rumors there might be such an issue.

The current major problem I've encountered with graphics drivers is for the new line of Intel HD adapters -- particularly with the Ironlake chipset. As things stand, the Intel driver does not work with these adapters, and the vesa driver must be used instead, which maxes out at 1024x768 resolution. That's obviously less than ideal for a 1600x900 native resolution display, especially considering the problem of these resolutions using quite distinct aspect ratios (4:3 vs. 16:9). The FreeBSD Foundation just awarded a developer a grant to work on solving graphics problems that should fix that, though, so I guess there's hope for the near future.

re: Bastille -- It looks like it might be a dead project, which would explain why it has been removed from Debian's APT archives. The Bastille project page is talking about changes that are "coming January 14th, 2008." Meanwhile, a comment in a Debian bug report says "Upstream is no longer active with this tool and no updates are forthcoming." I don't know if the guy running the project is planning to get back to it or not, but things look pretty grim at the moment.
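
On the session-key point at the start of that comment: the usual pattern is to use the public key only to protect a short symmetric key, then encrypt the bulk traffic symmetrically. A minimal sketch with the Python cryptography package, using RSA and AES-GCM purely for illustration:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Server key pair; in practice the client would already hold the public key.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client side: pick a random symmetric session key and encrypt it with the
# server's public key. Only this small value pays the asymmetric cost.
session_key = AESGCM.generate_key(bit_length=128)
wrapped = server_key.public_key().encrypt(session_key, oaep)

# Bulk data is then encrypted symmetrically, which is far cheaper per byte.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual HTTP payload", None)

# Server side: unwrap the session key and decrypt the payload.
recovered = server_key.decrypt(wrapped, oaep)
print(AESGCM(recovered).decrypt(nonce, ciphertext, None))
```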

Neon Samurai

I was going to suggest a single server-side cert, but it's the client cert you need for the connection, and a common client-side cert for multiple clients defeats the purpose pretty quickly. Still, it would be very interesting to see an OpenSSH-type implementation if someone can figure out the details.

I can live with the older version of Firefox so far, but that may change as HTML5 becomes more common. My problem may be the breakage with a dist-upgrade from Lenny, where Bastille is already an established component. Hopefully it turns up in backports tuned for Squeeze, or the Bastille folks do a Squeeze-specific packaging of it. By the end of the year we'll know if Bastille is a common need or just me and a few others without the dev skills to maintain it.

NVIDIA is one of the primary reasons I haven't looked further at the BSDs. I've not heard of them picking up that binary blob yet, or NVIDIA shipping one as they do for the Linux-based distros. On the server side, I really should cut a VM and get on with it, though, if only for interest's sake.

apotheon

An implementation of SHTTP would be nice, but chances are good its implementation would not work as well as we would like. The default way for SSH to work is tied to system accounts, which gets especially tricky for shared hosting where -- if you're lucky -- you get exactly one SSH account. A stand-alone SSH server that manages its own user account list, separate from the host system's, would be awfully nice for an SHTTP implementation.

One of the benefits of a hypothetical SHTTP would be SSH compression -- a big help for bandwidth minimization on larger transfers. Another is the fact that the single most popular SSH implementation in the world, probably by at least an order of magnitude (if not two), is OpenSSH: well tested, fairly portable, and copyfree licensed.

re: Debian -- fate has coerced me into using Debian a bit recently, and this will probably go on for a few months. I have not used Debian with regularity for about five years (though I've kept my hand in, mostly by helping others who have been using it, including my girlfriend). Recent reacquaintance with Debian is really making me miss FreeBSD where I'm currently using Debian, though. Things like my favorite window manager not being in the APT archives and not building correctly, and having to go directly to Mozilla to get a newer Firefox than 3.5 (where Iceweasel is just a fox in weasel clothing), are not well calculated to keep me happy.

Neon Samurai

Like SFTP: HTTP over SSH rather than HTTP over TLS/SSL? I'm not sure how that would work, though, as SFTP is a designed function of OpenSSH, where the closest HTTP-related function is tunneling. Hm... could SSH tunnels be opened on the fly at browser connection time under some rational implementation? I did notice my mentioning HTTP/TLS when TLS was the topic of discussion, but let it slide... posting on too little coffee, perhaps. Discussing encrypted protocols when one of them is the problem gets kind of tricky. I still think a big part of the problem is band-aid solutions to continue clear-text protocols rather than replacing them properly.

You guessed it, but Enigmail was the dead giveaway. I backed up my home directory, installed Squeeze, and dropped my ~/.directory back into place, only to find out Thunderbird/Enigmail seem to have a broken connection to askpass (tried qt, gtk, and plain old basic... no go). The frustration is that my notebook has the exact same build: package list installed through the same build scripts and configuration. The only difference is that the Thunderbird/Enigmail combination started with a clean install rather than a restored ~/.directory. My hope was that this was some kind of bug that would get squished before Squeeze replaced Lenny. Now that Squeeze is the new Stable, it's less excusable. I may have to back up my mail folders and start with a clean install if asked to return the notebook to work, or if switching machines to read email bugs me enough. I may have to look at Mutt again or go back to my Pine origins. I guess a maximized terminal window gives me lots of space before wrapping text, just as much as a maximized GUI client window does. No more X forwarding to check remote email, either.

I'm also dumbfounded that they didn't include Bastille in Debian 6. That's not a "damn, that would have been nice" but a confounding WTF?!! I can backport or find my own aircrack (it was in Deb6 until six months ago or so, when it was pulled). Evolution-mapi would have been nice, but not a deal breaker by any means. But no Bastille? I can sure tell you that my Lenny servers can't have a dist-upgrade unless the backports Bastille (Deb7) is maintained to behave properly with Deb6. It's disappointing, since Bastille was a primary reason for switching to Debian, and dropping it without an alternative seems very much contrary to Debian's "Stability and Security" values. At least I have a year or so of transition time to trust it as a backport or replace it entirely.

At this time, I've closed the tabs I had lurking on the "why is package X not in testing yet" site (one per program of interest) and added backports into my workstation and test VMs' repository lists. Backports still appears empty and has yet to add Squeeze to the website package search, but it's still pretty close to version launch, so that's not yet a concern.
