There are stacks and stacks of research explaining why we prefer convenience over security. Does that mean we can't have both?
I just finished writing about a file-syncing app that dramatically simplifies my digital life and the lives of 25 million other people. Why that article? I was concerned. Noted experts were saying the application in question may have security issues.
Oh, great. That means I can either use the app and fret about whether my data is safe, or stop using it and lose a very handy tool. Sorry, I want another option.
This is not a new problem. Cavemen tired of securing the cave door were making a choice, just like we do every day, between security and convenience. I'm wondering if there may be some middle ground. You know, like an automated door.
Why can't we have both?
Maybe we can. I bumped (metaphorically) into someone who thinks it is possible to have both security and convenience, at least digitally.
While reading my new Association for Computing Machinery (ACM) magazine, I noticed that ACM had awarded Dr. Bryan Parno the 2010 Doctoral Dissertation Award for his security work at Carnegie Mellon University.
The abstract hooked me:
"I argue that we can resolve the tension between security and features by leveraging the trust users have in one device to enable them to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems."
I continued reading Dr. Parno's paper: Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers. And yes, the title foretells my struggle with the rest of the paper. One thing I did get: this is important. It might be our "digital" automated door.
I contacted Dr. Parno. He listened as I explained. I mentioned that I have questions; well, several, actually.

Kassner: I like your example of user trust:
"To trust an entity X with their private data, users must believe that at no point in the future will they have cause to regret having given their data to X."
The essence of the paper is that it is possible to have both convenience as supplied by application features and assurance that our private and sensitive information will remain secure. Would you give us an overview of how you see that happening?

Parno: One of the observations that underlies much of the work in my thesis is that providing security on demand allows you to enjoy both security and features/performance.
For example, you probably care less about security when you're playing a video game or watching a movie than you do when you're entering your taxes. This is true even within a single application; i.e., you care more about security when you visit your bank's website than when you're reading the news.
By designing security systems that can be invoked on demand, we can protect your important activities without imposing on everything else you do on the computer, unlike previous security solutions, which tended to be all or nothing.

Kassner: The dissertation is divided into the following topics:
- Bootstrapping Trust in a Commodity Computer
- On-Demand Secure Code Execution on Commodity Computers
- Using Trustworthy Host-Based Information in the Network
- Verifiable Computing: Secure Code Execution Despite Untrusted Software and Hardware
I would like to look at each one individually. First, I understand "Bootstrapping Trust in a Commodity Computer" to be securing an individual's personal computer:
"We need a system to allow conscientious users to bootstrap trust in the local Trusted Platform Module (TPM), so that they can leverage that trust to establish trust in the entire platform."
How do you plan to accomplish this?

Parno: To trust a computer, you need to trust both the computer's hardware and its software. If you own the computer, you can take standard steps to ensure the hardware is trustworthy: you can buy the hardware from a reputable vendor, lock the doors to your house when you leave, only let people you trust tinker with it, and so on. Fortunately, humans are pretty good at protecting their physical property.
However, we need a way to connect our trust in the physical hardware to the software that's running on the computer. Security devices, such as the Trusted Platform Module (TPM), are one way of making that connection between hardware and software.
Unfortunately, these devices speak binary, and they offer assurances via cryptography, two areas humans are quite bad at. Thus, we propose using one device that you trust, like your cell phone or a custom-built USB fob, to talk to the security device on your computer and let you know, based on the security device's report, whether it's safe to use your computer.
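In rough code, the trusted device's job boils down to checking that the computer's report is authentic and matches a known-good value. Here is a minimal sketch of that idea; the key names, the known-good value, and the use of an HMAC (standing in for the TPM's actual cryptographic signature) are my illustrative assumptions, not the real TPM interface:

```python
import hashlib
import hmac

# Hypothetical known-good measurement of the computer's boot software,
# e.g. one published by the OS vendor. Not a real value.
KNOWN_GOOD = hashlib.sha256(b"trusted-bootloader-and-os-image").hexdigest()

def verify_report(report_hex: str, mac_hex: str, shared_key: bytes) -> bool:
    """Trusted-device side: accept only if the security chip's report is
    authentic (the MAC verifies) and matches the software we expect."""
    expected_mac = hmac.new(shared_key, bytes.fromhex(report_hex),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_mac, mac_hex):
        return False  # report was forged or tampered with in transit
    return hmac.compare_digest(report_hex, KNOWN_GOOD)

# Simulated security-chip side, for illustration only. The shared key is
# what the barcode or special-purpose cable would help establish.
key = b"key-established-via-barcode-or-cable"
report = hashlib.sha256(b"trusted-bootloader-and-os-image").hexdigest()
mac = hmac.new(key, bytes.fromhex(report), hashlib.sha256).hexdigest()
assert verify_report(report, mac, key)
```

The point of the sketch is the division of labor: the computer's security chip does the binary-and-cryptography part, and the phone reduces the human's job to a yes/no answer.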
Part of my thesis examines ways to securely make the connection between your trusted device (e.g., your cell phone) and the security device on your computer. I discuss the advantages and disadvantages of solutions ranging from using a special-purpose hardware interface to connect the two devices, to stamping a 2-D barcode on the outside of your computer and using the camera on your cell phone to read the barcode.

Kassner: The next challenge, "On-Demand Secure Code Execution on Commodity Computers," refers to securely using unknown computers. You suggest a novel approach using something called Flicker.
Briefly, what is Flicker, and how does it help?

Parno: Flicker is an architecture that leverages some new features that AMD and Intel added to CPUs in order to provide a secure execution environment on demand. The goal is to run a small piece of security-sensitive code in complete isolation from the rest of the software (and most of the hardware) on your computer, so that any malicious code you might have installed can't interfere with it.
For example, consider a VPN client that expects you to type in your username and password to log in to your company's network. If there's a bug in the UI portion of the client, or in your operating system, or in any of the device drivers installed in your operating system, or in any programs that run as administrator, then an attacker can potentially capture your username and password.
With Flicker, we extract the password-handling code from the VPN software, and run it in an isolated environment, so that bugs in all of the other software (even the operating system) won't affect the security of the password-handling code.
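To make the extraction concrete, here is a sketch of what such a minimal password-handling module might compute; this is my illustrative example, not Flicker's actual code, and the challenge-response scheme, salt, and iteration count are assumptions. The key property is that only the derived response, never the raw password, crosses back into the untrusted software:

```python
import hashlib
import hmac

def password_module(password: bytes, server_challenge: bytes) -> bytes:
    """The small security-sensitive piece that would run inside the
    isolated environment: it converts the password into a one-time
    challenge response, so the raw password never leaves the module."""
    # Derive a key from the password (salt and iteration count are
    # illustrative choices).
    key = hashlib.pbkdf2_hmac("sha256", password, b"vpn-salt", 100_000)
    # Answer the server's challenge without exposing the password.
    return hmac.new(key, server_challenge, hashlib.sha256).digest()

# The untrusted OS and VPN UI only ever see the response bytes.
response = password_module(b"hunter2", b"challenge-from-vpn-server")
```

Because the module is tiny, there is far less code to get wrong, and bugs in the surrounding operating system can no longer leak the password.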
Flicker can also attest to what code was run and the fact that it ran in a protected environment. In other words, using Flicker, you can tell that the password dialogue box that just popped up is indeed running with Flicker protections, and/or your company can check that you typed your password into the correct password application, not into a piece of malicious code.

Kassner: "Using Trustworthy Host-Based Information in the Network" is your term for securing network traffic. You mention the process must have these properties:
- Annotation Integrity: Malicious end hosts or network elements should be unable to alter or forge the data contained in message annotations.
- Stateless In-Network Processing: To ensure the scalability of network elements that rely on end host information, we seek to avoid keeping per-host or per-flow state on these devices.
- Privacy Preservation: We aim to leak no more user information than is already leaked in present systems. In other words, we do not aim to protect the privacy of a user who visits a website and enters personal information.
- Incremental Deployability: While we believe that trustworthy end host information would be useful in future networks, we strive for a system that can bring immediate benefit to those who deploy it.
- Efficiency: To be adopted, the architecture must not unduly degrade client-server network performance.
Now that you have the endpoints secure, would you briefly explain how the above properties are implemented?

Parno: Many network protocols, particularly security-related protocols, spend a lot of resources trying to reconstruct information that's already known to the source of the network traffic.
For example, research shows that knowing how many emails your computer has sent recently and to how many different destinations is an excellent predictor of whether the email is spam. A recipient of any individual email has trouble assembling those statistics, but of course the sender can easily compute them.
By using an architecture like Flicker, we can have a trusted piece of code compile these statistics and attach cryptographically-authenticated summaries to each outbound email or network packet. Then, mail servers that receive email from me will see that I've only sent two emails in the last hour, and so my emails are less likely to be marked as spam.
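A minimal sketch of such an annotation might look like the following. This is my illustration of the idea, not the thesis's actual protocol: the statistic names and the pre-shared key are assumptions, and an HMAC stands in for whatever authentication the attested module would really use:

```python
import hashlib
import hmac
import json

# Hypothetical key held by the isolated, attested module (not by the
# untrusted OS), established with the mail server ahead of time.
MODULE_KEY = b"key-known-only-to-trusted-module"

def annotate(message: bytes, sent_last_hour: int,
             distinct_recipients: int) -> dict:
    """Trusted-module side: attach authenticated send statistics."""
    stats = {"sent_last_hour": sent_last_hour,
             "distinct_recipients": distinct_recipients}
    blob = json.dumps(stats, sort_keys=True).encode()
    # Bind the statistics to this specific message.
    tag = hmac.new(MODULE_KEY, blob + message, hashlib.sha256).hexdigest()
    return {"stats": stats, "tag": tag}

def verify(message: bytes, annotation: dict) -> bool:
    """Mail-server side: reject annotations the module didn't produce."""
    blob = json.dumps(annotation["stats"], sort_keys=True).encode()
    expected = hmac.new(MODULE_KEY, blob + message,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, annotation["tag"])
```

Annotation integrity falls out of the authentication tag, and statelessness falls out of the fact that the statistics travel with the message instead of being reconstructed at each network element.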
You can use a similar approach for other protocols, like denial-of-service mitigation and worm prevention. Of course, we also have to be careful to preserve user privacy, through a combination of anonymity techniques, careful choice of statistics, and the use of small, isolated, verifiable code modules.

Kassner: Finally, "Verifiable Computing: Secure Code Execution Despite Untrusted Software and Hardware," which I understand to be about securely interacting with outsourced computers and networks. You intend to accomplish this using something called Yao's Garbled Circuits and homomorphic encryption.
I get homomorphic encryption, but what are garbled circuits, and how do they help?

Parno: Garbled circuits are a clever technique developed by Professor Andrew Yao in the 1980s. They let two people compute an answer to a computation without revealing their inputs to the computation.
For example, suppose you and I want to learn which one of us is older than the other, but we're too embarrassed to say our ages directly. Using garbled circuits, we could exchange some information that would allow us to learn the answer (e.g., that I'm younger than you), but I wouldn't learn anything else about your age, and vice versa.
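To show the core trick, here is a toy garbling of a single AND gate, the building block such protocols compose into full circuits. This is a teaching sketch, not production code: SHA-256 stands in for a proper symmetric cipher, and real constructions add optimizations like point-and-permute. Each wire gets two random labels (one per bit), and the evaluator, holding exactly one label per input wire, can decrypt exactly one row of the table:

```python
import hashlib
import random
import secrets

LABEL = 16  # bytes per wire label; SHA-256 output is 32 bytes

def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def xor(p: bytes, q: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(p, q))

def garble_and_gate():
    """Generator: pick two random labels per wire (index = the bit it
    encodes) and build a shuffled table of encrypted output labels."""
    a = [secrets.token_bytes(LABEL) for _ in range(2)]
    b = [secrets.token_bytes(LABEL) for _ in range(2)]
    out = [secrets.token_bytes(LABEL) for _ in range(2)]
    table = []
    for x in (0, 1):
        for y in (0, 1):
            pad = H(a[x], b[y])                          # pad from input labels
            plain = out[x & y] + b"\x00" * (32 - LABEL)  # label + zero tag
            table.append(xor(pad, plain))
    random.shuffle(table)  # hide which row matches which input pair
    return a, b, out, table

def evaluate(table, label_a: bytes, label_b: bytes) -> bytes:
    """Evaluator: only the row garbled under these two labels decrypts
    to a valid (zero-tagged) plaintext, revealing one output label and
    nothing about the other party's input bit."""
    pad = H(label_a, label_b)
    for row in table:
        plain = xor(pad, row)
        if plain[LABEL:] == b"\x00" * (32 - LABEL):
            return plain[:LABEL]
    raise ValueError("no row decrypted")
```

Since the evaluator sees only meaningless random labels, it learns the gate's output (once the generator maps the label back to a bit) without learning either input, which is exactly the property the age-comparison example relies on.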
As part of my thesis work, I discovered that by modifying Yao's construction, I could create a protocol that allows you to outsource the computation of a function to another party and then verify the work was correct.
For instance, I might pay you to compute the Fourier transform of some data. When you return your answer, I want to make sure that it really is the transform I asked for on the data I provided, not a random number you chose in order to save money.
The modified version of Yao's protocol allows me to constrain the way in which you compute your answer, so that I can efficiently check your results when you're done. In contrast, fully homomorphic encryption, on its own, only protects the secrecy of your data; it doesn't tell you anything about what type of computation someone else has done on your data.
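Parno's protocol achieves this with cryptography. As a much simpler stand-in for the general idea, checking outsourced work more cheaply than redoing it, here is Freivalds' classic probabilistic check for outsourced matrix multiplication (my example for illustration, not a technique from the thesis):

```python
import random

def freivalds_check(A, B, C, trials=30):
    """Probabilistically verify that C == A x B using O(n^2) work per
    trial, instead of the O(n^3) needed to redo the multiplication.
    A wrong C survives each trial with probability at most 1/2."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr as cheap matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught the worker returning a wrong answer
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
honest_answer = [[19, 22], [43, 50]]
assert freivalds_check(A, B, honest_answer)
```

The flavor is the same as in the Fourier-transform example: the client spends far less work checking the answer than the worker spent (or should have spent) computing it.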
Unfortunately, it turns out that just modifying Yao's protocol isn't enough. Creating the garbled circuit requires a lot of work, and once you compute a single answer for me, I have to create a whole new garbled circuit. We fix this by applying a layer of encryption on top of the protocol, so that we can reuse the circuit many times, and hence amortize the initial cost over many computations.
It seems complicated, but taking a bite out of the age-old convenience-versus-security dilemma would make even the caveman proud.
I would like to thank Dr. Parno for his insight, for allowing me to quote his thesis, and for the use of his slides.

Update: I just learned that the file-syncing app I referred to has experienced another security problem:
"Yesterday (June 19, 2011) we made a code update at 1:54pm Pacific time that introduced a bug affecting our authentication mechanism. We discovered this at 5:41pm and a fix was live at 5:46pm. A very small number of users (much less than 1 percent) logged in during that period, some of whom could have logged into an account without the correct password. As a precaution, we ended all logged in sessions."
We know this will continue to happen. We make mistakes. New approaches like Dr. Parno's are needed.