Security

Security or convenience: Does it have to be a choice?

There are stacks and stacks of research explaining why we prefer convenience over security. Does that mean we can't have both?

I just finished writing about a file-syncing app that dramatically simplifies my digital life and the digital lives of 25 million other people. Why that article? I was concerned: noted experts were saying the application in question may have security issues.

Oh, great. That means I can keep using the app and fret about whether my data is safe, or stop using it and lose a very handy tool. Sorry, I want another option.

This is not a new problem. Cavemen tired of securing the cave door were making a choice between security and convenience, just like we do every day. I'm wondering if there may be some middle ground. You know, like an automated door.

Why can't we have both?

Maybe we can. I bumped (metaphorically) into someone who thinks it is possible to have both security and convenience, at least digitally.

While reading my new Association for Computing Machinery (ACM) magazine, I noticed that ACM had awarded Dr. Bryan Parno the 2010 Doctoral Dissertation Award for his security work at Carnegie Mellon University.

The abstract hooked me:

"I argue that we can resolve the tension between security and features by leveraging the trust users have in one device to enable them to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems."

I continued reading Dr. Parno's paper: Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers. And yes, the title foretells my struggle with the rest of the paper. One thing I did get: this is important. It might be our "digital" automated door.

I contacted Dr. Parno. He listened as I explained. I mentioned that I had questions; well, several, actually.

Kassner: I like your example of user trust:

"To trust an entity X with their private data, users must believe that at no point in the future will they have cause to regret having given their data to X."

The essence of the paper is that it is possible to have both convenience, as supplied by application features, and assurance that our private and sensitive information will remain secure. Would you give us an overview of how you see that happening?

Parno: One of the observations that underlies much of the work in my thesis is that providing security on demand allows you to enjoy both security and features/performance.

For example, you probably care less about security when you're playing a video game or watching a movie than you do when you're entering your taxes. This is true even within a single application; i.e., you care more about security when you visit your bank's website than when you're reading the news.

By designing security systems that can be invoked on demand, we can protect your important activities without imposing on everything else you do on the computer, unlike previous security solutions, which tended to be all or nothing.

Kassner: The dissertation is divided into the following topics:
  • Bootstrapping Trust in a Commodity Computer
  • On-Demand Secure Code Execution on Commodity Computers
  • Using Trustworthy Host-Based Information in the Network
  • Verifiable Computing: Secure Code Execution Despite Untrusted Software and Hardware

I would like to look at each one individually. First, I understand "Bootstrapping Trust in a Commodity Computer" to be securing an individual's personal computer:

"We need a system to allow conscientious users to bootstrap trust in the local Trusted Platform Module (TPM), so that they can leverage that trust to establish trust in the entire platform."

How do you plan to accomplish this?

Parno: To trust a computer, you need to trust both the computer's hardware and its software. If you own the computer, you can take standard steps to ensure the hardware is trustworthy: you can buy the hardware from a reputable vendor, lock the doors to your house when you leave, only let people you trust tinker with it, etc. Fortunately, humans are pretty good at protecting their physical property.

However, we need a way to connect our trust in the physical hardware to the software that's running on the computer. Security devices, such as the Trusted Platform Module (TPM), are one way of making that connection between hardware and software.

Unfortunately, these devices speak binary, and they offer assurances via cryptography, two areas humans are quite bad at. Thus, we propose using one device that you trust, like your cell phone or a custom-built USB fob, to talk to the security device on your computer and let you know, based on the security device's report, whether it's safe to use your computer.

Part of my thesis examines ways to securely make the connection between your trusted device (e.g., your cell phone) and the security device on your computer. I discuss the advantages and disadvantages of solutions ranging from using a special-purpose hardware interface to connect the two devices, to stamping a 2-D barcode on the outside of your computer and using the camera on your cell phone to read the barcode.
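To make that concrete, here is a toy sketch of the challenge-response a trusted device might run against a computer's security chip. It is my illustration, not Dr. Parno's code, and every name in it is invented. A real TPM signs its report with an asymmetric attestation key (whose public half the phone could learn from the barcode Dr. Parno mentions); to keep the sketch short and standard-library-only, a shared secret stands in for that signature.

    import hashlib, hmac, os

    # Hypothetical stand-ins; a real TPM signs with an attestation key,
    # and the phone would verify with the matching public key.
    TPM_KEY = b"stand-in-for-the-tpm-attestation-key"
    KNOWN_GOOD = hashlib.sha256(b"expected boot software stack").hexdigest()

    def tpm_quote(nonce, pcr_digest):
        # What the computer's security chip reports (greatly simplified).
        return hmac.new(TPM_KEY, nonce + pcr_digest.encode(),
                        hashlib.sha256).hexdigest()

    def phone_verifies(ask_computer):
        # The trusted device challenges the computer before you log in.
        nonce = os.urandom(16)  # a fresh nonce defeats replayed quotes
        pcr_digest, quote = ask_computer(nonce)
        genuine = hmac.compare_digest(quote, tpm_quote(nonce, pcr_digest))
        return genuine and pcr_digest == KNOWN_GOOD

    # The phone displays "safe to use" only if the quote checks out.
    honest = lambda nonce: (KNOWN_GOOD, tpm_quote(nonce, KNOWN_GOOD))
    print(phone_verifies(honest))  # True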

Kassner: The next challenge, "On-Demand Secure Code Execution on Commodity Computers", refers to securely using unknown computers. You suggest a novel approach using something called Flicker.

Briefly, what is Flicker, and how does it help?

Parno: Flicker is an architecture that leverages some new features that AMD and Intel added to CPUs in order to provide a secure execution environment on demand. The goal is to run a small piece of security-sensitive code in complete isolation from the rest of the software (and most of the hardware) on your computer, so that any malicious code you might have installed can't interfere with it.

For example, consider a VPN client that expects you to type in your username and password to log in to your company's network. If there's a bug in the UI portion of the client, or in your operating system, or in any of the device drivers installed in your operating system, or in any programs that run as administrator, then an attacker can potentially capture your username and password.

With Flicker, we extract the password-handling code from the VPN software, and run it in an isolated environment, so that bugs in all of the other software (even the operating system) won't affect the security of the password-handling code.

Flicker can also attest to what code was run and the fact that it ran in a protected environment. In other words, using Flicker, you can tell that the password dialogue box that just popped up is indeed running with Flicker protections, and/or your company can check that you typed your password into the correct password application, not into a piece of malicious code.
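Flicker itself relies on the late-launch features of AMD and Intel processors, which cannot be reproduced in a few lines, but the division of labor is easy to sketch. The following is my illustration, with invented names, of the VPN example: the password-handling code is a tiny module that shares a key with the VPN gateway, and everything outside the isolated session, including the OS, sees only an opaque blob.

    import hashlib, hmac, os

    # Hypothetical key provisioned to the isolated module and the VPN
    # gateway; the OS, drivers, and the rest of the client never see it.
    MODULE_KEY = b"key-known-only-to-module-and-gateway"

    def isolated_password_module(password, challenge):
        # Runs inside the protected session; proves knowledge of the
        # password without ever releasing it to the untrusted software.
        return hmac.new(MODULE_KEY, challenge + password,
                        hashlib.sha256).digest()

    # The untrusted VPN client just relays bytes. Even a compromised OS
    # that records this exchange cannot recover the password or reuse
    # the proof against a fresh challenge.
    challenge = os.urandom(16)  # sent by the VPN gateway
    blob = isolated_password_module(b"my-vpn-password", challenge)
    print(blob.hex())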

Kassner: "Using Trustworthy Host-Based Information in the Network" is your term for securing network traffic. You mention the process must have these properties:
  • Annotation Integrity: Malicious end hosts or network elements should be unable to alter or forge the data contained in message annotations.
  • Stateless In-Network Processing: To ensure the scalability of network elements that rely on end host information we seek to avoid keeping per-host or per-flow state on these devices.
  • Privacy Preservation: We aim to leak no more user information than is already leaked in present systems. In other words, we do not aim to protect the privacy of a user who visits a website and enters personal information.
  • Incremental Deployability: While we believe that trustworthy end host information would be useful in future networks, we strive for a system that can bring immediate benefit to those who deploy it.
  • Efficiency: To be adopted, the architecture must not unduly degrade client-server network performance.

Now that you have the endpoints secure, would you briefly explain how the above properties are implemented?

Parno: Many network protocols, particularly security-related protocols, spend a lot of resources trying to reconstruct information that's already known to the source of the network traffic.

For example, research shows that knowing how many emails your computer has sent recently and to how many different destinations is an excellent predictor of whether the email is spam. A recipient of any individual email has trouble assembling those statistics, but of course the sender can easily compute them.

By using an architecture like Flicker, we can have a trusted piece of code compile these statistics and attach cryptographically-authenticated summaries to each outbound email or network packet. Then, mail servers that receive email from me will see that I've only sent two emails in the last hour, and so my emails are less likely to be marked as spam.

You can use a similar approach for other protocols, like denial-of-service mitigation and worm prevention. Of course, we also have to be careful to preserve user privacy, through a combination of anonymity techniques, careful choice of statistics, and the use of small, isolated, verifiable code modules.
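Here is a rough sketch of what an authenticated annotation might look like for the email example; it is mine, not Dr. Parno's, and the key and field names are invented. In the real design, the statistics module would run isolated (Flicker-style) and the receiving server would check an attestation that this exact code produced the summary, rather than trusting a pre-shared key.

    import hashlib, hmac, json, time

    MODULE_KEY = b"demo-key-held-only-by-the-trusted-module"  # hypothetical
    send_times = []  # sending history kept inside the trusted module

    def annotate_outbound(message):
        # Build an authenticated sending-rate summary for one email.
        now = time.time()
        send_times.append(now)
        stats = {"sent_last_hour": sum(1 for t in send_times
                                       if now - t < 3600),
                 "ts": int(now)}
        body = json.dumps(stats, sort_keys=True).encode()
        digest = hashlib.sha256(message).digest()  # bind stats to message
        mac = hmac.new(MODULE_KEY, body + digest,
                       hashlib.sha256).hexdigest()
        return {"stats": stats, "mac": mac}

    # A receiving mail server recomputes the MAC and then feeds the
    # vouched-for "sent_last_hour" figure into its spam scoring.
    print(annotate_outbound(b"Hi Bob, lunch on Friday?"))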

Kassner: Finally, "Verifiable Computing: Secure Code Execution Despite Untrusted Software and Hardware," which I understand to be about securely interacting with outsourced computers and networks. You intend to accomplish this using something called Yao's Garbled Circuits and homomorphic encryption.

I get homomorphic encryption, but what are garbled circuits and how do they help?

Parno: Garbled circuits are a clever technique developed by Professor Andrew Yao in the 1980s. They let two people compute the answer to a computation without revealing their inputs to each other.

For example, suppose you and I want to learn which one of us is older than the other, but we're too embarrassed to say our ages directly. Using garbled circuits, we could exchange some information that would allow us to learn the answer (e.g., that I'm younger than you), but I wouldn't learn anything else about your age, and vice versa.

As part of my thesis work, I discovered that by modifying Yao's construction, I could create a protocol that allows you to outsource the computation of a function to another party and then verify the work was correct.

For instance, I might pay you to compute the Fourier transform of some data. When you return your answer, I want to make sure that it really is the transform I asked for on the data I provided, not a random number you chose in order to save money.

The modified version of Yao's protocol allows me to constrain the way in which you compute your answer, so that I can efficiently check your results when you're done. In contrast, fully-homomorphic encryption, on its own, only protects the secrecy of your data; it doesn't tell you anything about what type of computation someone else has done on your data.

Unfortunately, it turns out that just modifying Yao's protocol isn't enough. Creating the garbled circuit requires a lot of work, and once you compute a single answer for me, I have to create a whole new garbled circuit. We fix this by applying a layer of encryption on top of the protocol, so that we can reuse the circuit many times, and hence amortize the initial cost over many computations.
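Garbled circuits are usually described rather than shown, but a single gate fits in a screenful. The toy below is my sketch, not anything from the thesis: it garbles one AND gate. Each wire gets two random labels (one per bit), the truth table is encrypted so that a row opens only under the right pair of input labels, and the evaluator, holding exactly one label per input, learns the output label and nothing about the other party's bit. In the millionaires setting, the circuit would compute a comparison instead of an AND, the evaluator's input labels would arrive via oblivious transfer, and the cipher would be stronger than this hash-based pad.

    import hashlib, os, random

    LABEL = 16         # bytes per wire label
    TAG = b"\x00" * 8  # redundancy that marks the correctly decrypted row

    def pad(ka, kb):   # hash-based one-time pad for one table row
        return hashlib.sha256(ka + kb).digest()[:LABEL + len(TAG)]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def garble_and_gate():
        # Two labels per wire; index 0 encodes bit 0, index 1 encodes bit 1.
        a = [os.urandom(LABEL) for _ in range(2)]
        b = [os.urandom(LABEL) for _ in range(2)]
        out = [os.urandom(LABEL) for _ in range(2)]
        rows = [xor(pad(a[x], b[y]), out[x & y] + TAG)
                for x in (0, 1) for y in (0, 1)]
        random.shuffle(rows)  # hide which row encodes which input pair
        return a, b, out, rows

    def evaluate(label_a, label_b, rows):
        # The evaluator holds one label per input wire, never both.
        for row in rows:
            plain = xor(pad(label_a, label_b), row)
            if plain.endswith(TAG):  # only the matching row opens cleanly
                return plain[:LABEL]
        raise ValueError("no row decrypted")

    a, b, out, rows = garble_and_gate()
    result = evaluate(a[1], b[1], rows)       # evaluate AND(1, 1)
    print("output bit:", out.index(result))   # prints 1; inputs stay hidden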

Final thoughts

It seems complicated, but taking a bite out of the age-old convenience-versus-security dilemma would make even the caveman proud.

I would like to thank Dr. Parno for his insight, for allowing me to quote his thesis, and for the use of his slides.

Update: I just learned that the file-syncing app I refer to experienced another security problem:

"Yesterday (June 19, 2011) we made a code update at 1:54pm Pacific time that introduced a bug affecting our authentication mechanism. We discovered this at 5:41pm and a fix was live at 5:46pm. A very small number of users (much less than 1 percent) logged in during that period, some of whom could have logged into an account without the correct password. As a precaution, we ended all logged in sessions."

We know this will continue to happen. We make mistakes. New approaches like Dr. Parno's are needed.

About

Information is my field...Writing is my passion...Coupling the two is my mission.

Comments
jeferris

My question is not whether his arguments are valid. They may be. But my issue is: how do we, as ordinary people, find out who we can legitimately TRUST? Do we go with a legislated answer? 'Trust our GOV... they'll take care of you and protect you.' That statement right now can develop 101 flame wars from amongst any random 100 people, and not just supportive ones, or even just about Airport Security. Do we trust Corporations? Microsoft says they are working on security as their number one priority. Linux says they've got a build for that. Apple says they've almost never been hacked, so trust them. Or Norton now declaims 'we get what we pay for, so buy their products'. Or perhaps we must trust the members of the Fifth Wall... the press. Umm, sorry. I read your opinion pieces, and appreciate that you may know more than I about a lot of stuff, but I don't have any real way to prove just where your affiliations lie. Yes, the old cliché of 'follow the money' is always valid, but in today's financial world not everyone has full disclosure, nor full access to relevant information. So for this, or any other security system, to Really Work, first you need to find a way for people to KNOW whom they can trust. And isn't that where the real 'security vs. convenience' issue arises?

oldbaritone

Every code can be broken. It's just a matter of how long it takes to break it. Years ago, in the days of 4.77 MHz PCs and 1 MB of RAM, 32-bit DES was considered "secure". It would take an eternity to crack it. Now, in the days when GHz multi-core processors, GPUs, and GB of RAM are the norm, DES is considered insecure and trivial to crack. Even 3DES is obsolete. So TPM is probably "secure" for now, but give it some time, and newer, faster processors and cutting-edge hackers will break it. They always do. It's simple to keep a secret: don't tell anyone. Once you do, it's not a secret any more.

boxfiddler

Convenience is nearly always, if not always, a dependency trap. Convenience is worth avoiding.

AnsuGisalas

Password managers. I must say I have trouble seeing how a mobile device can be the "trusted device". There are just so few ways of checking what it is doing. Who knows what it might be accidentally listening to? Not I.

snoop0x7b

TPM is inherently wrong because it can be used to stifle the free market and dictate what can and cannot be done with hardware that you own. Would Microsoft deny competitors signatures? Possibly. Would Apple? They've already done so on their phone platform. Here are some examples of where trusted computing is abused:
  • iPhone. Apple refuses to allow applications that compete with their applications into the app store, and you cannot get apps elsewhere (without exploiting the phone).
  • PS3. Sony removed the capability of running a third-party OS on the PS3.
Imagine a day in the future when you have to buy a special computer to be allowed to develop and run your OWN code. Who determines what code is signed, who can sign it, and what can run? I'll give you a clue: it ain't the user. The only way TPM can be viable would be if the owner has the "master key" and thus the ability to determine whether or not their machine runs unsigned code. I actually like this author's solution because it seems to seek a middle ground; however, what Microsoft has most recently discussed is fully trusted (see the Palladium project).

wdewey@cityofsalem.net

If you own the operating system, you can intercept the keystrokes and the video presented to the user. With these two combined, you can re-create and duplicate any action the user takes. Physical separation is really the only truly secure way to operate, which adds complexity and impacts convenience. Do some research on DEP (Data Execution Prevention) and then do a search on how to bypass it. It's not trivial, but it is possible. Bill

Tony Hopkinson

Be more secure by trusting someone more trustworthy? The only potential advantage is more granularity in terms of what you trust them with, if you can trust them.... Someone has to pluck age, and only age, out of each of our personal information; then we have to trust that the calc does what it says it does, that when the calcs are compared nothing's being piggybacked, oh, and that the mere request for the comparison isn't valuable information in and of itself.... Security or convenience isn't a technical argument, it's a business one. Convenience is easier to do, easier to sell, and for most more satisfying to purchase.

Spitfire_Sysop

I like the caveman analogy. The same problems existed. You put a dog (proactive port monitoring firewall, IPS/IDS) at the cave entrance and it might attack your friends (a false positive). The dog may also be distracted and not respond to an intruder (like a rootkit). There are convenient security programs, but it's hard not to feel one step behind in the arms race. Behaviour analysis is the future of secure computing. We are slowly moving away from signature-based detection. What do you think about ThreatFire? It's convenient and it can help provide a layer of security. http://www.threatfire.com/

Michael Kassner

A researcher may be onto something. What if we could have the features we want and security?

Michael Kassner

Some may agree that security is all or nothing. I suggest there are degrees. I see this approach as being another layer of protection, not the ultimate answer.

Michael Kassner

True, security is always an economic tradeoff: How much are you willing to invest to avoid what level of risk? However, that doesn't mean we shouldn't bother with security. In the physical realm, if someone really wants to burgle your house or steal your car, they can probably do it. And yet, we still use locks on both, and in practice, an astonishingly small percentage of houses are burgled or cars stolen. I don't think the inability to achieve perfection should prevent us from striving for something better than what we have today.

Michael Kassner

Cellphones are certainly growing more complicated, but they also include many security protections not standard on the desktop. You can also get away with using an untrusted cellphone to verify an untrusted computer (and vice versa) as long as you don't expect them both to be infected with malware at the same time, or at least that the malware on one isn't collaborating with malware on the other. If you're still uncomfortable with trusting the phone, then you could consider one of the special-purpose USB devices we discuss. They are likely to be more secure than a cellphone, but they're one more thing you have to carry around. You have to start somewhere, but that choice depends on your preferences.

Michael Kassner

What exists does not work. This approach could and should usher in all sorts of new ideas and devices.

Michael Kassner

Saying the TPM is inherently wrong because it can be used for bad purposes (or purposes you disagree with) is like saying that cryptography is inherently wrong because criminals can use it to hide their nefarious activities. At the end of the day, the TPM is just a tool. Companies can choose to use it for purposes you disagree with, but they can do that through other means (like software-based restrictions, obfuscation, or licensing agreements) as well. Part of our work is to demonstrate various ways in which tools like the TPM can be used to drastically improve security for end users. I see it as a huge win that hardware companies are starting to take security seriously, to the point where they're willing to make changes to their systems to accommodate it. As a side note, the TPM is an entirely passive device; it doesn't actively prevent you from doing anything. It also ships off by default, and only the owner of the machine can enable it. If you don't like the keys it ships with, you can generate your own. It also has a number of features for protecting privacy. Thus, you can always choose to use the TPM in ways that you feel comfortable with.

Michael Kassner

Establishing a trusted path (i.e., secure input and output) to the user is a great question and a challenging problem. In much of my work, I focused on server-based applications, where trusted path is less of a concern. However, some of my colleagues have looked at using an architecture like Flicker to protect password entry on websites. Essentially, when you start typing your password (there are a variety of ways to detect this), your keystrokes are encrypted inside of a Flicker session, and the OS only receives opaque blobs. In other words, the OS knows that keys are being pressed, but it doesn't know which ones. When you submit the webform, the code inside the Flicker session annotates it with the encryption of your password, which the webserver can decrypt and check. It's a bit convoluted, but it's one example of how you might make some headway on the trusted path problem. You can read more about it here: Bump in the Ether: A Framework for Securing Sensitive User Input. Jonathan M. McCune, Adrian Perrig, Michael K. Reiter. USENIX Annual Technical Conference, May 2006: http://www.ece.cmu.edu/~jmmccune/papers/mccunej_bite.pdf As a side note, DEP does mitigate one way in which to exploit buffer overflows, but unfortunately it doesn't stop the overflow, and there are other ways of using the overflow to your advantage that still work with DEP enabled.

Michael Kassner

If I understand correctly, using a secure device, homomorphic encryption, and garbled circuits would make any keystroke captures meaningless.

Michael Kassner

Tony's argument: "Be more secure by trusting someone more trustworthy; the only potential advantage is more granularity in terms of what you trust them with, if you can trust them...." The argument is more that you need to trust someone or something to start with; it's hard to build a secure system if it's turtles all the way down. If you make that trusted device small and simple enough, it's easier to believe/reason/prove that it's secure. You can then use that device to reason on your behalf and assess the security of other devices or services. I don't quite follow the argument about the calculator. In terms of deployment, I think it will depend on a combination of consumer demand, legislative action, and/or a desire to compete on the security axis.

santeewelding

Your point about not a "technical argument". I would add that "business one" is too narrow. We so much as open our mouths about it and argument takes flight.

Michael Kassner

Morphing homomorphic encryption and garbled circuits together is quite new. You mentioned ages. Dr. Parno's example pointed to the fact that you will not have to divulge your age.

JCitizen

I am impressed with some of Emsisoft's latest introductions to the market. I'm evaluating something called Mamutu, and am damned impressed at how fast it found all the DRM spies that were currently active in the RAM and CPU session. However, they may be integrating this into what used to be called A-Squared, which has changed names since the kernel update. Most folks would probably want that instead. It is totally behavior-based detection; I get some white-listing updates, but other than that, it hardly needs any updating at all. A very light, fast solution, and it can be used in conjunction with ANY current AV/AM solution.

AnsuGisalas

It sounds McGuffinesque, like a conjurer's trick: [i]And see, they put this "Trust" here, and now I pick it up like this, and I move it over here, and presto, the "Trust" is on both this thing and on that thing.[/i] He doesn't talk about trust like I think about trust. I think of something as either worthy of trust or not... he talks about it like it's a discrete binary quality to be transferred. The lack of compatibility between those views stops me from being able to understand his point. He may very well have a point, but it's out of my reach. But password managers work.

JCitizen

except hardware based. The system probably knows when KS is running, but the spy doesn't, unless he is trying to directly keystroke something into a form using RDP, and then he will notice the total gibberish he is creating! HA!! Of course the KeyScrambler logo is a dead giveaway. If Rapport is involved he won't see anything; he may not know a browser is open. At least the malware won't be aware of it anyway. I've tested KeyScrambler and it passes all six of the AKLT security tests set out there for such solutions.

wdewey@cityofsalem.net

If the computer can't understand the keystrokes, how can it process them (how does it get into and out of a secure mode)? Will they be hidden on the screen from the user, or can video capture software display them? Will devices come with a pre-packaged key? If not, there will have to be a secure key exchange. Bluetooth is supposed to do a secure key exchange, but at one time it was possible to force the devices to re-negotiate, and the negotiation could be used to compromise the key, after which the channel could be decrypted (not sure if this has been fixed). I would love to hear about a foolproof way to have just a single application that was always secure no matter the state of the OS, but I am really skeptical. The OS is what interfaces with every physical device. It is what sends data to and retrieves data from every device. Even if the login to a VPN client wasn't captured, it may be possible for a virus to use that secure channel once the path is open. DEP was touted as a way to keep buffer overflows from causing a vulnerability to be exploitable. Shouldn't that mean we don't need to worry about buffer overflows now? Why go through the work and effort of writing secure code? It's a more expensive process, which isn't convenient for users. Security drives up cost and complexity, which are the bane of convenience. Bill

Tony Hopkinson

If you have a framework and infrastructure in place where the circuits and math can be just called, that's simple; no different to the service user than: EncryptedData = Encrypt(MyData); The implementation of Encrypt is as opaque to most of us as a muddled circuit... Verifying it is even more highbrow. And if the service is provided by a third party, you've passed them your unencrypted data at some point, and proving that something naughty wasn't done with it at the same time is, again, beyond 99.99% of consumers. So it's still all about trust. Personally, if I was selling this, I'd really hit the granularity aspect. Not having to give hundreds of organisations 'all' our identifying information, yet have them still be able to identify us, with some sort of authorising third party, would be a huge boon. The thing is, there are a lot of organisations that derive huge financial benefits from having direct access to our identifying information. Customer apathy is not going to be the big problem; vested interest is. They'll probably buy up the legislature, and regulate against it...

Tony Hopkinson

convenience shoppers, is the estimation of value of our identifying information. Someone rings you up and asks you to confirm your ID, or you ring them; they've got your phone number, and they ask you for, say, the first line of your address, date of birth, and perhaps mother's maiden name or some such. Answer those questions and you are you; or maybe not.... Not being able to get something sorted without turning up with dental X-rays, sworn attestations from ten lawyers, and a dog that doesn't growl at you would be seriously inconvenient, though, wouldn't it? Security is and always will be a question of who you trust, when, and with what. This stuff is just yet another how.... The granularity is nice, but the big boys we want to secure ourselves from will find that thoroughly inconvenient.

Tony Hopkinson

You run a calc using your age. They run a calc using their age. Then some third party compares the calcs. So do you trust the calc provider? Sure as heck we aren't doing the math... It's got a lot of advantages over just passing all your identity to some party. A question, though: why are vendors going to provide this? Identity capture and data farming are very lucrative businesses.

Tony Hopkinson

authorisation from us. I trust Michael, he trusts you; all of a sudden, by trusting Michael, I've trusted you as well, even though I wouldn't know you from Adam... That's where the notion of granularity is so valuable. So I can trust Michael with my email address; if I get an email from you, one, I know I shouldn't have trusted Michael with it, and two, that's the only thing you know about me, so even if my trust was misplaced, my exposure to the consequences of that error is limited. Compare that to all the poor b'stards on the PlayStation Network... Names used only to illustrate a point; at no point am I saying either of you have proven unworthy of my trust... :(

AnsuGisalas

Organized referral, to help give the consumer the benefit of a large database of advice, without forcing the consumer to go out and collect that database by themselves. That requires stronger vetting of the advice, since advice (trusted recommendation) will have an impact not only at the node where it is accepted, but also at each node downstream from there. Insofar as this could be going on anyway, but with more lackadaisical verification and control of acceptance, I can see the point now. Thanks Michael, and thanks to Dr. Parno, too.

Michael Kassner

Trust is transferred all of the time. If a friend of yours gives you a strong recommendation for someone you've never met, you probably instill a higher-than-average level of trust in that person, because of your trust in your friend. In other words, you trust your friend, your friend makes a recommendation, and now you trust the new person (maybe not as much as you trust your friend, but still more than average). We're attempting to do the same thing, but with stronger, more precise guarantees. We also use trust (or trusted) in the sense of Trusted Computing Base, to mean the set of things or entities you trust for the correctness or security of some operation. In other words, if anything in that collection goes wrong (e.g., has a bug) or decides to betray you, you're out of luck. That's why we study mechanisms for minimizing that Trusted Computing Base as much as possible.

AnsuGisalas

Actually, I don't. Does this have to do with that "walled garden" and app whitelisting thing? That would explain why they think a cellphone can be a "trusted device". Do they also mean to make the "untouchable trusteree" a nastily entrenched root kit? They'd definitely need to hide it from malware, at least. I must say, it sounds like saying "Look, this guy here, in the black suit and dark shades, he's gonna follow you around, and you *know* you can trust him, coz he could kill you in a second... long as he hasn't killed you, you're good!"

Tony Hopkinson

Their entire argument in a nutshell. You seem to have a problem with it! Welcome to the awkward squad. :p

AnsuGisalas

What does trust mean in this context? Another thing: from what I understood of the article, the idea is to have an automated rubber stamp, which approves of things without user intervention. How many security updates do we see to fix "remote activation of code without user consent"? Why will removing even more consent, or making it implicit, help make things more secure?

seanferd

and back doors demanded by government or industry. And I don't trust TPM/Trusted Computing in the first place. It has issues, privacy issues included, and, in practice, lots of software installation and operation issues for consumers. And there are too many parties to trust, including parties I've probably never heard of. I don't really even want those on my machines. If there were an open source hardware solution to which I could agree, and over which I have some measure of control, OK. Or, for a proprietary solution like the current model, something portable and not device-integrated I might view with only mild distaste. This comment is less to do with Dr. Parno's original research, and more to do with TPM and the possible natures of hardware and code based on his research, once licensed to OEMs/vendors. And maybe the uses to which it could be put. (Unintended consequences and whatnot.) And I don't forget the Rutkowskas of the world, because some of them have interests which do not intersect with those of citizen consumers. I'd like to see better. Here's hoping.

AnsuGisalas

Clear as binary ink... who verifies the verifiers?

Michael Kassner

Homomorphic encryption and garbled circuits eliminate the need for trust, if I understand correctly.

Tony Hopkinson

You can trust that there is a way to not have the other guy know your age, and you can trust that the provider of this facility is using it...... Here's a question: if someone claims to be using 128-bit encryption on your data, do you verify it? Probably not, so you are trusting them. If you want something kept secret, don't tell anyone, not even a hint. Set up this compare-the-age thingy with a big enough database, and I can order it by age; I'm in it, and I know mine..... As you say, security is trust, and when it comes down to it, that's you, me, Facebook, and used car salesmen; naff all to do with tech.

seanferd

with anything. But you knew that. My personal feeling is that security is never really all that inconvenient anyway. It's just one of those things that exasperates a lot of people at first mention, kind of like paying attention while driving. There's a perceived inconvenience well beyond any actual inconvenience, and far beyond the benefits. No doubt, some security solutions are needlessly complex and inconvenient, or not really secure at all, but they tend to die off. I sometimes find the need to eat inconvenient, but I can't get away without eating for too long.

Michael Kassner

I always enjoy your thought-provoking comments. Thanks. The difficulty of achieving both is why I alluded to the caveman. We've been fighting this forever.

wdewey@cityofsalem.net

Hope I didn't come off as abrasive. That was not my intent. I just don't think security and complete convenience are possible together. I think that there is a balance, and that security doesn't have to be convoluted in most cases. I will try to find some time to look at his research, but this is kind of the holy grail of security. If a web provider can make an application secure without user intervention (i.e., the user securing their own machine), then it would mean big business for many online applications. Bill

Michael Kassner

I am by no means the expert. I will pass your comments on to Dr. Parno. I also would suggest reading his thesis. I think most of your questions are answered in it.

Michael Kassner

In my experience, there is a point where convenience is trumped by security. Ironically, due to the diligence of security people, that point is slow to arrive. But when it does, the tide will turn.

Tony Hopkinson

Or at least very rarely, very carefully, and only with highly reputable vendors. I think one of the things that sealed it for me was when I set up a business account, and the bank's advisor was telling me all about how secure their on-line stuff was, but didn't know what a key logger was.... Very few consumers demand security; most demand a facade, which is fortunate, because that's all they are getting. Security is in the same bag as quality, the "good enough" one, and that's a totally non-technical, consumer-driven appreciation.

Michael Kassner

Will demand it. If they can't trust the Internet, they will stop using it for purchases. It's not a trend, but I know several people who will not use it, even though I tell them that using their credit card anywhere is almost the same thing.
