
Are users right in rejecting security advice?

Michael Kassner considers a report that calls into question most of the security advice doled out to users, who, in a cost-benefit analysis, may be right to ignore most of it. What do you think?

Should you change your passwords often? What's the risk if you don't? Little did I know, listening to one podcast would cause me to rethink how I would answer those questions.

---------------------------------------------------------------------------------------

I now understand why my friend insisted I listen to Episode 229 of the Security Now series. He wanted to introduce me to Cormac Herley, Principal Researcher at Microsoft, and his paper, "So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users."

Dr. Herley introduced the paper this past September at the New Security Paradigms Workshop, a fitting venue. See if you agree after reading the group's mandate:

"NSPW's focus is on work that challenges the dominant approaches and perspectives in computer security. In the past, such challenges have taken the form of critiques of existing practice as well as novel, sometimes controversial, and often immature approaches to defending computer systems.

By providing a forum for important security research that isn't suitable for mainstream security venues, NSPW aims to foster paradigm shifts in information security."

Herley's paper is of special interest to the group. Not only does it meet NSPW's tenet of being outside the mainstream; it also forces a rethink of what's important when it comes to computer security.

Radical thinking

To get an idea of what the paper is about, here's a quote from the introduction:

"We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot."

The above diagram (courtesy of Cormac Herley) shows what he considers direct and indirect costs. So, is Herley saying that heeding advice about computer security is not worth it? Let's find out.

Who's right

Researchers have different ideas as to why people fail to use security measures. Some feel that regardless of what happens, users will only do the minimum required. Others believe security tasks are rejected because users consider them to be a pain. A third group maintains user education is not working.

Herley offers a different viewpoint. He contends that user rejection of security advice is based entirely on the economics of the process. He offers the following as reasons why:

  • Users understand there is no assurance that heeding advice will protect them from attacks.
  • Users also know that each additional security measure adds cost.
  • Users perceive attacks to be rare. Security advice, by contrast, is a constant burden and thus costs more than an actual attack.

To explain

As I read the paper, I sensed Herley was coaxing me to stop thinking like an IT professional and start thinking like a mainstream user. That way, I would understand the following:

  • The sheer volume of advice is overwhelming. There is no way to keep up with it. Besides that, the advice is fluid; what's right one day may not be the next. I agree: consider the US-CERT security bulletins for just the week of March 1, 2010.
  • The typical user does not always see benefit from heeding security advice. I once again agree. Try explaining why a strong password is important to someone who just had a password stolen by a keylogger.
  • The benefit of heeding security advice is speculative. I checked and could not find significant data on the number and severity of attacks users encounter, let alone data quantifying positive results from following security advice.

Cost versus benefit

I wasn't making the connection between cost-benefit trade-offs and IT security. My son, an astute business-type, had to explain that costs and benefits do not always directly refer to financial gains or losses. After hearing that, things started making sense. One such cost analysis was described by Steve Gibson in the podcast.

Gibson simply asked: how often do you require passwords to be changed? I asked several system administrators what time frame they used; most responded once a month. Using Herley's logic, that means an attacker potentially has a whole month to use a stolen password.

So, is the cost of having users struggle with a new password every month beneficial? Before you answer, you may also want to consider the bad practices users adopt because of a frequent-change policy:

  • By the time a user is comfortable with a password, it's time to change it. So, users opt to write passwords down. That's a whole other debate; ask Bruce Schneier.
  • Users know how many passwords the system remembers and cycle through that many, which lets them keep using the same one.

Is anything truly gained by having passwords changed often? The only benefit I see is if the attacker does not use the password within the password-refresh time limit. What's your opinion? Is changing passwords monthly a benefit or a cost?
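For what it's worth, the rotation trade-off can be put in rough numbers: if a password is stolen at a uniformly random moment within the rotation period, it stays usable for about half that period on average. Here is a small simulation sketch (the 30-day period mirrors the monthly policy the administrators described; all figures are illustrative):

```python
import random

def expected_attack_window(rotation_days, trials=100_000):
    """Estimate the average number of days a stolen password stays valid,
    assuming the theft occurs at a uniformly random moment within the
    rotation period."""
    total = 0.0
    for _ in range(trials):
        stolen_at = random.uniform(0, rotation_days)
        total += rotation_days - stolen_at  # days left until the forced change
    return total / trials

# With monthly rotation, a stolen password works for roughly 15 days on
# average; rotation shortens the attacker's window but never closes it.
print(expected_attack_window(30))
```

In other words, frequent changes only help if the attacker sits on the stolen password longer than the refresh interval, which is exactly the narrow benefit discussed above.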

Dr. Herley does an in-depth cost-benefit analysis in three specific areas: password rules, phishing URLs, and SSL certificate errors. I would like to spend some time on each.

Password rules

Password rules place the entire burden on the user, so users know firsthand the cost of abiding by the following:

  • Length
  • Composition (e.g. digits, special characters)
  • Non-dictionary words (in any language)
  • Don't write it down
  • Don't share it with anyone
  • Change it often
  • Don't re-use passwords across sites

The report proceeds to explain how each rule is not really helpful. For example, the first three rules are less important than they appear, as most applications and Web sites have a lockout rule that restricts access after a set number of failed attempts. I already touched on why "Change it often" is not considered helpful.

When all is said and done, users know that strictly observing the above rules is no guarantee of safety from exploits. That makes it difficult for them to justify the additional effort and associated cost.
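The lockout point can be made concrete: once failed attempts are capped, online guessing is throttled no matter how plain the password is. A minimal sketch, with a hypothetical three-strike threshold:

```python
class LockoutGuard:
    """Track failed logins per account and lock after a threshold,
    which caps an online attacker at `max_attempts` guesses."""

    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts
        self.failures = {}
        self.locked = set()

    def attempt(self, account, password, real_password):
        if account in self.locked:
            return "locked"
        if password == real_password:
            self.failures[account] = 0
            return "ok"
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_attempts:
            self.locked.add(account)
        return "denied"

guard = LockoutGuard()
for guess in ("123456", "letmein", "qwerty", "hunter2"):
    print(guard.attempt("alice", guess, "hunter2"))
# The fourth guess comes back "locked" even though it is correct: the
# attacker got only three tries, which is why composition rules matter
# far less against online guessing than the rules imply.
```

This is the report's argument in miniature: the lockout, not the exotic character mix, is what defeats guessing at the login prompt.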

Phishing URLs

Trying to explain URL spoofing to users is complicated. Besides, by the time you get through half of all possible variations, most users have stopped listening. For example, the following slide (courtesy of Cormac Herley) lists some spoofed URLs for PayPal:

To reduce cost to users, Herley wants to turn this around. He explains that users need to know when the URL is good, not bad:

"The main difficulty in teaching users to read URLs is that in certain cases this allows users to know when something is bad, but it never gives a guarantee that something is good. Thus the advice cannot be exhaustive and is full of exceptions."

Certificate errors

For the most part, people understand SSL and the significance of https, and are willing to put up with the additional burden to keep their personal and financial information safe. Certificate errors are a different matter: users do not understand their significance and mostly ignore them.

I'm as guilty as the next person when it comes to certificate warnings. I feel like I'm taking a chance, yet what other options are available? After reading the report, I am not as concerned. Why? Statistics show that virtually all certificate errors are false positives.

The report also notes the irony of thinking that ignored certificate warnings will lead to problems. Typically, bad guys do not use SSL on their phishing sites, and if they do, they make sure their certificates work, not wanting to bring any undue attention to their exploit. Herley states it this way:

"Even if 100% of certificate errors are false positives it does not mean that we can dispense with certificates. However, it does mean that for users the idea that certificate errors are a useful tool in protecting them from harm is entirely abstract and not evidence-based. The effort we ask of them is real, while the harm we warn them of is theoretical."

Outside the box

There you have it. Is that radical-enough thinking for you? It is for me. That said, Dr. Herley offers this caveat:

"We do not wish to give the impression that all security advice is counter-productive. In fact, we believe our conclusions are encouraging rather than discouraging. We have argued that the cost-benefit trade off for most security advice is simply unfavorable: users are offered too little benefit for too much cost.

Better advice might produce a different outcome. This is better than the alternative hypothesis that users are irrational. This suggests that security advice that has compelling cost-benefit trade off has real chance of user adoption. However, the costs and benefits have to be those the user cares about, not those we think the user ought to care about."

Herley offers the following advice to help us get out of this mess:

  • We need an estimate of the victimization rate for any exploit when designing appropriate security advice. Without this we end up doing worst-case risk analysis.
  • User education is a cost borne by the whole population, while offering benefit only to the fraction that fall victim. Thus the cost of any security advice should be in proportion to the victimization rate.
  • Retiring advice that is no longer compelling is necessary. Many of the instructions with which we burden users do little to address the current harms that they face.
  • We must prioritize advice. In trying to defend everything we end up defending nothing. When we provide long lists of unordered advice we abdicate all opportunity to have influence and abandon users to fend for themselves.
  • We must respect users' time and effort. Viewing the user's time as worth $2.6 billion an hour is a better starting point than valuing it at zero.
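The victimization-rate point in the first two items can be reduced to back-of-the-envelope arithmetic: advice is rational only when the effort it demands of the whole population is smaller than the expected loss it averts. Every figure in the sketch below is an illustrative assumption, not data from the paper:

```python
def advice_is_rational(users, victimization_rate, avg_loss,
                       minutes_per_user_per_year, wage_per_hour):
    """Compare the population-wide cost of following a piece of advice
    with the expected loss it could avert (all inputs are assumptions)."""
    effort_cost = users * (minutes_per_user_per_year / 60) * wage_per_hour
    expected_loss_averted = users * victimization_rate * avg_loss
    return expected_loss_averted > effort_cost, effort_cost, expected_loss_averted

# Suppose 1 million users, 0.1% victimized yearly at $500 each, and advice
# costing 5 minutes a week (~260 min/yr) at a $20/hr value of time:
ok, cost, benefit = advice_is_rational(1_000_000, 0.001, 500, 260, 20)
print(ok, cost, benefit)  # False: ~$86.7M of effort versus $0.5M averted
```

Under numbers like these, rejecting the advice is the economically sound move, which is exactly Herley's argument; with a higher victimization rate or cheaper advice, the inequality flips.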

Final thoughts

The big-picture idea I am taking away from Dr. Herley's paper is that users have never been offered security. All the advice, policies, directives, and whatnot offered in the name of IT security only promote reduced risk. Could changing that be the paradigm shift needed to get information security on track?

I want to thank Dr. Cormac Herley for his thought-provoking paper and e-mail conversation.

About

Information is my field...Writing is my passion...Coupling the two is my mission.

175 comments
boxfiddler

Or does it fall on deaf ears? I lean to the latter.

Kent Lion

I wonder if another aspect of the current IT security environment is examined by Dr. Herley. My McAfee Internet Security does the following:

1. If I don't use a machine for over a month, the first time I turn it on, instead of immediately updating itself (as it normally does), it tells me my system is at risk until I take the time to go in and tell it to update itself.
2. Same scenario; even if I ran a full system scan the day I last used the machine, my system is now at risk, and must be scanned (at considerable cost). I wonder how they imagine something got on my machine during the time it was turned off.
3. Even if McAfee is set to scan everything that is written to disk, including recursive scanning in compressed files and protection using heuristics, a startup scan is always required. Are they telling me they don't have as much confidence in the ability of their software to catch something being inserted into my system as in its ability to find something that's already there? Perhaps, but why would I have difficulty believing it?
4. I've used other security suites, and they've been no different. One and all, they appear to use more of my computer's CPU resources than anything I actually do on the machine; so it appears I'm constantly being encouraged to buy a faster computer so I can run security software.

rsantuci

I think some of it can be overkill - especially when some pundits espouse only doing your banking on a dedicated PC. The only way to be 99% safe is to not use the internet, shred every piece of identifiable trash and use a PO Box. All we can do is keep preaching in the hope that we are getting through to some degree.

dark_angel_6

My personal opinion for a while now has been that if users are rejecting security advice, then the advice needs to be revised and explained to users in a way that they can understand or relate to. I would be the first to admit that if our network were compromised due to inadequate user-level security, that would be entirely my fault as administrator, as I am the one being paid to ensure security, not the user. If I as an administrator cannot get the average user to understand the importance of these security measures, then either:

1. I have failed to properly explain the situation.
2. My advice is misguided and I need to re-evaluate the necessity of such advice.
3. I have over-estimated the user's level of understanding.

And number 3 on that list is in no way meant to make the user sound incompetent. I work in a laboratory, and I wouldn't have a clue what half of the people working here are talking about, as I am not a chemist. I don't expect any of the chemists or lab techs to understand everything I do.

ContactChrisDirect

My first "have to write something" after reading this article and its comments, and the base paper written last year, is to ask: what has become of the "writing secure software would have fixed all of this years ago" thinking? The base paper was written by a person with Microsoft directives, so this was my first thought.

Deadly Ernest

if we could educate the companies making the security systems to write nice tight code with good security built into the kernel and the OS from scratch. But what are the chances of that? Even less than getting users to do heavy security work.

Deadly Ernest

cost (effort and money) against return - with security it includes a risk-threat against cost-of-defence component. Be it a business or a home user, they will weigh the dollars and effort against the probability of getting hit and the damage that could be done if hit. For many people, the highest benefit is to take the risk, as it takes up too much time otherwise. I know people who build their home system, ghost the ready-to-run image on an external hard drive, then run it without all the protections. When it gets trashed by malware, they just stick the ghosting software DVD in and reimage the system. Doing that once a month is easier on them than trying to manage all the security stuff they'd need. They used to rebuild the system about once every six weeks; since they switched to a Unix-like operating system (SuSE) they haven't had to do it yet, and they switched fourteen months ago.

frylock

When is the last time you got a signed email from your financial institution? How many companies bother to even take the basest anti-phishing steps, like DKIM?

Neon Samurai

Ideally, password time-to-live should be based on how long it takes to break it. If your complex password's hash takes a month to break, then you should change it monthly. The idea being that someone getting hold of the hash value and breaking it results in a password you've already changed by default. Of course, this is not feasible for most users, so the 90-day, four-changes-per-year cycle remains the average where I've been able to inquire.

Three strikes protects from guessing against the login prompt. Length and complexity protect against rainbow-table attacks on the hash value. Time-to-live protects against brute force against the hash. Avoiding single sign-on protects against a single breach providing access to everything available to the user account. Sadly, passwords remain the most popular metric for one reason or another. When one can use something better, they still need the password as a backup option, so the weakest link remains in place.

Certificate errors are another point of grief for me. To get a strong certificate you have to approach a trusted third party and pay a substantial fee, along with the checks that the third party will do to verify that you're a valid customer. Weak certificates signed by a third party are easily broken by MITM methods. Self-signed certificates can be strong but lack the third-party verification, so they throw up a browser warning and may have chaining vulnerabilities. I personally make a point of only ever temporarily accepting a certificate exception, but an average user is not going to take five seconds to add the exception for each new session/visit to the domain. Involving a third party in a two-party communication is also never the better approach, but it's the best we have at present, it seems. Now to go let the article digest a little.

Kim SJ

I totally agree with everything Dr. Herley says. And it should be a wake-up call for security experts everywhere. It is high time that we stopped "blaming" users for security lapses, and fixed the underlying problems. Let's start by building and installing a replacement infrastructure for sendmail which actually authenticates the origination of email!!

pgit

This is good info, so far as abstraction will take you. Unfortunately none of my users are abstractions. It usually goes like this: I install and enable noscript in firefox, and inform everyone what it is and how to use it (which 90% of the time is "ignore it"). Every one of them considers it to be a burden, even though they can get what they need while ignoring it 90% of the time. 80% of the users eventually disable it or use IE or some other browser. 5% of that 80% get horrifically infected, bringing all work in the (small) office to a screeching halt. They use firefox and happily acknowledge noscript from that point forward. Nothing you can do about human nature. And it would be unprofessional of me to stop "harping" on this, because every one of those infected 5% of the 80% heard the implied "told you so" loud and clear. Thankfully I don't deal with any monoliths out there that have blanket password policies or other of the matters this fellow actually measured. (In fact, at the only place I came by that had passwd expirations set by a previous tech, I was informed that they always just reset the same password! I disabled the monthly prompts...) Great food for thought as usual, Mr. K. You never fail to be pertinent... even if it's to give me a "there but by the Grace of God go I!" moment. =D

rufusion

Retiring obsolete advice is a good idea. This makes sense to me: If you have an account lockout function after X incorrect attempts, you don't need superstrong passwords (you just need to block the weakest ones, like "password"). But password strength rules, like many outdated security requirements, are written into our contractual and regulatory obligations. At the point where "security advice" becomes "security mandate," and then on to "technically enforced requirement" (to show that you're meeting the mandate), it is no longer a matter of education. And with security pundits constantly saying, "Security requirements don't go far enough," good luck getting regulators to *revoke* security requirements, even obsolete ones.

mhbowman

If they are at work, no. The systems and the data on them belong to the employer, and they are obligated as a result. We have policies in place to patch systems after testing, forced password changes, and proof of ID before resetting them. If at any point antivirus is removed from your PC, restrictions prevent it from joining the domain in the first place. If you're on your PC at home and you choose to forego all security, that's your business. I just hope you aren't stupid enough to have any important personal info on your computer as well.

jmarkovic32

Have you noticed how, every time something bad happens and people die or get hurt physically or financially, the government always swoops in with draconian measures to prevent the problem from happening again or to reimburse those who lost something? There are two types of people in that situation: 1) victims who receive a one-time benefit from such measures, and 2) non-victims whose lives are now burdened by the new measures. Most security measures (mandated by government) are reactionary, formulated almost overnight and not well thought out. They come from the "Well, we have to do SOMETHING!" mindset after a disaster or breach. They almost always end up solving only one problem, whilst creating a host of other problems.

JCitizen

it is both, until they get hit straight between the eyes with a cannon ball. Then they finally wake up! At my last contract we had a wonderful lady area manager, that would have made a damn good colonel in the Army, that had a way of relating the need for security to our fellow workers. Besides the HIPAA thing staring us in the face, we had excellent leadership and fantastic training with excellent follow up. I don't think I will ever get another opportunity to work at a better organization. I was laid off in 2005 and went into business for myself, but they never replaced me. Money is just too tight.

JCitizen

and scanning may soon be obsolete. I never have to scan when using Avast. It ALWAYS kills the malware as soon as it makes a move. Now there have been AV/AS aware malware that have remained dormant - waiting for a vulnerability or opportunity to strike, but those stay in the temp files and are removed by the next CCleaner scan. NIS 2010 is the only "suite" I've tested that came even close to filling the job. But as far as I'm concerned the suite model is failed and obsolete too. In my honeypot lab I have proven to myself time and again that standalone in-depth defense is one of the only strategies a home consumer can rely on. Enterprise is a different animal, but we didn't rely on one factor there either.

Michael Kassner

I like your attitude, it is respectful and user-centric.

RU_Trustified

While there are some improvements in secure coding, bugs will never be eliminated and there are billions of lines of legacy code, many from in-house applications that don't have vendor support. Adrian Lane (Securosis) commented last week that it would take large enterprises literally years to re-write their application code. This is a huge problem.

RU_Trustified

Again, what is the incentive for them to do that? If one did it, there would be incentive for the others, or they would just give it up. You are correct though, that networks of high assurance systems would make security much easier.

JCitizen

my financial institution never contacts me by email. It is against policy, for good reason. They will call or send snail mail though. Fortunately they never ask for personal information on those calls, and never include any data other than simple alerts, questions, or offers and services information. Your point is well taken though, and very true!

tracy.walters

It's more than just time to break... that is one criterion, but users also share passwords, use them to log in to multiple systems, etc. If a user provides the same password for multiple disparate systems, someone from one of the systems may have the ability to use it on another system... for instance, would you trust your Yahoo mail password to be the password for the login account to your bank? Not a good idea. There will always come a time in an office when there is some kind of an emergency and a user shares their password with a coworker. If that is the same password they use to get into the bank, their personal email, their home computer, the computer at the second job, etc., all those things are compromised.

Michael Kassner

The doctor's point is that you are spending an inordinate amount of effort on certificate errors, when virtually nothing bad results from them. The bad guys are going to have good certs or not use https.

Eoghan

The US Postal Service is broke. For years it has been tossed around that the USPS should become the top-level CA for the US; every other country could do the same. Then users could pay a small fee to 1. obtain a cert, send digitally signed or encrypted mail, 2. via the post offices, 3. with a marque attached to the mail by the post office, 4. oh, why don't we call it a stamp? Think of the numbers of email traversing the internet daily. We could all spend 2 to 4 cents per message, save the Postal Service, and get better authentication and integrity of our email. Oh, bulk email? No more special prices for them; let them pay 10 cents per address. I'm guessing there wouldn't be a lot of spammers mailing out to millions of addressees at a time if they did that.

Michael Kassner

Gotta love your perspective. It has "been there, done that" written all over it, and that's why I listen. Your comments actually align with what Dr. Herley is saying the problem is. We just try to reduce risk, not create security.

Michael Kassner

I really did not progress that far in possible issues. I still subscribe to the view that they are only trying to reduce risk, not create actual user security.

Michael Kassner

Question, how often do you require users to change their passwords?

bharrington

I completely agree with your assessment, Arsynic. The biggest problem that I notice in most security counter-measures, because that is all they really are, is a lack of root-cause focus. We focus on the outcome of security problems, not what allows them to occur. A series of band-aids is by no means a wall, and will fail eventually. Thinking of solutions requires going deep... and the security firms just don't do that.

Ocie3

an "in-house application" is one that has been developed and maintained by the person or organization that uses it, so there is no "vendor" to support it. If the developer licenses other parties to use that application too, then the developer becomes the "vendor".

Deadly Ernest

and done nothing in that time. They plan a new release of Windows every five years, so they do have time to work on it.

Deadly Ernest

being pushed by Microsoft is NOT the development of secure software, but the development of a system where you can identify the person and physical location of the machine sending each transmission on the Internet. The intent being to stop distributors of malware by knowing exactly who they are and where they are. Mind you, they've not yet addressed how they'll be able to stop good hackers from spoofing the IDs, and those are the main people the Trusted Computing concept is advertised as dealing with. IMHO, Trusted Computing, as pushed by Microsoft, has absolutely nothing to do with identifying bad guys, and is all to do with vendor lock-in to Microsoft products. Once Microsoft gets around to writing a new version of Windows from scratch, more than 90% of all existing vulnerabilities will vanish overnight. If they also write it to use the industry-standard command set, there goes about another 5% of known vulnerabilities. Then, if they do the right thing about NOT putting in back doors to allow Microsoft programs to shortcut kernel access security systems, we would have a total of 99.99999% of known vulnerabilities and exploits removed.

frylock

so I did a little survey, just checking some emails for DKIM signatures. Signed: facebook.com classmates.com proflowers.com Unsigned: Several major banks. Wow. Now, I know DKIM hardly solves everything, it's only a small piece of a layered security approach. But it provides *something*, and it's an even smaller (almost trivial) effort to implement. Oh, and the account login form for one of those banks is on a non-SSL main page (you get SSL after login). The login form does have a little static GIF of a padlock icon though, so it must be secure :)

Neon Samurai

hehe.. sharing passwords is also a user side problem but that's one we can't apply a technological solution to like we can with the other metrics. For using different passwords at logins, it goes back to a good password manager. It's still a user side issue as we can't verify that they've used a different password at systems outside our control. It's the single-signon issue extended to every site they use a common password on. Pen-testers and criminals thank them for it. :D

Neon Samurai

Also, sites change. I do the same with noscript; always temporary even for sites I visit daily. java certs when I visit secunia or some such similar; temporary also. It's rare that I have my browser take a permanent exception setting. But, for me that's part of it all as I'm a security geek. It's the ideal versus what most users are going to do because they perceive no value in it even if there is some.

andrejakostic

In my country, the postal service is already offering digital certificates for legally binding signatures, but no one uses them in e-mail. The problem is the same: why should I trouble myself by obtaining the signature?

RU_Trustified

I don't think it is possible to reduce risk in a sustained and meaningful way as long as we are using low assurance systems. The bad guys will always be able to work around band-aid fixes. The big problem with risk mitigation is that users are left ultimately to trust untrustworthy systems, and since it only takes one vuln....

dirtylaundry

You keep repeating this, but reducing risk IS security. Short of staring at the firewall 24/7 to be sure nothing bad is getting through, there is only so much security you can implement. It is a matter of degrees. Reduce the risks = make it harder for predators. Educate users. Many companies merely hand employees a TO DO and NOT TO DO list with regard to their work computers. Yet they fail to have weekly meetings or conferences set aside to explain and show real-time reasons for those lists. Educating, not just lecturing. Interactive explanations, not just warnings. Many employees have shares (stockholders) or have some kind of investment in their companies, and explaining how it affects their wallets directly if they fail to follow the lists of reduced-risk security will most likely have a stronger impact. Far-fetched consideration: docking their pay for every time they fail to follow security protocol would also have an impact. However, if after following every single security mandate the systems somehow still manage to get broken into... that would be another topic for discussion, since I don't think that has ever happened (everyone following the mandates, I mean).

Ocie3

"actual user security"? To begin with, you cannot ensure that any computer [i]as a physical entity[/i] will always function reliably, if only because even the simplest electronic components will not function indefinitely. Malfunctions can and will affect data integrity. How can you have "actual user security" while using an inherently insecure device? Steve Gibson has alleged that "secure" operating systems have been developed, but few (if any) have been adopted. However, if memory serves, he has yet to identify any of them.

Michael Kassner

That was my conclusion after reading the paper. The approach right now is to reduce risk, not ensure security.

JCitizen

to check the certificate: if it pops up red, I copy the URL into the checking site at networking4all.com (in English) to see what is wrong with it before I make a decision about it. Now that I've had an account broken into, I will probably never go to those sites; I'll just call my order in.

shardeth-15902278

It needs to be cheap, it needs to be easy, and it needs to be universal. Probably something based on open-source concepts (i.e., complete transparency in process and code). Something that is trivial to interface with for confirmation purposes, and devoid of bells and whistles, which would add cost. Most labor and physical resources would come from volunteers (again, to keep cost low), save for a very small number of reasonably paid officers, selected by the community, who would be responsible for the necessary gate-keeping. These individuals would of necessity be a team (to provide checks), of high integrity (as reputed by the community), subject to having their lives and actions monitored more closely than Michael Jackson's, and subject to execution, or something very close to it, should they ever demonstrate the slightest lapse in integrity.

Michael Kassner

The whole idea of the report and my article was to step away from current attitudes; that is what the NSPW's charter mandates. We all know what exists today is not working, so band-aids aren't going to help. Besides, the bad guys hope we stay with the status quo.

Now, if you want to debate risk versus security, we need to define the terms. Here are some definitions:

security: freedom from danger, risk, etc.; safety
risk: exposure to the chance of injury or loss; a hazard or dangerous chance

The operative words are "freedom from risk", not "reduced risk".

Ocie3

We should know the probabilities that you mention in your remarks by now!! If we do not, then it has not been for lack of opportunity to find out, has it? Is no one collecting any data that shows, for example, a statistically significant sample of accounts to which access is controlled by individual, unique passwords, and the number of those accounts that have had their passwords cracked? Perhaps there have not been any reports recently, but I can recall reading research reports about that when [i]computer science[/i] was in its early days. That is also when the rules for "strong passwords" were developed! Sometimes it amazes me that we are still engaged in the same debates, especially about passwords, as we were 40 years ago. Maybe there really [i]is[/i] "nothing new under the sun"!

You don't have to be a mathematician to recognize that a password that has at least eight characters and (1) includes at least one upper-case letter, one lower-case letter, one digit, and one other character that is not a letter, digit, or blank space, and (2) is not a word in "any" language, [b]will[/b], on the face of it, quite likely require much more time and computing power to crack by brute force than a password that is six characters long and contains only upper-case or lower-case letters, or only digits. The goal is to make the cost of cracking the password greater than the benefit, of course, not to make a password that is unbreakable by any means. However, if you employ a Yubikey (http://www.yubico.com/products/yubikey/), then it is feasible to have a password that is 41 characters long. In fact, many difficult aspects of security are feasible with a Yubikey. :-) Nonetheless, passwords can be compromised by keystroke loggers, by gaining unauthorized access to data files that contain them, and by phishing.

I cannot think of any defense that cannot be overcome in some way or another, even when the attack must include a betrayal of trust. So maybe we do need to adopt something else for access control instead of passwords. But it would have to be something with a lower risk of compromise, and it is not likely to offer "security". If you do not know the probabilities, then you cannot identify a security measure for which the probability of compromise is zero.
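The relative cracking cost being argued here is easy to put numbers on with a quick back-of-the-envelope calculation. A minimal sketch (the alphabet sizes of 26 lower-case letters and 94 printable characters are illustrative assumptions, not figures from this thread):

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a password drawn uniformly at random
    from alphabet_size symbols at the given length."""
    return length * math.log2(alphabet_size)

# Six characters, lower-case letters only (26 symbols).
weak = keyspace_bits(26, 6)          # ~28.2 bits
# Eight characters, full printable-ASCII set (~94 symbols).
strong = keyspace_bits(94, 8)        # ~52.4 bits

print(f"6-char lower-case: {weak:.1f} bits, {26**6:,} guesses worst case")
print(f"8-char full set:   {strong:.1f} bits, {94**8:,} guesses worst case")
print(f"Brute-force work ratio: {94**8 / 26**6:,.0f}x")
```

The point is not the exact figures but the shape of the argument: each added character multiplies the search space, so the eight-character mixed password costs an attacker tens of millions of times more guesses than the six-character one, which is exactly the "make cracking cost exceed the benefit" goal described above.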

RU_Trustified

Freedom from risk is not likely possible, but what should be striven for is knowing when, and how much, systems can be trusted. We don't even really know that right now, as long as metrics are based on probabilities. As long as we base the security model on patching systems that are insecure by design, there is always some probability of an unknown vulnerability, a potential exploit, and risk.

CHerley

This is the point that Michael was highlighting from the article. If we could quantify what users get in exchange for their effort, the sell might be easier. How big is the risk of having a password brute-forced if you have a six-character simple password? We don't know. How much does the risk improve if you go to eight characters, strong? We also don't know. Thus we are asking a certain cost (effort) of the user in exchange for the reduction of an unknown risk by an unknown amount.
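The paper's economic argument can be sketched as a simple expected-value comparison. This is a hypothetical illustration only; the function name and all the dollar figures are invented for the example, not drawn from the paper:

```python
def advice_is_rational(effort_cost, attack_prob, loss, risk_reduction):
    """Follow the advice only if the expected savings exceed the
    effort it demands. In practice all four inputs are unknown,
    which is exactly the difficulty described above."""
    expected_benefit = attack_prob * loss * risk_reduction
    return expected_benefit > effort_cost

# Hypothetical numbers: $10/year of user effort, a 1% chance of a
# $200 loss, and advice that halves the risk. Expected benefit is
# $1, so rejecting the advice is the rational choice.
print(advice_is_rational(10, 0.01, 200, 0.5))  # → False
```

Change any input and the verdict can flip, which is why the advice debate is unresolvable while the probabilities remain unmeasured.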

CG IT

The security software makers sell the illusion [or, if one wants to define it that way, the lie] that their product will make computers connected to the worldwide public network secure. Users believe the advertising and marketing the security software makers use to peddle their products, so they buy it with the notion that they're secure. When they become the victim of risky behavior, they blame the security software manufacturer, not themselves, and want compensation for having "lost" something, be it their money, their information, or their computer that now doesn't perform well.

While we educate users about the risks, we have to realize that there is a payoff for the risk takers: free movies, free music, free software, or the "cool factor" that they want but don't want to spend their money to get legitimately, or don't want to appear un-"cool" to peers. These risk takers usually follow the Pareto principle: 80% of the problem is caused by 20% of the people. So the risk takers who habitually engage in risky Internet behavior constitute approximately 20% of Internet users. It's the other 80% of users that we, as computer security folks, need to communicate with, from their perspective. The criminals know this as well. That's why they try to spoof "well known sites" or "trusted sites". The majority of Internet users don't go to risky sites, so for criminals to get the maximum yield from stealing other people's "stuff", they go where the majority go. That's why Microsoft operating software appears to have so many security vulnerabilities: 90% of the people in the world use it, so criminals go where they can get the most profit.

While all the software manufacturers, Internet service providers, and online retailers spout and peddle security to consumers so they will buy the products or use the services, they don't want IT security people to sound the alarm or educate consumers that the Internet (I like to use the term "worldwide public network") is risky, because the criminals are always working to find ways to steal your money, and they can do it from a world away.

Ocie3

If the probability of unauthorized access to a computer system is reduced from 0.1 to 0.00001, then it seems to me that everyone would accept that as being as "secure" as it can be made in that regard. As I have stated in other posts, I believe that security is an illusion. The only thing that is possible is a decreased probability that an undesirable event will occur. At some point, though, the cost of reducing the probability exceeds the benefit, irrespective of how cost and benefit are evaluated.

tracy.walters

Michael, I seriously have to disagree with you on this. There is no such thing as freedom from risk; it is all shades of grey, not black and white. There is no situation anywhere that does not have some element of risk, all of the time.

Michael Kassner

So, the debate cannot go on, as we do not agree on the terms. It also seems you are saying freedom from risk is not possible. I don't think that way, and I don't think Dr. Herley feels that way either. I feel that if we don't venture outside the norm, nothing will change, for sure.

Eoghan

All of us security geeks know the answer: a perfectly secure computer is one that is turned off, disconnected from the network and power lines, boxed up, and stored away in a vault. Change any of those factors and you raise the risk. Security is the mitigation, not the elimination, of risk. If a resource is unusable (like the secure computer above), what is the point? After all, most of us wake up daily, secure under the covers in some sort of resting place. Just the act of changing our orientation (usually from horizontal to vertical) is a risk; a lot of people die from strokes getting out of bed. So life is a risk. We all have to define what level of risk is acceptable to us and secure ourselves from the unacceptable risks.

Michael Kassner

It is a good debate, actually. Security is typically defined as freedom from risk, not reduced risk. Big difference in my world. I agree, as you seem to want new thinking brought into the mix. That was the whole point of the report.

Eoghan

Sure, I know, CIA blah blah blah. But the truth is, reduction of risk IS security. Even the most secure crypto will be broken, given time. For a particular instant in time, it is unbreakable and secure.

I really wanted to comment about the human aspect, though. The problem is that we are still in the stone age of computing. We have a nice tool that has a history of becoming smaller and faster, but we haven't made that much progress. I want the computer that I talk to. I want HAL to tell me (my friend, actually), "I'm afraid I can't do that, Dave." I have to log in. Why? Where are these wonder computers that can sense my height, weight, blood pressure, retinal design, the capillaries in my face, my pheromones, and THEN log me in, without me telling them? When we have that, we will have security. But it's not going to come from hand geometry, passwords, PINs, tokens, etc. It is going to need to be able to perform a quick airport-type scan of you and determine that it is you. Certificates from my friends, for me to mail securely to them, will need to be able to lock my mail to them, and likewise.

Come on, all you rebounders, X gens, Y gens, and whatevers: get off your butts, stop designing toys, cartoon movies, and games, and give me something useful. Give me security by just being me. Until then, I hope you all have to drive around on round stones and wooden cart wheels; you are keeping us in that age with our computers.