Security

Fake security messages more believable than real warnings, research shows

Cambridge University researchers reveal why people believe malicious, fake security messages and ignore real warnings.

How do you react to the following warning when it pops up on your screen?

[Image: Security warning 1 - a legitimate browser "site's security certificate is not trusted" warning]

I have yet to find a person who always obeys the above warning, but the warning below has proven very effective, even though it's a complete fake. Why?

[Image: Security warning 2 - a fake Windows XP-style security alert]

This is a question two University of Cambridge researchers try to answer in their paper, Reading This May Harm Your Computer: The Psychology of Malware Warnings. David Modic and Ross Anderson, the paper's authors, took a long, hard look at why computer security warnings are ineffective.

Warning message overload

The researchers cite several earlier studies that provide evidence users are choosing to ignore security warnings. I wrote about one of the cited studies, authored by Cormac Herley, in which he argues:

  • The sheer volume of security advice is overwhelming.
  • The typical user does not always see the benefit from heeding security advice.
  • The benefit of heeding security advice is speculative.

The Cambridge researchers agree with Herley, writing in a blog post:

"We're constantly bombarded with warnings designed to cover someone else's back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?"

I can't think of a better example of what Herley, Anderson, and Modic were referring to than my first example: the "site's security certificate is not trusted" warning.

Warning messages are persuasive

Anderson and Modic also examined prior research on the psychology of persuasion, looking for factors that influence decision-making. They came up with the following determinants:

  • Influence of authority: Warnings are more effective when potential victims believe that they come from a trusted source.
  • Social influence: Individuals will comply if they believe that other members of their community also comply.
  • Risk preferences: People in general tend to act irrationally under risky conditions.

Use what works for the bad guys

To find out what users will pay attention to, Anderson and Modic created a survey with warnings that played on different emotions, hoping to see which warnings would have an impact. In an ironic twist, the researchers employed the same psychological factors already proven to work by the bad guys:

"[W]e based our warnings on some of the social psychological factors that have been shown to be effective when used by scammers. The factors which play a role in increasing potential victims' compliance with fraudulent requests also prove effective in warnings."

The warnings used in the survey were broken down into the following types:

  • Control Group: Anti-malware warnings that are currently used in Google Chrome.
  • Authority: The site you were about to visit has been reported and confirmed by our security team to include malware.
  • Social Influence: The site you were about to visit includes software that can damage your computer. The scammers operating this site have been known to operate on individuals from your local area. Some of your friends might have already been scammed. Please, do not continue to this site.
  • Concrete Threat: The site you are about to visit has been confirmed to include software that poses a significant risk to you. It will try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you.
  • Vague Threat: We have blocked your access to this page. It is possible the page contains software that may harm your computer. Please close this tab and continue elsewhere.

The research team then enlisted 500 men and women through Amazon Mechanical Turk to participate in the survey, recording how much influence each warning type had on participants.

People respond to clear, authoritative messages

Anderson and Modic expressed surprise that social cues did not have the impact they expected. The warnings that worked best were specific and concrete, such as messages stating that the computer would become infected with malware or that a certain malicious website would steal the user's financial information. Anderson and Modic suggest that software developers who create warnings heed the following advice (a rough sketch of a warning built on these points follows the list):

  • Warning text should include a clear and non-technical description of the possible negative outcome.
  • The warning should be an informed direct message given from a position of authority.
  • The use of coercion (as opposed to persuasion) should be minimized, as it is likely to be counterproductive.
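
Purely as an illustration of those three points, here is a minimal sketch in TypeScript of what a warning built along these lines might look like. The SecurityWarning shape and the wording are my own invention, loosely modeled on the study's "concrete threat" example; they are not taken from the paper or from any browser's actual code.

```typescript
// Hypothetical model of a warning that follows the researchers' advice:
// an informed, authoritative source; a clear, non-technical description
// of the negative outcome; and a persuasive rather than coercive action.
interface SecurityWarning {
  source: string;   // direct message from a position of authority
  outcome: string;  // concrete, non-technical negative outcome
  action: string;   // persuasion, not coercion
}

// Assemble the pieces into the text a user would actually see.
function renderWarning(w: SecurityWarning): string {
  return `${w.source} ${w.outcome}. ${w.action}`;
}

const blockedSite: SecurityWarning = {
  source: "Our security team has confirmed that",
  outcome:
    "this site will try to infect your computer with software that steals your bank and credit card details",
  action: "We recommend you go back to the previous page.",
};

console.log(renderWarning(blockedSite));
// "Our security team has confirmed that this site will try to infect your
//  computer with software that steals your bank and credit card details.
//  We recommend you go back to the previous page."
```

Note how the example leans on authority and a concrete threat, the two framings that performed best in the survey, while the closing line recommends rather than demands.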

The bottom line, according to Anderson and Modic: "Warnings must be fewer, but better." And from what I read in the report, the bad guys are doing a superior job when it comes to warnings, albeit for a different reason.


Comments
jimbaran

Typical MS error messages are neither informative nor useful to users who come from a non-security background. The good guys should learn from the bad guys? :D

Free Webapps

Just out of curiosity, was the WinXP notification image used for illustration purposes, and is that why the taskbar icons weren't shown? Typically that's the real giveaway, regardless of what the message says. Then there's always Taskman and msconfig to see what's actually running and starting.

Gisabun

Then, of course, you have these so-called security messages where you can tell they're fake because they don't look professional.

eaglewolf

The first flaw in this study is using 'Amazon Mechanical Turk' people. They should have recruited the people who are *really* impacted by this type of problem: Grandpa and Grandma in Kansas, the typical social media user who is constantly being 'trained' to do as they're told (recruited into sheep-dom), the new computer user who doesn't have a clue what these messages are and is scared of everything, and the 9-12 year old kid down the street who plays on the computer.

Anything less is biased and incomplete. You then solicit input from all the people being tested and ask what would be meaningful and helpful for them. Recreate your survey based on that input and run it again... and more times if needed.

The 'fake' image is so obvious to someone with experience, but a definite lure to someone who isn't.

Yes, obviously psychology plays a large part, but you need a group of researchers that includes those who are computer literate and those with experience working with end users at all levels.

Michael Kassner

@Free Webapps

It was the only for-sure warning I had in my possession. There are other, more recent examples, but they're copyrighted. I hope it got the message across.

HAL 9000 moderator

Just the bleeding obvious here.

I had to go back and look to see if I had read something that wasn't there, but the opening clearly says Cambridge University, which, from my memory, is in the United Kingdom, not the USA. So I would guess the researchers didn't want the "average American," just average users.

Though I do agree with your point that the subject pool is biased and the results shouldn't be considered "Definitive."

Col

Michael Kassner

@eaglewolf

Good points. Thank you for sharing them. I am curious: why was Amazon Mechanical Turk a bad choice?

eaglewolf

Hi, Col ...

Point taken. I should have said 'Grandpa and Grandma out in Falmouth!' :) But that's a reference to more senior citizens, and there should have been participants in both urban and non-urban settings. Facebook followers and the 'kid down the street/road/path' apply almost anywhere.

eaglewolf

@Michael Kassner

Hi, Michael ...

Amazon's Mechanical Turk is a crowdsourcing site that has, among other things, 'qualifications' to participate in certain studies.  I don't think TR likes links, but look up the mturk site on the web.

Check the Wikipedia article, too, which states:

"Social science experiments
Beginning in 2010, numerous researchers have explored the viability of Mechanical Turk to recruit subjects of social-science experiments. In general, researchers found that while the sample of respondents obtained through Mechanical Turk does not perfectly match characteristics of the U.S. population, it doesn't present a wildly inaccurate view either. They determined that the service works best for random population sampling; it is less successful with studies that require more precisely defined populations.[15][16][17][18] Overall, the US MTurk population is mostly female and white, and is somewhat younger and more educated than the US population overall."

Right there, you're away from your target user, although it does appear from their paper that they did have a 'proficiency' scale. I'm not sure what the proficiency was for; I didn't see that stated, but I didn't look thoroughly.

On their website, you can search the 'HITs'; I'd recommend it. And I noted the 'reward' for your efforts was getting as high as $0.50.

If you're going to do an 'expert' study and present statements as facts, then put out the effort to develop it properly and have your research base match the study criteria. That's my 'rub' with it. And in the initial pages of the report, there was extensive use of old studies, some as far back as the 1950s. History is nice, but in technology, that's equivalent to prehistoric times.

I'm not saying a study like that wouldn't have a lot of value; it would, but it would be far more credible if done correctly.

