Security controls are only helpful when the users and the service agree on what they mean. In this case, a financial services company is asking its customers to ignore the fact that the new login prompt does exactly what the old system vehemently warned them against.
Verifying a site vs. verifying the purchaser of a certificate
In the past, its login prompt used personalized security images. Customers selected an image from a set of easy-to-recognize pictures (a football, a sunset, a butterfly, etc.) and were expected to remember their choice and mentally double-check that the same image appeared every time they logged in. This technique has plenty of problems: users are often willing to type their passwords when the image is absent or even wrong. What matters here is the question the images attempt to answer: is the web site I'm talking to now the same one I registered with some time ago? Although a flawed security control, the images are a simple enough concept to explain to anyone.
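The mechanics of the control are simple enough to sketch. Below is a minimal, hypothetical illustration (the enrollment store and function name are invented, not from any real banking system): the server reveals the customer's chosen image before any password is requested, so the customer can check it first.

```python
# Hypothetical sketch of the personalized-security-image flow described
# above. The enrollment data and names are invented for illustration.

# Images chosen by each customer at enrollment time.
ENROLLED_IMAGES = {
    "alice": "sunset.png",
    "bob": "football.png",
}

def image_to_show(username):
    """Step 1 of login: given only a username, return the customer's
    enrolled image. The customer is expected to refuse to type a
    password unless this image matches what they remember choosing."""
    return ENROLLED_IMAGES.get(username)
```

The control's weakness is visible even in the sketch: nothing stops a phishing page from skipping step 1 entirely and asking for the password anyway, and users routinely comply.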
Not a like-for-like swap
This web site has eliminated personalized security images and bought itself an "Extended Validation (EV) Security Certificate." Moreover, the "click here" link simply links to a Wikipedia article on EV certificates. No bank customer will make sense of that article.
The customer is no longer verifying anything. In fact, a user cannot verify an EV certificate. The user's browser performs some opaque cryptographic checks as always. The extra information simply tells us that a certificate authority, at some unspecified time in the past, satisfied itself that it was dealing with a legitimate business entity. This is a completely different kind of assurance.
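To make the difference concrete, here is a small sketch of the only "extra" fact an EV certificate carries: the validated legal name of whoever bought it. The certificate dict below is hypothetical; it merely mimics the structure Python's `ssl.SSLSocket.getpeercert()` returns.

```python
# The cert structure below mimics what Python's ssl.getpeercert()
# returns; the values are hypothetical, not from any real bank.
def organization_from_cert(cert):
    """Extract the validated legal-entity name from a certificate's
    subject. This fact was established by the CA at issuance time --
    it says nothing about who is answering the connection right now."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

example_cert = {
    "subject": (
        (("countryName", "US"),),
        (("organizationName", "Example Bank, N.A."),),
        (("commonName", "www.examplebank.example"),),
    ),
}
print(organization_from_cert(example_cert))  # -> Example Bank, N.A.
```

Note that everything the function extracts is static: it was true (at best) when the certificate was issued, which is exactly why it answers a different question than the security images did.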
The login page's explanation is weak and confusing. Personalized security images attempted to answer the question "is someone fooling me right now?" Whether a certificate authority was satisfied (or fooled) at some point in the past is not a question customers care about.
Rhetorical security controls
Consider how broken personalized security images are: the legitimate web site made this switch one day. Suddenly, a web site that previously told users "never type your password unless you see your personalized security image" is telling them the literal opposite: "we have done away with your personalized security image, so go ahead and type your password." That is exactly what an attacker would write to trick users. This is the problem with rhetorical security controls: security mechanisms that require users to understand a narrative, buy into it, and play their part in it. When the narrative changes, it becomes clear that users were only playing along in the first place; they never truly understood the story.
This Google search highlights hundreds of thousands of high-profile commercial products and services that undermine worthwhile technical security controls simply by encouraging users to ignore warnings. Against this sort of background noise, it is very difficult for security professionals to promote sound security practice.
Security features that the target audience cannot use correctly are not just failing one application; they are failing the whole industry. Badly designed security features prevent users from making valid security decisions, and they perpetuate the stereotype that security is confusing, difficult, and a nuisance.
Moving towards a better narrative
Google is making an interesting decision in this regard. Currently, most web browsers display some kind of notice when a web site tries to use HTTPS but fails. If a site mixes secure and insecure elements, browsers warn. If a site presents a certificate with problems, browsers warn. But as long as a site makes no attempt at being secure (plain HTTP), browsers stay silent. Soon, Google Chrome will warn on plain HTTP web sites just as it warns on sites that try to be secure but fail.
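The before-and-after warning behavior amounts to a small decision table. This is a hypothetical sketch of the policy as described above, not Chrome's actual implementation; the state names are invented.

```python
# Hypothetical model of browser warning policy; state names are invented.
def browser_warns(site_state, warn_on_plain_http=False):
    """Return True if the browser shows a warning for a site in the
    given state. warn_on_plain_http=False models the old behavior;
    True models Chrome's announced change."""
    if site_state == "https-ok":
        return False                  # secure: no warning
    if site_state in ("https-bad-cert", "mixed-content"):
        return True                   # tried to be secure but failed
    if site_state == "http":
        return warn_on_plain_http     # made no attempt at security
    raise ValueError(f"unknown state: {site_state!r}")

# Old policy: the site that never tries gets a free pass.
print(browser_warns("http"))                            # -> False
# New policy: not trying is flagged just like trying and failing.
print(browser_warns("http", warn_on_plain_http=True))   # -> True
```

The asymmetry the old policy created is plain in the code: a site could avoid every warning simply by never attempting security at all.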
This change helps end users understand the situation: the two kinds of sites (those that try but fail, and those that don't even try) are similarly trustworthy. It still does not address the question "is this site trying to fool me right now?" There is a big difference between knowing the legal owner of a digital certificate and believing that the web application software is trustworthy.
Author of the Web Security Testing Cookbook and frequent conference speaker, Paco Hope is a security consultant with Cigital who has been working in the field of software security for almost two decades. Paco helps secure software in the financial, retail, and online gaming industries through security requirements, source code review and architectural risk analysis. He serves as a subject matter expert to (ISC)² for the CISSP and CSSLP certifications. Outside of secure software, he is passionate about privacy, user experiences, and data visualization. Paco fundamentally believes that security is less about wizardry and more about common sense.