A Rocky Mountain Bank employee accidentally sent an email containing sensitive information to the wrong Gmail address, very nearly disclosing a customer's private data to the wrong person. Luckily for the bank, perhaps, a court order was issued instructing Google to shut down the recipient's account and delete the missent email, preventing the information from ever being read by the wrong person. Unfortunately, an entirely innocent third party -- the person whose email account was shut down -- was subjected to serious inconvenience for the sake of cleaning up the bank's mess. You can read about the affair in Elinor Mills' CNET article, "Misfired e-mail was never viewed by Gmail user."
None of this had to happen. The Wyoming-based bank's data security policies could have prevented the problem -- if not for the fact that those policies are about as ineffective as every other bank's. The question that naturally arises is asked in the article "Why is bank security so far behind the curve?", which expands upon it:
Why do banks, utility companies, and other organizations that offer online services that deal in sensitive data not have adequate security policies in place to guard against these problems? Why do they not at least allow users to opt-in for secure messaging via standardized privacy technologies? Why does the Chase Online site for JP Morgan Chase Bank not allow its customers to limit communications containing sensitive information so that they will only occur over secure channels?
When specifying your own data privacy policies, particularly as they apply to communicating with customers and clients outside your organization, the ideal policy should include the following characteristics:
- Use cryptographic digital signatures for all communications, even those that do not contain sensitive data, so that they can be verified. Public key encryption protocols such as OpenPGP are perfect for this. In the case of a major bank, this ensures that customers and clients who are able to verify digital signatures can tell the difference between a legitimate email and a phishing email.
- Use open standard encryption protocols to encrypt all communications that contain sensitive data. Again, public key encryption protocols such as OpenPGP are perfect for this. Encrypted emails protect customers and clients both from eavesdropping, such as man-in-the-middle attacks and packet sniffing on local networks, and from accidental leaks, such as the Rocky Mountain Bank employee's missent email -- because even if the email is sent to the wrong place, the unintended recipient will not be able to read it.
- Require customers or clients to use digital signatures and encryption as above when possible, and out-of-band verification otherwise, before authorizing any control over their accounts. A second authentication factor -- such as an out-of-band telephone call or a cryptographic digital signature -- improves verifiability of the identity of a customer or client sending any information or instructions.
- Where digital signatures and encryption in email are not practical, do not send email. Use alternate secure channels, such as TLS-encrypted Web sites, instead. Snail mail does not qualify as a secure channel. Unfortunately, the answer to Elinor Mills' question, "What recourse would the bank have if the data had been sent via regular mail to the wrong address?" is the same as it ever was. The bank's recourse for bank statements, credit and debit cards, and other sensitive communications misdelivered via the U.S. Postal Service has always been to pretend there is no problem. You have probably received others' bank and utility bill mailings in the past, in fact, and known that nothing would be done about it if you did not send the mail back to the source.
- Where a customer or client absolutely demands that security features be turned off or avoided for the sake of "convenience", make security the default and offer only an opt-out option for downgrading it. It is of course true that many customers and clients of many organizations would never stand for the sort of "inconvenience" represented by the need to employ secure communications technologies. Let them opt out. Make it easy, in fact, but make it a conscious decision on that person's part, with a disclaimer attached detailing in broad strokes the kinds of security problems that could result. Then let them live with their choice, and let the rest of us, who are willing and able to make use of the security technologies currently available, reap the benefits.
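The sign-then-verify and encrypt-then-decrypt pattern behind OpenPGP, as described in the first two points above, can be sketched with a toy RSA example. This is purely educational and not how you would ever do it in production: the primes are absurdly small, there is no padding or hashing, and a real deployment should use GnuPG or another audited OpenPGP implementation.

```python
# Toy RSA sketch of the sign/verify and encrypt/decrypt pattern.
# Educational only -- tiny primes, no padding, no hashing.

def make_keypair(p, q, e):
    """Build a toy RSA keypair from two small primes."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)
    return (e, n), (d, n)              # (public key, private key)

def sign(message, private_key):
    d, n = private_key
    return pow(message, d, n)

def verify(message, signature, public_key):
    e, n = public_key
    return pow(signature, e, n) == message

def encrypt(message, public_key):
    e, n = public_key
    return pow(message, e, n)

def decrypt(ciphertext, private_key):
    d, n = private_key
    return pow(ciphertext, d, n)

# The bank signs with its private key; anyone holding the public key can
# verify. Replies encrypted to the public key are readable only by the bank.
bank_pub, bank_priv = make_keypair(61, 53, 17)

message = 42                                   # stand-in for an email digest
signature = sign(message, bank_priv)
assert verify(message, signature, bank_pub)    # legitimate email checks out
assert not verify(message + 1, signature, bank_pub)  # tampered/phishing fails

ciphertext = encrypt(message, bank_pub)
assert ciphertext != message                   # a missent copy is unreadable
assert decrypt(ciphertext, bank_priv) == message
```

The point of the sketch is the asymmetry: verification and encryption need only the public key, which can be published freely, while signing and decryption need the private key the bank never shares. That asymmetry is what makes both the anti-phishing and the missent-email protections possible at once.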
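As a concrete illustration of the "second authentication factor" mentioned in the list above, here is a minimal HMAC-based one-time password generator following RFC 4226, built only on the Python standard library. The secret and counter values below are the RFC's published test vector, not anything from the article; this is one common second factor, alongside the out-of-band phone call the article suggests.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (a common second factor)."""
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890".
secret = b"12345678901234567890"
print(hotp(secret, 0))  # -> 755224
print(hotp(secret, 1))  # -> 287082
```

TOTP, the time-based variant most authenticator apps use, is the same construction with the counter derived from the current Unix time.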
Given enough time, with enough organizations taking that approach, I think the default will begin to swing the other way. Privacy technologies will become both easier to use and easier to get as more and more people use them -- because more and more organizations make it possible to use them when dealing with those organizations. Of course, this relies on organizations actually caring enough about the privacy and security of their customers and clients to do something about it.
Even if they don't, you should. If you are in a position to offer your customers and clients a secure way to communicate, do so. If they turn you down, so be it, but if they take you up on the offer, you will have helped make more people safe. More to the point, you may become part of the long-term solution to the ubiquity of insecure communication on the Web.
You know what they say about people who aren't part of the solution. . . .
Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.