What were they thinking? Security design without the user in mind

What responsibility do security vendors and government agencies have to deliver or mandate secure products and services? I found myself asking this question repeatedly last week, as two incidents occurred which prompted a 'what were they thinking' response.

Early in the week, I opened a device sent to me by a vendor.  They had asked me to review it.  It allows a user to securely store login IDs and passwords on a device a little larger than a credit card.  Conceptually, it's a good idea.  However, it has some major flaws.

First, the device uses a membrane keypad laid out like a standard cell phone, with multiple letters on each button.  And like a cell phone, it typically requires multiple presses of a key to get to the letter you want.  This might not be so bad, except that the keypad frequently doesn't respond or responds too quickly.  It took far too long, and caused far too much frustration, to enter login information this way.

It would be much easier to manage my account information on my PC, and then upload it to the device.  But that is not an option.  No USB port or any other interface exists to connect the device to a PC.  Not only does this prevent easy management of information, it also prevents me from copying my device-stored information to my PC as a backup.  If I lost the device, and I hadn't updated both the device and my PC repository when I added or changed an account or password, I'd just be out of luck.

Finally, I called the vendor rep to ask how the data was encrypted on the device.  I thought this was a pretty simple question, but I received back a request for clarification.  So I sent a final request in which I wrote, "All I need to know is how the data is actually encrypted on the on-board storage."  I didn't think this question was too technical for a vendor rep marketing a security device.  But the response I got back caused me to throw up my hands and toss the whole thing: "I'm sorry but that strikes me as a very general question and I don't know how to answer it. Could you frame it more specifically or give me an example of what you mean?"
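
To be clear about the kind of answer I was looking for: a sensible design for a gadget like this might derive an encryption key from the user's PIN and use an authenticated cipher for whatever sits in on-board storage.  The sketch below is purely illustrative; the PIN-derived key (PBKDF2) and AES-GCM are my assumptions, not anything the vendor confirmed about this product.

# Illustrative sketch only: one plausible answer to "how is the data
# encrypted on the on-board storage?"  Not this vendor's implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stretch the user's PIN into a 256-bit key, so dumping the flash
    # chip alone doesn't reveal the stored credentials.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(pin.encode())

def encrypt_entry(pin: str, account: str, password: str) -> bytes:
    # Encrypt one account/password record for on-board storage.
    salt = os.urandom(16)
    nonce = os.urandom(12)
    key = derive_key(pin, salt)
    ciphertext = AESGCM(key).encrypt(nonce, f"{account}:{password}".encode(), None)
    # The salt and nonce aren't secret; store them alongside the ciphertext.
    return salt + nonce + ciphertext

A one-sentence reply along those lines ("AES-256, with the key derived from your PIN") was all I was after.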

The problems with this $29.95 product are in the design.  The difficulty of using it encourages users to assign short passwords and then never change them.  At least, that would be my approach if I were a regular user.  Instead, I simply threw the device in a box with other stuff I'll never use.

Later in the week, I attended a meeting to discuss security challenges surrounding a new requirement imposed on health care companies by CMS (the Centers for Medicare & Medicaid Services, an agency within the U.S. Department of Health and Human Services).

Before continuing this tale, it's important to understand that CMS is responsible for ensuring covered entities (e.g., health insurance companies and health care providers) protect electronic protected health information (ePHI) in accordance with HIPAA (the Health Insurance Portability and Accountability Act).

In the past, long-term care providers could submit patient MDSs (Minimum Data Set assessments) to CMS via dial-up connections.  This allowed providers to transmit the MDSs (containing loads of ePHI) from a facility workstation while leaving the MDS transmission files on protected servers in secure data centers.  However, early next year CMS will require these same providers to move to a different transmission method.  The new method is ostensibly better: it provides data transfer over high-speed broadband connections.  The problem lies in how CMS decided to deploy the solution.

When the CMS-supplied software residing on a provider's desktop initiates a connection to begin data transfer, all of that desktop's connections to the provider's network are terminated.  CMS representatives say this is to protect their network from bad stuff on provider networks.  OK.  I get it.  But there is a problem with this.  Our ePHI, formerly residing in a safe location, must now be transferred to one or more desktop systems in hundreds of facilities, since each facility must submit its own MDSs.  So CMS, the agency responsible for making sure providers protect patient information, is forcing providers to circumvent or weaken existing security controls.

To be fair, a CMS contact told us the agency plans to resolve this sometime next year.  But the fix will come long after we roll this out to over 500 facilities.

Maybe I'm missing something, but in both cases described above the product and service providers should have put themselves in the customer's seat.  Asking a few simple questions from that vantage point would have revealed the problems with their offerings.

Luckily, I can simply ignore the password tool.  Unfortunately, there's no workaround for CMS—another fine example of our government at work.
