It has become popular these days to talk about people as a factor in security. The idea is that social factors contribute as much to the security of a system as technological factors do. In other words, a security professional who concerns himself only with strictly technical measures to secure a system, and ignores the social factors of security, is not doing his job.

It can sometimes be difficult to imagine how to account for the social factors of security. In general, such factors boil down to matters of business culture and interface design.

A culture with a strong, positive emphasis on security helps people recognize the importance of following good security practices and adhering to policies. Fostering such a culture depends on keeping employees informed about security matters that affect them and that they in turn affect, establishing appropriate levels of trust, and keeping people aware of the importance of security at all times.

Interface design, in this case, refers to more than just the placement of buttons in GUI design. It refers to the way policies, technologies, and procedures are presented to the people who deal with your secure systems and each other. The easier and more natural you can make it feel for users to do the right thing for security, the more likely they are to do it.

What both of these general categories of social factors have in common is that they do not just mandate secure practice; they encourage it. When the individual is internally motivated to maintain good security, whether through a deep understanding of specific security procedures and the reasons for them, or because the secure procedure feels at least as natural and intuitive as the insecure alternative, security is considerably more likely to be maintained successfully.

People often bemoan the trade-off between usability and security in software design, as though there were some natural law stating that every incremental increase in the security of a system brings a matching incremental decrease in its usability. This, however, is not the truism of software design that many people seem to think it is; rather, it is a truism of bad security design.

Often, such bad security design is the result of attempting to bolt security onto an existing system that previously had essentially no security design at all. The end result is not integrated security design, and bolted-on security features aren't secure. A modern example is MS Windows Vista's User Account Control (UAC), designed to let users log in with an unprivileged account for normal, day-to-day activities while providing them with a means of performing administrative activities without having to log out of their unprivileged accounts first.
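To see what UAC's elevation mechanism looks like from a program's point of view, here is a minimal Python sketch for Windows. It uses the real Win32 ShellExecuteW call with the "runas" verb, which is what triggers the familiar UAC consent prompt; the relaunch-yourself pattern shown here is a common idiom, not the only way to request elevation.

```python
import ctypes
import sys

def relaunch_elevated():
    """Relaunch this script with administrative rights via UAC.

    The "runas" verb asks Windows to show the UAC consent prompt
    instead of requiring the user to log out and back in.
    ShellExecuteW returns a value greater than 32 on success.
    """
    result = ctypes.windll.shell32.ShellExecuteW(
        None, "runas", sys.executable, " ".join(sys.argv), None, 1
    )
    return result > 32

if __name__ == "__main__":
    if ctypes.windll.shell32.IsUserAnAdmin():
        print("Running elevated; perform the administrative task here.")
    else:
        # Not elevated yet: ask Windows to relaunch us with elevation.
        relaunch_elevated()
```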

Stories of people growing so frustrated with UAC that they disable it and live with the security consequences are a dime a dozen. A Google search for the terms "uac" and "frustrating" should prove instructive for anyone who has not experienced UAC firsthand. Believe it or not, though, UAC is an improvement. Before MS Windows Vista, one essentially had to log out of the unprivileged account and log in to the administrative account, do what needed to be done, then log out again and log back in with the unprivileged account.

MS Windows XP introduced fast user switching, which alleviated some of the difficulty. It allowed the user to switch between accounts without having to shut down and, later, restart all applications used by a given account. Still, the basic inconvenience remained — exiting one user account and entering another, then reversing the process.

This could be especially problematic if you needed to access information or functionality that might not be very secure, such as Web sites explaining how to accomplish a given task; in other words, activities like Web browsing that are exactly the sort of thing you should do only from an unprivileged account. At that point, you had to choose among copying all of the relevant information to some other medium before switching to the administrative account, switching back and forth between accounts constantly, or simply engaging in unsecured activities from the administrative account. The last choice, of course, directly defeats the security purpose of separating user account privileges.

By contrast, Unix systems have had true privilege separation between accounts for decades. Security in this respect is integrated into the system, and does not suffer the same limitations. From within a login session for an unprivileged user account, one can open arbitrary applications as the root user (the Unix administrative account), typically by way of a tool such as su or sudo. These applications coexist, as needed, in the same user interface environment as applications run by the unprivileged account. This means a browser running with the permissions of an unprivileged user account can sit side by side with a system configuration tool running with the permissions of the root account.
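As a concrete sketch of that pattern, the following Python snippet launches a hypothetical administrative tool as root alongside an ordinary unprivileged application. It assumes a desktop Unix system with sudo installed and configured, run from a terminal so sudo can prompt for a password; system-config-tool is a placeholder name, not a real utility.

```python
import subprocess

# Launch a hypothetical administrative tool as root via sudo. The
# process runs with root privileges inside the unprivileged user's
# existing session; no logout or account switch is required.
admin_tool = subprocess.Popen(["sudo", "system-config-tool"])

# Meanwhile, day-to-day applications such as a browser keep running
# with only the unprivileged account's permissions, side by side.
browser = subprocess.Popen(["firefox", "https://example.com/"])

# Wait for the administrative task to finish; the browser is unaffected.
admin_tool.wait()
```

The same effect is traditionally achieved directly from a shell with su -c or sudo; the point is that both processes share one session while holding different privileges.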

This sort of difference in design is made easy by the fact that privilege separation is an integrated part of the system rather than a bolted-on afterthought. As a result, users of the system find it much easier to reserve administrative accounts for a limited set of administrative activities and use unprivileged accounts for everything else, without compromising security.
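Applications can reinforce that habit themselves. The short Python sketch below shows the kind of check many Unix programs actually perform, refusing to run a day-to-day tool under the root account; the message text is illustrative.

```python
import os
import sys

# Refuse to run a day-to-day application (a browser, a mail client)
# with root privileges. os.geteuid() returns the effective user ID;
# on Unix systems, UID 0 is always root.
if os.geteuid() == 0:
    sys.exit("This program should not be run as root; "
             "use an unprivileged account instead.")

print("Running with ordinary user privileges; proceeding.")
```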

That’s only one example of the difference between good and bad interface design in the context of security, but it is an example that should be familiar to most IT professionals. Keep it in mind the next time you design the interface for a security feature of your software: interface design is often inseparable from security design.