Conflicts of interest can be a big problem for security, particularly when you trust an organization to have your own best interests in mind.
The reason there's no such thing as a trusted brand is simple:
Corporate leadership changes as CEOs and board members come and go, as business divisions undergo reorganization, and as legal and financial circumstances lead to changes in corporate policy. The influences on a corporate vendor's behavior are probably no more numerous than those on an individual's behavior, but a corporation is far less resistant to changes in its trustworthiness than any individual.
A trust relationship with an individual is a perfectly normal and healthy thing, and is often a necessary part of life. Where an individual demonstrates a propensity for acting in a particular manner, and a personal relationship with that individual suggests that the reason for that propensity is a matter of integrity, what we perceive is the individual's trustworthiness. Organizations — including nonprofits, profit-seeking corporations, and governments, among other examples — are not subject to the same rules of integrity, because their perceived integrity is entirely dependent upon the integrity and value systems of the people who make up the organization, both as decision makers and as agents of the decision makers.
This is not the whole story, however. To a certain extent, the behavior of an organization can be somewhat predictable, and beneficial decisions about how to deal with such organizations in the short term can be supported by an effective analysis of the factors that influence their behavior.
The simplest key to understanding such behavior is to consider what the organization's "customers" think and say they want, whether those "customers" are advocates and patrons, voters, the press, or literal paying customers and potential customers. Principles of economics show us that such organizations are strongly influenced by what customers want and need, as expressed by their buying behavior.
One should not be fooled into thinking that what people say they want or need is necessarily identical to what they demonstrate they want or need. The most successful organizations will be those that serve what their customers demonstrate they want or need, rather than simplistically serving what their customers say they want or need.
Such criteria for the success of an organization give rise to what we call "conflicts of interest", where the perceived responsibility of an organization to serve the demands of its customer base is in tension with its survival mandate, which requires it to serve the wants and needs demonstrated by its customer base. Even worse, a customer base is not always who we think it is, creating a conflict of interest between the perceived importance of serving what people who purchase widgets from Foo Corporation demand and serving what members of the board of directors demand. In the end, the most important "customers" for a corporation are its shareholders, after all.
Conflicts of interest encourage dishonesty. In order to give people what they demonstrate they really want and need, one must often give them the impression that one is giving them what they say they want and need, even if those two things are substantially different. This is because one of the things people tend to demonstrate they want and need is a belief that they really understand their own wants and needs, even when that belief is at odds with their own behavior. In the short term at least, the simplest way to do this is to give them what they demonstrate they want and need while lying to them about what they say they want and need.
The problem of organizational conflicts of interest is particularly dangerous in the realm of security. It is, in fact, the fundamental reason for "security theater": giving people the impression that you are Doing Something about security when, in fact, the measures you take are egregiously lacking in actual security benefit. The following values of software vendors tend to conflict directly with the organizations' responsibility to serve customers' security needs:
Complexity
To a non-trivial degree, simplicity is security. Eliminating complexity eliminates opportunity for unexpected behavior that can compromise the security of the system.
Complexity, on the other hand, is the natural result of adding bells and whistles to a software system. Increased integration of functionality within a single piece of software helps to lock people into a particular product line over the long term, ensuring increased security of a vendor's business model, but it also ensures greater opportunity for security vulnerabilities to arise as the result of unexpected interactions between different parts of the system. Both MS Windows and the X Window System could benefit from simplification, but the needs of the market strongly discourage such simplification.
Superficiality
Improvement in the security of a system begins with the fundamental design of the system, if we ignore for the moment the question of whether the system should be created at all. When discussing software, that fundamental design is often referred to as the software's "architecture." Architectural security concerns involve design decisions that build security principles into the system's architecture as basic assumptions about how it operates, so that those security measures cannot be circumvented short of breaking or replacing the entire system.
Software systems that have been offered on the retail market for years tend to be very difficult to alter on an architectural level. Superficial changes are much easier, and less costly, to make. Because difficulty and cost are antithetical to the goals of a profit-seeking corporation, making significant architectural changes in a software product is generally deprecated in favor of adding features that present the appearance of improved security.
Because most software consumers are unaware of the difference between architectural and superficial security measures (between architectural security and security "features"), it is generally a winning strategy for a successful software vendor to add an authentication nag such as MS Windows Vista's User Account Control rather than to implement architectural privilege separation. The key difference is that if UAC breaks or is "turned off", one can then do whatever one wishes with the system, with full administrative access; breaking or "turning off" the authentication mechanism of a system that offers architectural privilege separation just makes it impossible to accomplish any administrative tasks at all.
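The contrast can be sketched in a toy model. The Python below is purely illustrative, with invented class names rather than any real Windows or Unix API: in the superficial design, full access sits behind a prompt that merely asks permission; in the separated design, unprivileged code never holds the administrative capability at all.

```python
# Toy contrast between a superficial check and architectural separation.
# All class and method names here are invented for illustration.

class SuperficialSystem:
    """Every operation runs with full privileges; a UAC-style prompt
    merely asks before proceeding."""
    def __init__(self, prompt_enabled=True):
        self.prompt_enabled = prompt_enabled

    def delete_system_file(self, confirm=lambda: True):
        if self.prompt_enabled and not confirm():
            return "denied"
        return "deleted"          # full access was there all along

class SeparatedSystem:
    """Administrative capability is a separate object; unprivileged
    code never holds it, so there is nothing to 'turn off'."""
    class AdminToken:
        pass

    def delete_system_file(self, token):
        if not isinstance(token, SeparatedSystem.AdminToken):
            raise PermissionError("no administrative capability")
        return "deleted"

# Disabling the prompt on the superficial system grants everything:
assert SuperficialSystem(prompt_enabled=False).delete_system_file() == "deleted"

# "Breaking" authentication on the separated system only denies everything:
try:
    SeparatedSystem().delete_system_file(token=None)
    outcome = "deleted"
except PermissionError:
    outcome = "denied"
assert outcome == "denied"
```

Disabling the check in the first model yields total access; "disabling" it in the second yields only denial, which mirrors the UAC contrast described above.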
Data harvesting

For many purposes, privacy is security. Maintaining the security of your data is identical to maintaining its privacy, after all. Anything that compromises privacy offers an opportunity to compromise security.
Software vendors like to learn things about their customers, at least for purposes of better serving the needs of their customers, if not for purely cynical marketing strategies or even less wholesome reasons. Even if we assume for the moment that a software vendor's attempts to harvest information about its customers are for entirely trustworthy reasons, and even if we assume that such information will never be misused, there is still a danger to such information harvesting: accidental disclosure.
The tendency of some operating systems and applications to "phone home" to the vendor with information about the user or the computer on which the software is running offers an opportunity for malicious security crackers to eavesdrop on the communication of such data across the Internet. Even if the information makes the journey safely without being compromised, storage on the vendor's servers creates a one-stop shopping location for malicious security crackers to gain access to the data. Attempts to automate compliance with legal requirements for data disclosure can also offer opportunities for security crackers to gain unauthorized access, as demonstrated when China cracked Google security. Then, of course, once your information is in the hands of the government, it can always be stored on the hard drive of a laptop some bureaucrat loses at a coffee shop, on a CD that is lost in the mail, or on servers that give security crackers yet another one-stop shopping location for personal information.
Obscurity
Any security expert worth his salt can tell you that obscurity is not real security. As Auguste Kerckhoffs taught us in the 19th Century, the design of a system should not require secrecy and compromise of the system should not inconvenience the correspondents. The only part of a security system that should require actual secrecy is the key, in part because it is much easier to ensure the secrecy of a single piece of data than it is to ensure the secrecy of the workings of an entire system — particularly when the system itself is widely distributed, as is the case with a piece of retail software.
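Kerckhoffs' point is easy to demonstrate with modern tools. Here is a minimal sketch using only Python's standard library: the algorithm (HMAC-SHA256) is completely public, and the security of the message tag rests entirely on a single secret key. The key and message values are, of course, made up for the example.

```python
# Kerckhoffs' principle in miniature: the algorithm (HMAC-SHA256) is
# completely public; only the key needs to stay secret.
import hmac
import hashlib

key = b"the-only-secret"          # illustrative key; easy to rotate if leaked
message = b"the design of the system is public"

# Anyone can read exactly how this tag is computed:
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)               # right key: accepted
assert not verify(b"wrong-key", message, tag)  # wrong key: rejected
```

If the key is compromised, you rotate one small piece of data; if security depended on the algorithm's secrecy, a leak would force a redesign of the whole system, which is exactly why secrecy belongs in the key alone.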
The most successful software vendors have a vested interest in making it difficult for others to duplicate the functionality of their software too closely, because this allows less successful software vendors to start eating into their market share. As such, the myth that obscurity is an effective security measure is of great value to successful software vendors. The fact that obscurity can, in fact, hinder security in many cases is an inconvenient detail that must be swept under the rug to keep people from rejecting software products on the basis of their reliance on obscurity for security.
Non-disclosure
It is of great value to software users to know all the security implications of using a given piece of software. Knowledge of vulnerabilities, for instance, can help users implement work-arounds for those vulnerabilities when needed. The lesson of "How should we handle security notifications?" is clear: we need to know when we are subject to security vulnerabilities to be able to account for them and mitigate their effects. It seems only natural that a software vendor should serve our need for security by presenting us with as much information as possible about any vulnerabilities in the software the vendor brings to market.
Hopefully you are not so naive as to believe that is how actual corporate software vendors tend to behave, serving our security needs first and foremost. The truth of the matter is that it is of much more value to a software vendor to try to give its customers the impression that its software is never vulnerable to anything than to give its customers information about vulnerabilities. As a result, policies that involve hiding security vulnerabilities from view — such as "responsible disclosure" requirements for security notifications — arise as a means of pretending to serve security first and foremost while actually serving corporate reputation at the expense of security.
Think for yourself
Before taking anything an organization says at face value, consider the fact that the organization's behavior is likely influenced by innumerable conflicts of interest. A brand, a corporation, a government — any organizational entity — simply cannot be trusted the same way one can trust an individual, and conflicts of interest such as those articulated here are collectively a big part of the reason for that untrustworthiness.
The solution is to think for yourself. Do not let marketing and convention replace your faculty for reason. Consider the potential conflicts of interest and double-check every "fact" presented by an organization.
Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.