There are direct conflicts of interest between a technology corporation's responsibility to its shareholders and the ethical responsibility to its customers' security. Ignore them at your peril.
Corporate responsibility is a term often used to refer to the legal mandates and business priorities of decision makers within public corporations. As a domain-specific system of ethics, it is quite clearly divorced from other ethical systems in the general case. Both theoretical and practical limits on liability shield any individual decision maker or agent of a public corporation whenever liability might interfere with that person's ability to serve corporate responsibility directly, precisely because of that domain-specific system of ethics.
In short, corporate responsibility mandates that all decision makers and agents within public corporations must assume as their first priority "the success of the corporation as an investment of the shareholders' resources." Such decision makers and agents have "an ethical mandate to serve that end, and anything that stands in the way of that end is secondary at best. Period."
Contrast this reality of corporate responsibility with what we wish to be true when we make optimistic statements about the honesty of those acting on behalf of corporations that provide us with our software and services. Was the operating system running on the computer you use to read this article developed under the auspices of an organization whose first priority is to serve the interests of its users, or is that organization's first priority the metrics used to convince shareholders that they're getting a good return on their investments?
When it comes to software and service security, this conflict can manifest in many ways, of which the following three (not entirely hypothetical) examples are just the tip of the iceberg.
A. Law enforcement requests

Which do you think is easier to accomplish with an absolute minimum of expense when government law enforcement agencies make requests for customer data on a regular basis?
- Check the source of a request and determine the legal requirements for such a request (subpoena, warrant, et cetera). Have a lawyer examine the paperwork in detail, enumerate options for legal compliance with minimal exposure of private customer data, and finally proceed in a manner calculated to facilitate legal searches while barring access to any data not legally required to satisfy the request, so that the corporation's customers are protected.
- Create an automated law enforcement data access portal, then effectively ignore it, simply hoping that if there are any breaches of trust nobody will notice.
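The careful first option above can be sketched as a simple vetting function. Everything here is an illustrative assumption — the data categories, the category-to-process mapping, and the function names are invented for the example and do not reflect any real provider's policy or actual legal requirements.

```python
# Hypothetical sketch of a request-vetting workflow. The categories and
# the required legal instruments below are illustrative assumptions only.
REQUIRED_PROCESS = {
    "subscriber_records": "subpoena",
    "stored_content": "warrant",
}

def vet_request(data_category, process_served, counsel_approved):
    """Release data only when the legal process served matches the data
    category requested AND counsel has reviewed the paperwork."""
    required = REQUIRED_PROCESS.get(data_category)
    if required is None:
        return False          # unknown data category: deny by default
    if process_served != required:
        return False          # wrong legal instrument for this data
    return counsel_approved   # a lawyer must still sign off

# In this sketch, a subpoena is not enough to compel stored content.
print(vet_request("stored_content", "subpoena", True))   # False
print(vet_request("stored_content", "warrant", True))    # True
```

The point of the contrast in the article is cost: even this toy version requires per-request human review (the `counsel_approved` flag), which is exactly the expense an automated portal eliminates.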
B. Software testing

Which do you think is more important when testing the security of a piece of software or a service intended to be employed by millions of end-users to manage their finances?
- Thoroughly test the software or service. When you are done, conduct in-depth focus group testing. When finished with that, offer closed beta testing. Finally, use open beta testing to work the final bugs out of the system over a period of time substantial enough to reduce the remaining bug count as much as reasonably possible. Only when this is done do you consider the software or service ready for prime time.
- Test it just enough that you think any remaining bugs probably won't damage the company's reputation beyond the very short term. Then conduct any further testing by unleashing the bug-laden software on the open market, and adopt a secrecy-based vulnerability management policy (see below) to cover up the fact that you've foisted essentially unfinished, unsecured software on an unsuspecting customer base.
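The staged approach in the first option amounts to a release gate. A minimal sketch, assuming hypothetical stage names and the (invented) rule that no critical bugs may remain open:

```python
# Illustrative release gate for a staged testing pipeline. The stage
# names and the zero-critical-bugs threshold are assumptions made for
# this example, not any particular vendor's process.
STAGES = ["internal QA", "focus group", "closed beta", "open beta"]

def ready_for_release(completed_stages, open_critical_bugs):
    """Ship only after every stage has completed and no known critical
    bugs remain open."""
    all_stages_done = all(s in completed_stages for s in STAGES)
    return all_stages_done and open_critical_bugs == 0

print(ready_for_release(["internal QA", "focus group"], 0))  # False
print(ready_for_release(STAGES, 0))                          # True
```

The second option in the list is, in effect, the same gate with `STAGES` truncated to almost nothing — which is why the cheaper path is so tempting under shareholder-first ethics.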
C. Vulnerability management
Which do you think is more important when a vulnerability is discovered in a desktop application or Web service, and it becomes known that it is being exploited by security crackers in a way that is not obvious to end users?
- Ensure that the security vulnerability is fixed quickly, and that users are informed of the vulnerability so that they can protect themselves with work-arounds or suspension of their use of the software or service.
- Keep users in the dark and the existence of the vulnerability hushed up as much as possible so that public confidence in the corporation and its products and services will not be damaged.
It is quite likely that your answers to these questions as an end-user and the answers of the CEO of a public corporation offering software or Web services to customers will not be the same. The sad fact is that this is not a matter of a few rotten apples giving the rest of the corporate world a bad name. Rather, it is a matter of conflicting ethical demands — on one hand, a duty to the customer, and on the other, a duty to the shareholder — where the legal enforcement of such ethics almost universally punishes giving greater care to the customer's benefit. Depending on how you look at it, you might characterize the dilemma as having no correct answer, or no incorrect answer, or even a "more correct" answer that mandates giving the end-user the shaft more often than not.
This is why there's no such thing as a trusted brand. It is also part of the reason why encryption that doesn't trust the user isn't trustworthy. Perhaps most importantly, it is a demonstration of the reason that we should all learn to fish for ourselves.
Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.