
10 ethical issues raised by IT capabilities

Some professions rely on established ethical guidelines to govern behavior, but IT pros are often on their own when it comes to determining what's right and what's fair. This look at the ethical concerns of the IT industry illustrates how the complexity of technology creates dilemmas that seem to spawn more questions every day.

By Jeff Relkin


Professions such as law and medicine have codified sets of ethics that their practitioners are expected to honor. Egregious violations are dealt with in the harshest possible terms, and even minor lapses can result in significant penalties. No such codification exists for IT. We as technology professionals generally abide by personal codes of conduct and are essentially self-policing.

Technology raises complexities that go beyond typical questions of what's right or what's fair. Does the capacity to do something justify the act? When new technology creates new capabilities, do the old rules of behavior still apply? The environment is becoming ever more challenging. Areas such as data access and capture, processing speed, tracking and monitoring, and job redesign are just a few examples of IT capabilities with ethical considerations. There are no easy answers, but as you'll see, there seems to be no shortage of questions.

#1: Who should have access to data?


If systems and infrastructure are the engines of technology, data is the fuel that powers those engines. Access to that data is a richly nuanced issue. When building an HR or payroll system, developers typically need access to live personal data records to fully test their work. What's their responsibility for protecting the confidentiality of that data, records they would never otherwise be allowed to see? Should support techs have access to confidential personal or corporate information to service the applications that access those databases? If not, can they, in fact, adequately perform their jobs?

On what basis does a system security officer comply with a request for access to a system, especially when it's not entirely clear that the requestor has a legitimate claim? Other than the legal requirements of FOIA (Freedom of Information Act) requests, do organizations have any responsibility to supply personal data they may have on file to the particular individual who's the subject of that data? Are organizations constrained in any way when cross-referencing internal and external data sources? As technology provides increasing opportunities to house ever larger information stores with expanded cross-referential and analytic capabilities at faster speeds, the ethical considerations surrounding access to that data become correspondingly more complex.

#2: To whom does data belong?


Every time you use a debit or credit card, make an online purchase, access an ATM, or complete virtually any financial transaction, a significant amount of data about you and your activity is recorded. In the simplest application, companies use that data to issue bills, record payments, or update portfolios... basic recordkeeping. Technology has enabled more sophisticated uses of that data. As just one example, online real-time data warehouses using segmentation analysis can swiftly analyze buying patterns and "suggest" additional purchases on the basis of the product you're trying to buy. Is this an ethical use of data? The information being used is all about you, but it was collected by the company with which you were doing business. Is it your data or theirs? If it's their data, do you have the right to tell them how to use it? Not too many of us would have much of a problem with technology enabling the bank to quickly and accurately apply interest to our accounts, but do we have the same attitude when that same bank uses that data for marketing purposes? Where do you draw the line?
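To make that mechanism concrete, here is a minimal sketch, written in Python with invented transaction data, of the kind of buying-pattern analysis described above: count how often items are purchased together, then "suggest" whatever is most frequently bought alongside the item in your cart. Real retail systems are far more sophisticated, but the principle, mining your transaction history to sell you more, is the same.

    # Illustrative sketch only; the data and item names are hypothetical.
    from collections import Counter
    from itertools import combinations

    # A retailer's view of past baskets, drawn from its own transaction records.
    transactions = [
        {"printer", "ink cartridge", "paper"},
        {"printer", "ink cartridge"},
        {"laptop", "mouse", "laptop bag"},
        {"printer", "paper"},
    ]

    # Count how often each pair of items appears in the same basket.
    co_occurrence = Counter()
    for basket in transactions:
        for a, b in combinations(sorted(basket), 2):
            co_occurrence[(a, b)] += 1
            co_occurrence[(b, a)] += 1

    def suggest(item, top_n=2):
        """Return the items most often bought alongside `item`."""
        scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(suggest("printer"))  # e.g. ['ink cartridge', 'paper']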

#3: Who is responsible for maintaining accuracy and security?


When organizations share information, one of the partners may discover anomalies in the supplied data. After correcting the data, does that entity have an ethical responsibility to pass the edits back to the supplier? Your company's payroll system is operated by the finance department and supported by IT. Which department is accountable for the accuracy of data maintained in that system? A salesperson loses her laptop while on a business trip. Confidential corporate data was on the hard drive, possibly in violation of company policy, and the laptop was neither password protected nor encrypted. How do you assess relative responsibility? In many cases involving the accuracy and security of data, the lines of ownership and accountability are blurred. Recently, we've read about a number of devastating and potentially catastrophic losses of personal data due to hacking, carelessness, noncompliance with policy, and poor security practices. Organizations need to make their data usage and access policies clear, enforce penalties on violators, and most important, assign specific ownership for ensuring data is accurate and secure. In the final analysis, everyone is responsible.

#4: Does the ability to capture data imply a corresponding responsibility to monitor its use?


Prior to the advent of technology enabling mass capture, storage, and processing of data, maintaining the security of that data and ensuring it wasn't misused was relatively easy. Critical and confidential data was kept on paper, in locked files, in a secure file room, with access controlled by a gatekeeper... a records manager. There weren't multiple copies floating around because you weren't allowed to make copies.

Today, we have terabyte-sized databases that are tabulated and cross-referenced with other megastores to provide all sorts of information about us to all sorts of people, and access to that data is often viewed with a certain cavalier indifference. As individuals, we have little to no control over that data. When you apply for a car loan, your personal financial data is legitimately provided by the credit reporting agencies to enable your lender to make an appropriate financial decision as to whether you're a good risk. The lending institution subsequently uses that data to market products to you. That wasn't the original intention of the transaction by which the data was supplied, but there's nothing inherently illegal about it. However, is it ethical for that organization to use an asset it wouldn't ordinarily have had access to, in an entirely different manner from what the two parties originally agreed? Does this constitute an invasion of your privacy?

#5: Should data patterns be analyzed to prevent possible risks to employees or customers?


In the not-too-distant past, you'd go to the toy store to buy your child a set of blocks for Christmas, the manufacturer having already recorded a gross shipment to a supplier and probably not even considering you as an end customer. Today, technology has enabled that manufacturer to collect a broad spectrum of data that can be used for design and marketing decisions based on consumer requirements and preferences. Does the manufacturer have an ethical responsibility to factor into those design considerations the fact that several children were reported injured by putting the smallest blocks in their mouths? Does the supplier, who can also track and analyze this data, have a similar responsibility to force the manufacturer to redesign?

Airlines can collect and cross-reference an enormous amount of data on travelers. Patterns can emerge that could allow them to draw conclusions identifying individuals as possible security risks. Is this profiling, and if so, is it ethically questionable? If airlines do not do this and someone who could have been stopped at the gate boards a flight and hijacks the plane, is the airline ethically responsible for its own inaction?

#6: How much information is necessary and relevant for decision making?


As data collection and processing methods have improved and become more streamlined, we can easily find ourselves inundated with information. We've all experienced "analysis paralysis." The Internet has made the largest repository of knowledge ever amassed in the history of civilization instantly available. If enough information has been provided to make a particular business decision, is it ethical to delay that decision simply because more data could be gathered? At what point do you decide you indeed have enough information to take action? Marketing companies collect enormous quantities of data from disparate sources to customize individual marketing plans. Is it ethical to use this activity as a rationale for collecting data that may not be immediately useful but may have some future utility? If it turns out that the data can't be used, what action, if any, should the organization take?

#7: Should certain data "follow" individuals or corporations throughout their lives?


Living in a free society, we can do business with any organization as we see fit, and when conditions change, we're free to take our business elsewhere. Let's say a person has a car insurance policy with Acme Indemnity and over the course of time has numerous accidents and files a series of claims. The person then applies for a new policy with Midwest Mutual. Does Acme have an ethical obligation to supply information to Midwest that might affect its decision on the conditions for that policy? If not, and if our accident-prone driver has a serious accident, does Acme bear any responsibility for withholding information that might have prevented the new insurance, and possibly even the license to drive, from being issued? Without technological advances in processing high data volumes, which allow data about consumers to be easily shared among organizations, it would be difficult, if not impossible, to build a comprehensive "life file" on anyone. Does the fact that technology enables this to happen necessarily mean it should?

#8: Do organizations maintaining permanent records have a right to charge for their use?


The primary example that comes to mind in this category is the credit reporting agencies. These organizations amass huge volumes of data on individuals in such areas as credit transactions, bill paying patterns, loan applications, investment portfolios, and asset management. Everything you want to know about anyone's personal financial history is in their files, assuming you have appropriate identifying information and reason for access and that you're willing to pay for the privilege of obtaining it. But should those companies be able to charge a fee for supplying that data?

Data that's about you is yours; you're the one who engaged in the actions that generated those transactions in the first place. Why should some corporate entity that had the means and opportunity to gather it be able to profit from supplying your information to others? They are performing a service, which we all hope is to our own personal benefit, and they are entitled to be compensated for their effort. But they'd have no raw material for their product if it weren't for you and the financial activities in which you engage. Should they pay you for the privilege of housing your information before they can charge for its subsequent use?

#9: Are there consequences to receiving data in more timely ways?


Technology has enabled many new business behaviors that were previously unthinkable. In 1950, when Frank McNamara issued the world's very first credit card, Diners Club, he was reconciling transactions and payments by hand. Months could pass before you'd be expected to clear your account (ask your parents about "playing the float"). Today we have credit card companies that process transactions the instant our cards are swiped through the readers, and in many cases, there's no grace period for making payments. You could go out to dinner on a Saturday evening and incur a late charge on your card by the following Tuesday. Have companies been ethical in changing their behavior toward consumers as technology has provided vastly increased processing speeds?

#10: Does IT lead to job elimination, job routinization, or job enhancement?


Not to pass the buck, but to a certain extent this is a matter of perception, cause, and effect. Advances in technology have provided a wide variety of opportunities to streamline, change, or eliminate business processes and operations. Factories that in another era employed thousands of workers can now turn out greater amounts of product with only a handful of technicians monitoring computer-controlled robotic arms.

On the other side of the coin, whole professions that didn't exist just a few short decades ago have provided significant new employment opportunities. Forty years ago, the average daily volume on the New York Stock Exchange was several million shares; today it's routinely in the billions. Office workers no longer laboriously process transactions by hand; high-speed computing delivers many times the productivity at a quality level at least as high, if not higher. The only constant is change, and while technology does in fact generate profound workplace change capable of displacing large numbers of workers, it also creates ample alternative opportunity. The ethical challenge is to manage that change so that quality of life improves for all.

Jeff Relkin has 30+ years of technology-based experience at several Fortune 500 corporations as a developer, consultant, and manager. He has also been an adjunct professor in the master's program at Manhattanville College. At present, he's the CIO of the Millennium Challenge Corporation (MCC), a federal government agency located in Washington, DC. The views expressed in this article do not necessarily represent the views of MCC or the United States of America.
