
Unlike doctors or police officers, IT professionals rarely have to consider the life or death of a fellow human as the result of our actions. That is rapidly changing, however, as we entrust technology-driven systems with increasingly risky activities where life and limb are at stake.
If you’re developing an autonomous or assisted driving system, how do you code for a situation where the owner of the vehicle would likely be killed to spare the life of a pedestrian? What happens when your automated warehouse severely injures a hapless human whose movements were never accounted for in an assembly robot’s code? How do you react when your company is sued because the analytical model you designed appears to discriminate against customers of a certain race, gender, or ethnicity? Better to incorporate these considerations into your design and testing process than to wait for the public and legal scrutiny that arrives after the fact.
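To make “incorporate these considerations into your testing process” concrete, here is a minimal sketch of a disparate-impact check that could run alongside a model’s ordinary tests. It assumes you can pair each prediction with the customer’s value for a protected attribute; the function names and the 0.8 cutoff (the common “four-fifths” heuristic) are illustrative choices, not a legal standard.

```python
# Minimal sketch of a disparate-impact check for a binary classifier.
# `predictions` holds 1 for a favorable outcome (e.g., loan approved),
# and `groups` holds each customer's value for a protected attribute.
from collections import defaultdict

def approval_rates(predictions, groups):
    """Favorable-outcome rate for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += pred
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratios(predictions, groups):
    """Each group's approval rate relative to the best-treated group."""
    rates = approval_rates(predictions, groups)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    preds = [1, 1, 1, 0, 1, 1, 0, 0, 0, 1]
    groups = ["A"] * 5 + ["B"] * 5
    for group, ratio in disparate_impact_ratios(preds, groups).items():
        status = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: ratio {ratio:.2f} [{status}]")
```

A flagged ratio proves nothing by itself, but it gives your team a concrete artifact to review before a plaintiff’s lawyer does.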
Technology innovation has outstripped legal and ethical innovation
The pace of technological innovation has always outpaced the rate at which our legal systems and cultural norms can adapt, but the profound ethical implications of artificial intelligence, robotics, big data, and the like are making the gap more acute. This is particularly troubling for IT leaders, as we are rarely tasked with determining the ethical implications of the technologies we build and implement, and we are often ill-equipped to study the ramifications. The structures we do have available, generally in the form of corporate legal support, are also poorly suited to these questions. Corporate counsel is typically tasked with minimizing risk and legal exposure, two goals that tend to run counter to innovation and boundary-pushing.
See: How driverless cars will transform auto insurance and shift burden onto AI and software
Look for the decision point
Most technologists and futurists believe that we’re still decades away from fully autonomous, “intelligent” machines, but commercially available machines are already making potential life-and-death decisions. Whether it’s Tesla’s “Autopilot” feature, which can steer and brake with minimal driver input, or medical equipment that adjusts itself without human intervention, even these rudimentary systems could harm someone. The obvious risk is technical malfunction, but ethical concerns arise when the machine faces a decision between two damaging alternatives. If one of the primitive self-driving cars on the market detects an impending collision that will harm either the driver or another motorist, what logic should be coded to handle the situation? Who should know about this logic in advance? What value judgments should the machine make: is the driver’s well-being more “valuable” than another motorist’s? Less valuable than a pedestrian’s?
As your teams encounter these decisions while building an autonomous system, encourage them to pause and elevate the discussion to a broader group that extends beyond the technical realm. Rather than applying their own ethical models, have them identify the inputs and data available to the machine, along with any constraints that should be accounted for, so that broader group can shape an appropriate response, as in the sketch below.
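As one illustration of what “identify the inputs and constraints” might look like in practice, here is a hypothetical Python sketch (every name in it is invented): the decision point captures its inputs, candidate responses, and constraints as data, and resolves them through a named, separately reviewed policy whose full context is logged for later audit.

```python
# Hypothetical sketch: making a safety-critical decision point explicit
# and auditable instead of burying a value judgment in control code.
# All names (CollisionDilemma, minimize_expected_harm, etc.) are invented.
from dataclasses import dataclass, field
import json, time

@dataclass
class CollisionDilemma:
    """Everything the machine knows at the moment of decision."""
    inputs: dict        # sensor readings, speeds, detected parties
    options: list       # candidate maneuvers
    constraints: list   # hard limits, e.g., physical or regulatory
    policy_id: str      # which reviewed policy resolves the choice
    timestamp: float = field(default_factory=time.time)

def resolve(dilemma: CollisionDilemma, policies: dict) -> str:
    """Apply a pre-reviewed policy and log the full decision context."""
    choice = policies[dilemma.policy_id](dilemma)
    # Persist the context so the decision can be audited after the fact.
    print(json.dumps({"decision": choice, **dilemma.__dict__}))
    return choice

# A policy is a named, reviewed function: the ethics discussion happens
# here, with the broader team, not ad hoc inside the control loop.
def minimize_expected_harm(d: CollisionDilemma) -> str:
    return min(d.options, key=lambda o: d.inputs["est_harm"][o])

policies = {"harm-min-v1": minimize_expected_harm}

dilemma = CollisionDilemma(
    inputs={"speed_mps": 18, "est_harm": {"brake": 0.4, "swerve": 0.7}},
    options=["brake", "swerve"],
    constraints=["no maneuver may exceed lateral-g limit"],
    policy_id="harm-min-v1",
)
resolve(dilemma, policies)
```

The specific policy is beside the point; what matters is that the value judgment lives in a reviewable, swappable artifact rather than being hard-coded into the control loop.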
See: Creating malevolent AI: A manual
Putting “change” back in change management
The discipline perhaps best equipped to raise ethical concerns is change management. When done well, change management accounts for the human impacts of technology deployments. Historically, those impacts consisted primarily of training requirements and organizational changes, but potential ethical concerns can be identified during the same analysis. Just as you might engage a specialist for a complex technical or organizational problem, engage legal and ethical support when you encounter a potentially thorny ethical question. Ethicists and philosophers may not be flooding your contact list, but they can help identify relevant ethical issues, articulate potential solutions, and communicate these concerns.
Over its history, technology has gradually migrated from a back-office discipline to an integral part of our lives. That role is expanding to the point where technology raises ethical considerations that must be identified, acknowledged, and resolved in a thoughtful and open manner. Few of us expected to face complex ethical and philosophical decisions when we embarked on a technology career, but it’s worth acknowledging the possibility and planning a resolution process for when we do.
Also see:
Ethics should be at the core of cybersecurity: Former cyber defence head
Big data ethics is a board-level issue
Researchers investigate the ethics of the Internet of Things
The tough questions of ethical content creation in VR