The potentially dangerous and harmful implications of biases programmed in algorithms need to be addressed and possibly regulated. Who can we trust to oversee such regulations?
Jack Wallen, a respected TechRepublic columnist, has been promoting the Open Source Initiative (OSI) for as long as I have been writing for TechRepublic—in other words, a long time. His argument is simple: Open source is open; there are no secrets.
Those who believe in open source value the ability to verify for themselves that code is correct, does what is advertised, and only what is advertised. Like open-source pundits, some individuals are now suggesting that the same transparency should apply to algorithms.
Simply put, algorithms use past data to predict what the future might look like. Algorithms are cropping up in most industries, and they are helping shape decisions that affect all of us every day.
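The core idea—a rule learned from past data and applied to new cases—can be illustrated with a minimal, entirely hypothetical sketch. The loan records, field names, and 0.5 threshold below are invented for illustration; no real system works this simply.

```python
# Hypothetical sketch: an "algorithm" in this sense is just a rule
# derived from historical records. Here, past loan outcomes are used
# to predict whether a new applicant is likely to repay.
past_loans = [
    {"income": "high", "repaid": True},
    {"income": "high", "repaid": True},
    {"income": "low",  "repaid": False},
    {"income": "low",  "repaid": False},
    {"income": "low",  "repaid": True},
]

def repayment_rate(records, income_band):
    # Fraction of past borrowers in this income band who repaid.
    matches = [r["repaid"] for r in records if r["income"] == income_band]
    return sum(matches) / len(matches) if matches else 0.0

def predict_repays(records, income_band, threshold=0.5):
    # Predict the future from the past: approve if the historical
    # repayment rate for this band clears the (invented) threshold.
    return repayment_rate(records, income_band) >= threshold

print(predict_repays(past_loans, "high"))  # True  (2 of 2 repaid)
print(predict_repays(past_loans, "low"))   # False (1 of 3 repaid)
```

The point of the sketch is that the prediction is only as good—and only as fair—as the historical data behind it.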
SEE: Bias in machine learning, and how to stop it (TechRepublic)
The potential impact of biases in algorithms
Most people are familiar with Google's powerful search engine. One task the search engine's algorithm performs is auto-completing the search box with likely queries as text is entered. When the user hits the return key, the algorithm ranks the matching websites and lists them in order—a simple example of a powerful tool at work.
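The ranking step can be sketched in a few lines. This is a toy model only: the page data, signal names, and weights below are invented, and real search engines combine hundreds of signals.

```python
# Hypothetical sketch of ranking: each matching page gets a relevance
# score, and results are listed highest-score first. The weights here
# are invented for illustration.
pages = [
    {"url": "a.example", "term_matches": 3, "inbound_links": 10},
    {"url": "b.example", "term_matches": 5, "inbound_links": 2},
    {"url": "c.example", "term_matches": 1, "inbound_links": 60},
]

def score(page):
    # Invented weighting: text relevance plus a popularity signal.
    return 2.0 * page["term_matches"] + 0.1 * page["inbound_links"]

ranking = sorted(pages, key=score, reverse=True)
print([p["url"] for p in ranking])  # ['b.example', 'c.example', 'a.example']
```

Note that even here the designer's choices—which signals to use and how to weight them—determine what users see first.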
Daniel Saraga, head of science communication at the Swiss National Science Foundation, asks in a recent Phys.org column, Should algorithms be regulated? For instance, think about a driverless car and its ability to recognize obstacles in the road: "The control algorithm has to decide whether it will put the life of its passengers at risk or endanger uninvolved passers-by on the pavement."
Because most algorithms are proprietary "closed source," the answer to who is put in jeopardy—the driver or a passerby—is unknown, and it carries the biases of the people who designed the algorithm-based control system. "They (algorithms) do not have prejudices and are unemotional," writes Alan Reid, senior lecturer in law at Sheffield Hallam University in this Conversation column. "But algorithms can be programmed to be biased or unintentional bias can creep into the system."
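How does unintentional bias creep in? One common route is historical data that already encodes discrimination. The following is a deliberately simplified, hypothetical illustration—the hiring records and the neighborhood attribute are invented—showing how a rule that never mentions a protected trait can still reproduce past bias through a correlated proxy.

```python
# Hypothetical illustration of unintentional bias: past hiring data
# reflects earlier discrimination, so a "neutral" rule learned from it
# repeats the pattern even though the code mentions no protected trait.
past_hires = [
    {"neighborhood": "north", "hired": True},
    {"neighborhood": "north", "hired": True},
    {"neighborhood": "north", "hired": True},
    {"neighborhood": "south", "hired": False},
    {"neighborhood": "south", "hired": False},
    {"neighborhood": "south", "hired": True},
]

def hire_rate(records, hood):
    rows = [r["hired"] for r in records if r["neighborhood"] == hood]
    return sum(rows) / len(rows)

def screen(applicant, records, threshold=0.5):
    # "Neutral" rule: favor neighborhoods with high past hire rates.
    # If neighborhood correlates with, say, ethnicity, the bias in the
    # historical data passes straight through to new decisions.
    return hire_rate(records, applicant["neighborhood"]) >= threshold

print(screen({"neighborhood": "north"}, past_hires))  # True
print(screen({"neighborhood": "south"}, past_hires))  # False (1 of 3 hired)
```

Nothing in the rule is explicitly prejudiced, which is exactly why such bias is hard to spot without access to the algorithm and its training data.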
Reid offers examples of where algorithmic bias could have a significant impact:
- Corporations make decisions on how to treat customers and company employees using information provided by algorithms.
- Government organizations decide how to distribute services or dole out justice based on the output of data-driven algorithms.
It is easy to see the amount of influence algorithms can and will have over our lives. But algorithms are, for the most part, proprietary. "Algorithms are usually commercially sensitive and highly lucrative," notes Reid. "Corporations and government organizations will want to keep the exact terms of how their algorithms work a secret."
Reid adds that algorithms are usually protected by patents and confidentiality agreements, making it nearly impossible to obtain an algorithm's inner workings.
SEE: Special report: How to implement AI and machine learning (free PDF) (TechRepublic)
Should governments regulate algorithms?
As with proprietary software, getting source material for algorithms is likely off the table. Saraga brings up an interesting point in his Phys.org column: Should governments step in and regulate algorithms? He interviews two experts: one for and one against regulation.
Markus Ehrenmann of Swisscom is for regulation, saying:
"People have a right to an explanation about the decisions that affect them. And they have a right not to be discriminated against. This is why we have to be in a position to comprehend the decision-making processes of algorithms and, where necessary, to correct them."
Mouloud Dey of SAS agrees that algorithms potentially open to inappropriate use should be audited. However, he adds:
"Creativity can't be stifled nor research placed under an extra burden. Our hand must be measured and not premature. Creative individuals must be allowed the freedom to work, and not assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program at fault but the way it is used."
We deserve to know how algorithms affect us, yet most of us are not capable of verifying one. Those who can verify an algorithm's code must be trusted to determine whether it is accurate, does what is advertised, and only what is advertised.
What do you think? Should algorithms be regulated and, if so, by whom? Share your opinion in the article discussion.
- Decision-making algorithms: Is anyone making sure they're right? (TechRepublic)
- Data-driven policy and commerce requires algorithmic transparency (TechRepublic)
- Algorithms can be racist: Why CXOs should understand the assumptions behind predictive analytics (TechRepublic)
- 23 principles for beneficial AI: Tech leaders establish new guidelines (TechRepublic)
- Can AI really be ethical and unbiased? (ZDNet)
- For driverless cars, a moral dilemma: Who lives or dies? (Phys.org)
- Ethics Policy (Tech Pro Research)