AI can predict the outcome of human rights trials, but should it?

A new study shows that AI has been 79% accurate in predicting the decisions of the European Court of Human Rights. Here's how it works, and why we still need human input to prevent bias.

Image: iStockphoto/the-lightwriter

When we think of AI, we often think of the tech that it powers—like Google Maps, Alexa, and driverless cars. But AI has implications far beyond the tools we use for convenience at home, or efficiency in the office: It can make predictions about human decision-making.

A recent study, published in PeerJ Computer Science, shows how an AI algorithm developed by researchers at University College London, the University of Sheffield, and the University of Pennsylvania was remarkably accurate in predicting judicial decisions. The algorithm examined cases from the European Court of Human Rights and predicted their outcomes with 79% accuracy.

The algorithm, touted as the first of its kind, analyzes case text using machine learning.

While the study's authors don't think the algorithm can replace human judges or lawyers, it could be "useful for rapidly identifying patterns in cases that lead to certain outcomes," Dr. Nikolaos Aletras, who led the study at University College London, wrote in a release. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights."

The research showed that court judgments "are highly correlated to non-legal facts rather than directly legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, 'realists' rather than 'formalists,'" according to the release.

According to the researchers, the language and topics of the cases were the most important predictors of the judgment. "The 'circumstances' section of the text includes information about the factual background to the case. By combining the information extracted from the abstract 'topics' that the cases cover and 'circumstances' across data for all three articles, an accuracy of 79% was achieved," the press release stated. (The three articles are Articles 3, 6, and 8 of the European Convention on Human Rights, which cover torture and degrading treatment, the right to a fair trial, and respect for private life.)
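
To make that concrete, here is a minimal sketch, in Python with scikit-learn, of the general kind of pipeline the paper describes: n-gram features extracted from the text of each judgment, fed to a linear classifier that predicts a violation or non-violation label. This is not the authors' code; the file name, column names, and parameter choices below are illustrative assumptions.

    # Minimal sketch of a text-classification pipeline in the spirit of the
    # study: n-gram features from the "circumstances" section of each
    # judgment, and a linear classifier predicting violation / non-violation.
    # The file name, columns, and settings are assumptions for illustration.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical dataset: one row per case, with the text of the
    # "circumstances" section and a 1/0 violation label.
    cases = pd.read_csv("echr_cases.csv")  # columns: circumstances, violation

    pipeline = make_pipeline(
        # Unigram-to-4-gram bag-of-words features, similar in spirit to the
        # n-gram features the paper reports.
        TfidfVectorizer(ngram_range=(1, 4), max_features=2000),
        # A linear SVM, a standard choice for sparse text features.
        LinearSVC(),
    )

    # 10-fold cross-validated accuracy: the usual way a headline figure
    # like the study's 79% would be estimated.
    scores = cross_val_score(pipeline, cases["circumstances"], cases["violation"], cv=10)
    print(f"Mean accuracy: {scores.mean():.2f}")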

The study, however, looks only at the official, text-based court judgments, not the arguments made in court.

Toby Walsh, AI professor at the University of New South Wales, said he is "unimpressed."

The outcomes, he said, are being predicted from the court's own summary of the judgment. Furthermore, even if the decision itself were ignored, "the summary is going to be inherently biased towards the decision, focusing on evidence and law that supports the decision."

SEE: New research shows that Swarm AI makes more ethical decisions than individuals (TechRepublic)

But beyond that, Walsh worries about the ethical issues around the work.

"Only humans and not machines should be given the task to judge on humans," he said. "Work that suggests we might one day replace judges with machines is profoundly misguided. We have seen other instances of this already. The COMPAS program in Florida was biased against black people, predicting that they would be more likely to reoffend than than do, potentially leading to judges give less probation and stricter bail terms to black people."

Other AI researchers agreed.

Roman Yampolskiy, head of the cybersecurity lab at the University of Louisville, was recently in South Korea, advising its Supreme Court on using AI. "My main message was to always have a human judge as the final deciding power, regardless of how well AI appears to perform in test cases," he said. "As the performance of AI systems improves beyond the reported 79% toward the level of human judges, it will be tempting to save costs by completely automating all components of the judiciary system, but it is a mistake to do so."

"If designed properly and supplied with clean training data, AI may improve on human judges in regards to certain types of bias, such as racism or sexism," said Yampolskiy. "However, AI will always be worse with regards to 'human values bias,' which gives preferential treatment to human life and human preferences over everything else, a bias which I think is highly desirable from a system with a power to impact or even take away human lives."

To be fair, the purpose of the study is not to replace judges, according to Vasileios Lampos, one of its authors. "An improved version of the tool proposed in our paper may be used to prioritize the administrative processing of cases with a clear 'Violation' or 'Non-Violation' prior, in order to reduce more timely the long queue of cases that have not been tried," he said.

Also, he pointed out that this is "just a proof-of-concept paper."

More data is required, he said, before "this proof-of-concept tool can be translated to anything of an actual substance and usability. At the same time, law experts need to establish the right platform in order to make such technological advancements usable."

Even so, "we need to be very, very careful in developing computer programs in areas like this," said Walsh. "And even more careful in interpreting their results and deciding how society uses them."


About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
