The Move Towards Explainable Artificial Intelligence and its Potential Impact on Judicial Reasoning

By Irene Ng (Huang Ying)

In 2017, the Defense Advanced Research Projects Agency (“DARPA”) launched a five-year research program on the topic of explainable artificial intelligence.[1] Explainable artificial intelligence, also known as XAI, refers to an artificial intelligence system whose decisions or output can be explained to and understood by humans.

The growth of XAI as a field of artificial intelligence research is noteworthy considering the current state of AI research, in which decisions made by machines are opaque in their reasoning and, in several cases, not understood even by their human developers. This is also known as the “black box” of artificial intelligence: input is fed into the “black box”, an output is produced by machine learning techniques, and yet no explanation is available as to why the output is what it is.[2] The problem is well documented – there have been several cases in which machine learning algorithms made decisions that left their own developers puzzled as to how those decisions were reached.[3]
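To make the “black box” concrete, the following is a minimal, purely illustrative sketch in Python (using the scikit-learn library, with invented case features and labels that are assumptions for illustration only, not drawn from any real system) of how a trained model returns a verdict-like output with no accompanying explanation:

```python
# Purely illustrative sketch of an opaque "verdict" classifier.
# The feature names, training data and labels are invented for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical past cases: [severity_of_harm, prior_offences, mitigating_factors]
X_train = [
    [3, 0, 2],
    [8, 2, 0],
    [5, 1, 1],
    [9, 3, 0],
]
y_train = [0, 1, 0, 1]  # 0 = acquit, 1 = convict (invented labels)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_case = [[7, 1, 0]]
print(model.predict(new_case))        # a bare label, e.g. [1]
print(model.predict_proba(new_case))  # class probabilities, still no legal reasoning
# The model offers no account of *why* it reached this output: that is the "black box".
```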

The parallel interest in the use of artificial intelligence in judicial decision-making makes it worth considering how XAI will influence the development of an AI judge or arbitrator. Research into the use of AI for judicial decision-making is not novel. It was reported in 2016 that a team of computer scientists from UCL had developed an algorithm that “has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy”.[4] Much, however, remains to be said about the legal reasoning behind such an AI verdict.

The lack of explainable legal reasoning is, unsurprisingly, a thorny issue for those pressing for automated decision-making by machines. This sentiment has been echoed by several authors writing in the field of AI judges and AI arbitrators.[5] The opacity of an AI verdict’s conclusion is alarming for lawyers, especially where legal systems are predicated on the legal reasoning of judges, arbitrators or adjudicators. In certain fields of law, such as criminal law and sentencing, the lack of transparency in how an AI judge reaches a sentencing verdict poses further moral and ethical dilemmas.

Furthermore, as AI judges are trained on datasets, who ensures that those datasets are not inherently biased, so that the resulting AI verdicts are not biased against specific classes of people? The output generated by a machine learning algorithm is highly dependent on the data used to train the system. This has led to reports urging “caution against misleading performance measures for AI-assisted legal techniques”.[6]

Given the opacity of the legal reasoning offered by AI judges or AI arbitrators, how would XAI change the field of AI judicial decision-making? Applied to judicial decision-making, an XAI judge or arbitrator would produce a verdict together with reasoning for that decision. Whether such reasoning is legal or factual, or even logical, is not important at this fundamental level – what is crucial is that reasoning has been provided, and that it can be understood and, if disagreed with, challenged by lawyers. Such an XAI judge would at least function better in legal systems where appeals against a verdict are based on challenges to the reasoning of the judge or arbitrator.
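As a rough illustration of what “output plus reasoning” could look like, the sketch below continues the hypothetical features above and uses an interpretable linear model – just one of many possible XAI techniques, and not a description of any existing AI judge – to attach a per-feature contribution to the prediction, so that a human can see, and contest, which factors pushed the decision:

```python
# Illustrative sketch of an "explainable" verdict: an interpretable linear model
# whose per-feature contributions can be read, and therefore challenged, by a human.
# Feature names, data and labels remain invented.
from sklearn.linear_model import LogisticRegression

feature_names = ["severity_of_harm", "prior_offences", "mitigating_factors"]
X_train = [
    [3, 0, 2],
    [8, 2, 0],
    [5, 1, 1],
    [9, 3, 0],
]
y_train = [0, 1, 0, 1]  # 0 = acquit, 1 = convict (invented labels)

model = LogisticRegression()
model.fit(X_train, y_train)

new_case = [7, 1, 0]
prediction = model.predict([new_case])[0]

# A crude "explanation": each feature's value multiplied by its learned weight.
contributions = {
    name: value * weight
    for name, value, weight in zip(feature_names, new_case, model.coef_[0])
}
print("prediction:", prediction)
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
# A lawyer could now dispute, say, the weight given to prior_offences --
# something a bare black-box output does not permit.
```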

This should also be seen in light of the EU’s upcoming General Data Protection Regulation (“GDPR”), under which a “data subject shall have the right not to be subject to a decision based solely on automated processing”,[7] and it remains uncertain at this point whether a data subject has the right to an explanation of the algorithm that made the decision.[8] For developers who are unable to explain the reasoning behind their algorithm’s decisions, this may prove to be a landmine, considering the tough penalties for flouting the GDPR.[9] It may thus amount to an implicit call to move towards XAI, especially for developers building AI judicial decision-making software that processes the personal data of EU citizens.

As the legal industry still grapples with the introduction of AI into its daily operations, such as the use of the ROSS Intelligence system,[10] developments in other fields of AI such as XAI should not go unnoticed. While AI judges and AI arbitrators are not commonplace at present, XAI may prove a better fit for the legal industry than traditional AI and machine learning methods, and AI judges or arbitrators built on XAI techniques may be more ethically and morally acceptable than those built on traditional, opaque methods.

Yet legal reasoning is difficult to replicate in an XAI system – the same set of facts can lead to several different views. Would XAI replicate these multi-faceted views, and explain them? Before we even start to ponder such matters, however, perhaps we should first get the machine to give an explainable output that we can at least agree or disagree with.

[1] David Gunning, Explainable Artificial Intelligence (XAI), online: https://www.darpa.mil/program/explainable-artificial-intelligence.

[2] Understanding Black Box Artificial Intelligence, online: https://www.sentient.ai/blog/understanding-black-box-artificial-intelligence/.

[3] Will Knight, The Dark Secret at the Heart of AI, April 11, 2017, online: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

[4] Chris Johnston and agencies, Artificial intelligence ‘judge’ developed by UCL computer scientists, October 24, 2016, online: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.

[5] See José Maria de la Jara & Others, Machine Arbitrator: Are We Ready?, May 4, 2017, online: http://arbitrationblog.kluwerarbitration.com/2017/05/04/machine-arbitrator-are-we-ready/.

[6] AI Now 2017 Report, online: https://assets.ctfassets.net/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf.

[7] Article 22, General Data Protection Regulation.

[8] GDPR and its Impacts on Machine Learning Applications, online: https://medium.com/trustableai/gdpr-and-its-impacts-on-machine-learning-applications-d5b5b0c3a815.

[9] Penalties under the GDPR can reach, at the lower tier, EUR 10 million or 2% of total worldwide annual turnover (whichever is higher) and, at the upper tier, EUR 20 million or 4% of total worldwide annual turnover (whichever is higher). See Article 83, General Data Protection Regulation.

[10] ROSS Intelligence, online: https://rossintelligence.com/.