Will AI Be Able to Replace Judges Someday? Ethics, Biases and Human Emotions


An Israeli company, AI21 Labs, has created an Artificial Intelligence modeled on one of the most renowned U.S. Supreme Court Justices, the late Ruth Bader Ginsburg.

Read the news in The Washington Post: «This AI model tries to re-create the mind of Ruth Bader Ginsburg»

This AI was trained on the Justice's opinions, interviews, and rulings from her more than 27 years on the bench. It is offered as a chatbot that answers questions from the legal field.

Several professionals and organizations have run tests to see how accurate the chatbot's answers are (that is, whether it responds the way Ruth Bader Ginsburg would have). Paul Schiff Berman, who clerked for Ginsburg, asked the AI some questions and concluded that the answers left a lot to be desired. The Xataka team also conducted its own tests, asking the AI about privacy issues, labor rights, and murder, and found the answers it gave quite logical.

You can also query the RBG AI yourself to test the tool's accuracy. Questions must be asked in English, and the AI answers yes, no, or maybe, with a brief justification.

This AI arrives at a very controversial moment for the ethics of Artificial Intelligence. Although tools based on this technology are already being used in the justice system, they serve only as support: for example, algorithms that estimate a convicted offender's probability of recidivism.
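To make the idea of such a support tool concrete, here is a minimal sketch of a recidivism-style risk score. It is not any real system: the feature names and weights are invented for illustration, standing in for parameters a model might learn from historical case data.

```python
import math

def risk_score(prior_offenses: int, age: int, months_since_release: int) -> float:
    """Return a probability-like recidivism risk score in [0, 1].

    Toy logistic model: the features and weights below are hypothetical,
    chosen only to illustrate how such a support tool produces a number.
    """
    z = 0.45 * prior_offenses - 0.03 * age - 0.02 * months_since_release + 0.5
    return 1.0 / (1.0 + math.exp(-z))

# A judge would see this score only as one piece of supporting information,
# never as a decision in itself.
score = risk_score(prior_offenses=3, age=30, months_since_release=6)
print(f"estimated recidivism risk: {score:.2f}")
```

The output is a probability-like number that a court officer could weigh alongside everything else in the case file.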

This makes us wonder whether these support tools will at some point become something more. On the one hand, this technology could solve the problem of inconsistent criteria between judges, because it is not affected by human beliefs or emotions. One might argue that its results would be more objective, being based on data, but: is AI really impartial?

In my opinion, we should not lose sight of the fact that the legal field, like many others, is constantly evolving. Machine learning models are trained on past historical data, so they are strongly conditioned to treat as “normal” behaviors and laws that are now obsolete. In addition, the impact of an incorrect decision is very high, so these models would need an accuracy very close to 100%.
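The point about historical data can be shown with a minimal sketch. The data and case types below are invented: a model that simply copies the majority outcome in past rulings will keep reproducing those rulings even after the law governing a case type is reformed.

```python
from collections import Counter

# Hypothetical historical rulings: (case_type, outcome) pairs.
historical_rulings = [
    ("type_a", "guilty"), ("type_a", "guilty"), ("type_a", "not_guilty"),
    ("type_b", "guilty"), ("type_b", "guilty"), ("type_b", "guilty"),
]

def train_majority_model(data):
    """Learn the most frequent historical outcome for each case type."""
    by_type = {}
    for case_type, outcome in data:
        by_type.setdefault(case_type, Counter())[outcome] += 1
    return {t: counts.most_common(1)[0][0] for t, counts in by_type.items()}

model = train_majority_model(historical_rulings)
# Even if the law governing "type_b" cases is later reformed, the model
# keeps predicting the pre-reform outcome: it only knows the past.
print(model["type_b"])
```

Real models are far more sophisticated than this majority-vote toy, but the underlying limitation is the same: they can only extrapolate from what has already happened.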

I have no doubt that these algorithms can help judges make decisions by providing them with objective information about what has happened historically. But they should remain just that: one more tool to support professionals in their decision making.


Daniel Herrero, Head of Artificial Intelligence at decide4AI
