
Image by Wesley Tingey, from Unsplash
AI Assists Dutch Court in Drafting Criminal Verdicts
The Rotterdam District Court recently ran an experiment in which AI was used to help draft the sentencing motivation in a criminal case.
In a rush? Here are the quick facts:
- Rotterdam Court tested AI as a writing aid for criminal verdicts.
- AI helped draft sentencing motivation, but judges made final decisions.
- No private case data was shared with AI during the trial.
The AI was not involved in the decision-making process, but was only used as a tool to help prepare the section explaining the court’s reasoning behind the punishment, as explained by the judiciary body, de Rechtspraak.
Immediately after the experiment, the court notified all parties involved that AI had been used in the process and also released an official statement about the experiment, as reported by the NL Times.
The AI used only general case information to generate draft documents, and the judges, together with the clerk, reviewed these drafts before finalizing the sentencing motivation. The AI played no part in legal decision-making or in evaluating the facts.
According to the judges who used it, the AI tool sped up drafting and improved the structure of the text, NL Times explains.
The court personnel who conducted the trial reported favorable outcomes, but they maintained reservations about using public AI systems in judicial procedures, as noted by de Rechtspraak.
They observed that public AI tools accept only limited input, which diminishes their performance. In addition, sharing case details with public AI systems creates ethical and legal risks, since sensitive information could be disclosed.
To address these concerns, the judges proposed an internal, secure AI system that would operate exclusively within judicial processes.
Such a system could handle confidential information securely, so that more detailed and relevant case data could be entered. One possible approach would be to build a secure database of all publicly available criminal verdicts from the judiciary’s website, which could help the AI generate more accurate sentencing drafts, as noted by de Rechtspraak.
However, deploying AI within judicial systems carries risks of its own. AI-generated “hallucinations,” in which the system produces incorrect or fabricated information, have already become a legal problem in U.S. courts. In one recent example, lawyers representing Walmart cited fake, AI-generated cases in a lawsuit, exposing them to potential sanctions. As a result, the legal community remains increasingly skeptical about the reliability of AI systems.