MELBOURNE, Australia — A senior lawyer in Australia has issued an apology to a judge after submitting documents in a murder case that contained fabricated quotes and nonexistent legal precedents generated by artificial intelligence. This incident, which occurred in the Supreme Court of Victoria, highlights ongoing issues with AI's role in legal proceedings worldwide.

Defense attorney Rishi Nathwani, a King’s Counsel, accepted "full responsibility" for the inaccuracies in submissions filed on behalf of a teenager charged with murder. "We are deeply sorry and embarrassed for what occurred," Nathwani, speaking for the defense team, told Justice James Elliott on Wednesday.

The errors caused a 24-hour delay in a case that Justice Elliott had hoped to resolve promptly. On Thursday, he ruled that Nathwani’s client, a minor who cannot be named, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott remarked, stressing that the court’s ability to rely on the accuracy of counsel’s submissions is fundamental to the fair administration of justice.

The submissions included fabricated quotes attributed to a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. Elliott’s associates discovered the discrepancies when they could not locate the cited cases and requested copies from the defense. The defense lawyers later admitted that the citations "do not exist" and that their documents contained "fictitious quotes." They explained that they had verified the initial citations and mistakenly assumed the rest were accurate as well.

The submissions were also shared with prosecutor Daniel Porceddu, who did not verify their accuracy. Justice Elliott pointed out that the Supreme Court had issued guidelines last year regarding the use of AI by lawyers. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," he stated.

The specific AI system the lawyers used has not been disclosed in court documents. The incident echoes a similar case in the United States in 2023, in which a federal judge fined two lawyers and a law firm $5,000 after they submitted fictitious legal research generated by ChatGPT in an aviation injury claim. Judge P. Kevin Castel found that the lawyers had acted in bad faith but credited their apologies and corrective actions in deciding against harsher sanctions.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal attorney to U.S. President Donald Trump. Cohen accepted responsibility, saying he had not realized that the Google tool he was using for legal research could produce such fabrications.

In June, British High Court Justice Victoria Sharp warned that presenting false material as genuine could amount to contempt of court or, in the most serious cases, perverting the course of justice, which carries a maximum sentence of life in prison.