Law
Apparent AI mistakes force two judges to retract separate rulings

Source
Archive

Two U.S. judges in separate federal courts scrapped their rulings last week after lawyers alerted them to filings that contained inaccurate case details or seemingly "hallucinated" quotes that misquoted cited cases, the latest in a string of errors reflecting the growing use of artificial intelligence in legal research and submissions.

In New Jersey, U.S. District Judge Julien Neals withdrew his denial of a motion to dismiss a securities fraud case after lawyers revealed the decision relied on filings with "pervasive and material inaccuracies."

The filing pointed to "numerous instances" of made-up quotes submitted by attorneys, as well as three separate instances in which the outcome of a lawsuit appeared to have been misstated, prompting Neals to withdraw his decision.

In Mississippi, U.S. District Judge Henry Wingate replaced his original July 20 temporary restraining order, which paused enforcement of a state law blocking diversity, equity and inclusion programs in public schools, after lawyers notified him of serious errors in the attorneys' submissions.

They informed the court that the decision "relie[d] upon the purported declaration testimony of four individuals whose declarations do not appear in the record for this case."

Wingate subsequently issued a new ruling, though lawyers for the state have asked that his original order be placed back on the docket.

"All parties are entitled to a complete and accurate record of all papers filed and orders entered in this action, for the benefit of the Fifth Circuit’s appellate review," the state attorney general said in a filing.

A person familiar with Wingate's temporary order in Mississippi confirmed to Fox News Digital that the erroneous filing submitted to the court had used AI, adding that they had "never seen anything like this" in court before.

Neither the judge’s office nor the lawyers in question immediately responded to Fox News Digital’s requests for comment on the retracted New Jersey order, first reported by Reuters. It was not immediately clear whether AI was behind the erroneous submission in that case.

However, the errors in both cases, which attorneys quickly flagged and which prompted the judges to revise or retract their orders, come as the use of generative AI continues to skyrocket in almost every profession, especially among younger workers.

In at least one of the cases, the errors bear the hallmarks of AI-style inaccuracies, including "ghost" or "hallucinated" quotes in filings and citations to incorrect or even nonexistent cases.

For bar-admitted attorneys, erroneous court submissions are not taken lightly. Under guidance from the American Bar Association, lawyers are responsible for the veracity of all information in their court filings, including any AI-generated material.

In May, a federal judge in California slapped law firms with $31,000 in sanctions for using AI in court filings, saying at the time that "no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material."

Last week, a federal judge in Alabama sanctioned three attorneys for submitting erroneous court filings that were later revealed to have been generated by ChatGPT.

Among other things, the filings in question included AI-generated quote "hallucinations," U.S. District Judge Anna Manasco said in her order, which also referred the lawyers to the state bar for further disciplinary proceedings.

"Fabricating legal authority is serious misconduct that demands a serious sanction," she said in the filing.

New data from the Pew Research Center underscores the rise of AI tools among younger users.


According to a June survey, roughly 34% of U.S. adults say they have used ChatGPT, the artificial intelligence chatbot, roughly double the share who said the same in June 2023.

The share of employed adults who use ChatGPT for work has spiked by 20 percentage points since June 2023, and adoption is even more widespread among adults under 30, 58% of whom say they have used the chatbot.
 
It's such a cliche at this point that one would think people would be ultra cautious and read over everything to check for errors, but then again, even people intelligent enough to pass the bar exam can be tech illiterate.
No, I wanna go back to playing Candy Crush!
 
Jensen Huang seething rn because he knows the only reason his company is valued at over $4T is that tech-illiterate boomer investors don't know any better. Time to say goodbye to your heavily inflated net worth and go back to catering to gamers, Jensen.
 
the latest in a string of errors that suggest the growing use of ~~artificial intelligence~~ incompetent laziness in legal research and submissions.
This is what AI will do: it will unmask the "experts" as anything but. They either never deserved their credentials in the first place, or they threw them away to get paid like experts without doing expert-tier work.
These fuckers are paid waaay too much, can do almost whatever they please, and they still can’t be arsed to do some work, but actually have chat fucking GPT write their rulings?!

The American elites, especially including all the judges, really need to be held to account.
It wasn't the Judges, it was the attorneys who argued their cases using caselaw that didn't exist and was completely AI generated. The Judges did the right thing and threw out a ruling that was built on the sloppy work of those who argued in front of them.

It's such a cliche at this point that one would think people would be ultra cautious and read over everything to check for errors, but then again, even people intelligent enough to pass the bar exam can be tech illiterate.
This is like the 5th or 6th time it's happened, too. You'd think after the first lawyer who did it got caught they'd shape up? But they just assumed they used the wrong prompts, I guess.... the entire profession needs to get a handle on this, fast, or they'll be out of work. Nobody is going to pay $500 an hour for AI slop if they can generate it at home for free.
 
It wasn't the Judges, it was the attorneys who argued their cases using caselaw that didn't exist and was completely AI generated. The Judges did the right thing and threw out a ruling that was built on the sloppy work of those who argued in front of them.
No. The judge did blindly accept and rule in their favor without checking any of those phantom cases or citations. It wasn't until after his ruling when the other side investigated it that it all came out.

I watch plenty of court videos and have seen many, many judges stop in the middle of proceedings to look up case law if they don't know it. This judge didn't do that and also didn't look any of it up in the days/weeks before rendering his decision. They were sloppy as fuck and it makes the whole court system look bad in return.
 
No. The judge did blindly accept and rule in their favor without checking any of those phantom cases or citations.
Any federal judge who does this should be disbarred. No exceptions. It will completely torpedo our legal system if fake cases make it into federal case law. The judge's decision will be cited in future cases, etc.
 
The worst part of an LLM when interacting with tech illiterate people is how it responds with absolute conviction. Lies and bullshit go over a lot better when delivered with a sense of authority; people tend to default to "it must know what it's doing."

Everything one of these tools delivers is in that tone, and even when you call out its bullshit, it will apologize but still deliver something incorrect. Add on top that it's only as competent as the volume of its training data.

In software development terms, it's pretty solid when it comes to SQL, since that has been static for decades, but on anything more mutable, with multiple minor and major versions, its reliability falls off a cliff, because it will build a response from stuff found in four different versions. And all of it is sloppy in quality as well.
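A minimal sketch of that stability point, using Python's stdlib sqlite3 (the specific table and values are made up for illustration): basic SQL written decades ago still runs verbatim today, whereas fast-moving library APIs drift between versions, so code generated from a mix of versions can fail outright.

```python
import sqlite3

# SQL this simple has been valid since SQL-92, which is why it's a
# comparatively safe target for code generation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filings (id INTEGER, cite TEXT)")
conn.execute("INSERT INTO filings VALUES (1, 'Smith v. Jones')")
rows = conn.execute("SELECT cite FROM filings WHERE id = 1").fetchall()
print(rows)  # [('Smith v. Jones',)]

# By contrast (illustrative example, not from the thread): pandas removed
# DataFrame.append in its 2.0 release, so generated code that blends
# pre-2.0 and 2.x idioms raises AttributeError on current versions.
```

The contrast is the point: the SQL half of the sketch will keep working unchanged, while the commented pandas example is exactly the kind of cross-version mix the comment above describes.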

Search engines not giving personalized answers made people actually stop and read to figure out whether the results were relevant to what they wanted to know, but an LLM will go "this is your answer," and anybody mildly lazy will go "yeah, sounds about right."
 
One small point of mitigation is that I'd bet a thousand KNUs that both times were the result of misplaced trust in some dipshit newly-graduated law clerk. Which opens an entirely different can of worms, but at least it doesn't mean the judge is personally generating opinions via AI.
 