Anthropic has admitted that its AI chatbot, Claude, generated a fake legal citation used in the high-stakes copyright lawsuit brought against it by major music publishers, and that its legal team missed the error.
In a court filing submitted Thursday in Northern California, Anthropic’s lawyers acknowledged the error, stating the flawed citation had “an inaccurate title and inaccurate authors.” The citation was generated by Claude, the company’s own generative AI model, and it slipped through despite a manual citation check. Anthropic described the incident as “an honest citation mistake and not a fabrication of authority.”
The issue came to light after lawyers for Universal Music Group and other music publishers accused Anthropic’s expert witness, Olivia Chen, of referencing non-existent articles in her testimony. U.S. Magistrate Judge Susan van Keulen then ordered Anthropic to formally respond to the allegations.
This legal misstep highlights the growing risks of relying on AI in sensitive settings like courtrooms — especially as “hallucinations,” or AI-generated falsehoods, remain a persistent problem.
And Anthropic isn’t alone. Just days earlier, a California judge criticized law firms for submitting “bogus AI-generated research,” and earlier this year an Australian lawyer was caught relying on ChatGPT in a separate case after the chatbot likewise produced flawed citations.
Yet despite repeated failures, interest in AI for legal work is booming. Harvey, a fast-growing legal tech startup that uses generative AI to assist lawyers, is reportedly seeking over $250 million in new funding, aiming for a $5 billion valuation.
As lawsuits over copyright and AI training data continue to mount, the stakes — and the scrutiny — surrounding AI-generated legal content are only rising.