Anthropic Apologizes for AI-Generated False Citation in Copyright Case
Anthropic's lawyer has apologized for using a fabricated legal citation generated by the company's AI chatbot, Claude, in the company's ongoing copyright dispute with music publishers. The error, disclosed in a Northern California court filing, highlights the pitfalls of relying on AI in legal proceedings.
According to Anthropic, Claude hallucinated the citation, producing an inaccurate title and incorrect authors. The company's lawyers admitted that their manual citation check failed to catch this error, as well as other inaccuracies introduced by Claude.
Anthropic has apologized for the mistake, characterizing it as an honest citation error rather than an intentional fabrication. The filing responds to accusations from Universal Music Group and other music publishers, who allege that Anthropic's expert witness used Claude to cite non-existent articles in her testimony.
AI in Legal Proceedings Under Scrutiny
This incident is the latest in a series of legal missteps involving AI. Recently, a California judge criticized law firms for submitting AI-generated false research. In another case, an Australian lawyer was found to have used ChatGPT to prepare court documents, and the chatbot produced faulty citations.
Despite these setbacks, investment in legal tech continues. Harvey, a startup that uses AI to assist lawyers, is reportedly seeking funding at a $5 billion valuation.
The music publishers' lawsuit is part of a broader conflict between copyright holders and tech companies over the use of copyrighted material to train generative AI models. The episode underscores growing concerns about the reliability and ethical implications of AI in legal contexts.