Warning: Using AI Could Hurt Your Legal Case


AI presents a new legal threat, and people who are involved in litigation, or contemplating it, need to be aware of it. You may have already heard about situations in which lawyers relied too heavily on AI and got in trouble with the court. If not, just Google "lawyer who used AI to write brief," and you can read case after case, in state after state, where lawyers have been sanctioned by courts for using AI to write their briefs. While there is no general prohibition on lawyers using AI, there are strict prohibitions against citing non-existent cases, misquoting prior case law, and the like. In the cases of the sanctioned lawyers, the AI "hallucinated" and gave them incorrect information, and the lawyers did not double-check its accuracy before submitting it to the court. Now there is a new danger: clients asking AI legal questions, either before or during litigation.

In a recent case, a judge ruled that questions and answers between a party and the AI tool Claude were not protected by the attorney-client privilege and that all the information exchanged had to be turned over to the attorneys on the other side of the case. Communications between a client and a lawyer are almost always protected by the attorney-client privilege and not subject to disclosure. Communications with non-lawyer third parties, however, are not protected. The judge reasoned that the defendant had voluntarily revealed confidential information to Claude, and that Claude's own terms of use make clear that the company may share the data it receives from users with third parties. Interestingly, the chatbot's terms also state that users should consult a qualified legal professional for legal advice.

Conversely, in another recent case, a judge ruled the exact opposite in a situation where a woman had used AI in her employment case against her former employer. There, the judge reasoned that the chatbot was a "tool," not a "person," and that using it therefore did not amount to disclosure to a third party.

Two different outcomes in two different courts, but the lesson is clear. Until courts or legislatures set defined rules on whether communications with AI are confidential, individuals considering litigation, or involved in it, should not use AI to discuss, research, or organize their case. Some law firms are cautioning clients who insist on using AI for their legal case to include language to the effect of "I am doing this research at the direction and request of my lawyer for my case against X."

Our advice is simple: do not use AI at all for case-related questions. You hire a lawyer so you can fully disclose all issues and receive confidential advice about your case from an experienced attorney. Do not get your legal advice from an AI program. And, respectfully, if you think you need to do your own research to double-check your lawyer's advice, you probably need to find another lawyer. To be sure, we often give advice that our clients do not particularly like; our job is to tell you what you need to hear, not what you want to hear. But if you think you need Claude or any other AI tool to verify your lawyer's accuracy, that is indicative of a deeper problem in your attorney-client relationship.

None of our lawyers is named Claude, but we are seasoned trial attorneys with decades of combined experience helping injury victims and their families. If you have questions about a potential claim, do not ask AI. Instead, call us for a free and confidential consultation. We handle all injury and death cases on a contingency basis, so we only get paid if we recover money for you. Call 615-742-4880 to get answers.