
Attorney-client privilege was built for a human fiduciary relationship and does not extend to consumer AI platforms. In United States v. Heppner, a court treated AI conversations as third-party disclosures, not protected communications. Millions of people are now disclosing legal exposure to AI systems that owe them no fiduciary duty, carry no confidentiality obligation, and operate under terms of service that may make their inputs discoverable. The next frontier of “trusted AI” will not be about model quality. It will be about evidentiary defensibility, governance, confidentiality, and jurisdictional control.
AI is now embedded in legal workflows. Lawyers use it to summarize case law, draft arguments, review contracts, and analyze evidence. Clients increasingly turn to ChatGPT, Claude, and other AI systems before they ever speak to a lawyer.
That creates a real tension. Both sides of the legal relationship are now leaning on AI, but privilege law was built around a human fiduciary relationship.
The legal system is now being forced to answer a question it was never designed for:
What happens when people start treating AI systems like lawyers, strategists, therapists, and confidential advisors?
The Early Case Law Is Starting to Arrive
The first wave of AI-related legal decisions focused largely on hallucinated citations and improper filings. Courts in the United States and elsewhere have sanctioned lawyers for submitting fake cases generated by AI systems. Those rulings established an early principle:
Lawyers remain responsible for AI-generated work product.
A more profound issue is now emerging: attorney-client privilege.
One of the first major cases to confront this directly was United States v. Heppner in 2026. The case reportedly involved the use of Anthropic’s Claude AI system to analyze legal exposure and draft legal materials.
The court concluded that the AI-generated conversations and outputs were not protected by attorney-client privilege.
The reasoning matters. Claude was not a lawyer. No attorney-client relationship existed. The communications may not have remained confidential due to platform terms and provider access. Sharing AI outputs later with legal counsel did not retroactively create privilege.
The ruling treated the AI system as a third party rather than a protected legal intermediary.
That distinction matters enormously because attorney-client privilege depends heavily on confidentiality. In many jurisdictions, privilege can be waived if confidential legal communications are shared with outsiders.
Courts are now beginning to ask:
Is entering sensitive information into a consumer AI platform equivalent to voluntarily disclosing it to a third party?
If courts continue answering “yes,” millions of users may be waiving privilege every day without knowing it.
AI Changes Human Behaviour Before Law Adapts
One of the most important aspects of this issue is behavioural rather than technical.
People already interact with AI systems differently than traditional software. They confess fears, disclose legal risks, share business strategy, upload contracts, discuss employment disputes, and seek quasi-legal advice. In many cases, they are more candid with AI than they are in email or formal communications.
These systems are not governed by the same fiduciary obligations as lawyers.
Traditional attorney-client privilege evolved around licensed professionals, ethical duties, confidentiality obligations, professional discipline, and clearly understood relationships of trust.
Consumer AI platforms operate under a fundamentally different model: terms of service, cloud retention, vendor infrastructure, model training pipelines, logging systems, and multinational data processing.
The social behaviour has changed far faster than the legal framework. That gap is going to produce ugly outcomes for ordinary people who assumed they were talking in private.
Enterprise AI May Become Legally Distinct
An important divide is going to emerge between public consumer AI systems and enterprise or legal-grade AI environments.
Courts may eventually distinguish between entering sensitive information into a public chatbot and using a tightly governed enterprise AI system under legal supervision.
That distinction could depend on factors such as contractual confidentiality protections, isolated model environments, disabled retention and training, sovereign hosting, audit controls, and supervision by licensed counsel.
This has enormous implications for enterprise AI architecture.
The future of “trusted AI” will not be primarily about model quality or speed. It will depend on evidentiary defensibility, governance, confidentiality, and jurisdictional control.
In other words:
AI infrastructure itself is becoming part of legal risk management.
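To make that concrete, here is a minimal sketch of what such a governance layer could look like in code. Everything in it is a hypothetical assumption for illustration: the class names, policy fields, and region labels are invented, not any vendor’s actual API, and real controls would be set contractually and supervised by counsel.

```python
# Hypothetical sketch of an enterprise AI governance layer. All names
# (GovernancePolicy, GovernedAIClient, region labels) are illustrative
# assumptions, not a real vendor API.
import hashlib
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

@dataclass
class GovernancePolicy:
    allow_training: bool = False               # contractual no-training guarantee
    retention_days: int = 0                    # zero retention on the vendor side
    allowed_regions: tuple = ("eu-central",)   # sovereign-hosting constraint
    require_matter_id: bool = True             # tie every request to a supervised matter

@dataclass
class GovernedAIClient:
    policy: GovernancePolicy = field(default_factory=GovernancePolicy)

    def send(self, prompt: str, region: str, matter_id: str | None = None) -> None:
        # Jurisdictional control: refuse to route data outside approved regions.
        if region not in self.policy.allowed_regions:
            raise PermissionError(f"Region {region!r} not permitted by policy")
        # Supervision: refuse requests not tied to a counsel-supervised matter.
        if self.policy.require_matter_id and not matter_id:
            raise PermissionError("Request must reference a supervised matter")
        # Audit trail: record a hash of the prompt, never the prompt itself.
        audit_log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "matter_id": matter_id,
            "region": region,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        }))
        # The actual vendor call would go here, with no-training and
        # zero-retention terms asserted contractually (vendor-specific).

# A request to a non-approved region is blocked before any data leaves.
client = GovernedAIClient()
try:
    client.send("Summarize our exposure in the Smith dispute", region="us-east")
except PermissionError as exc:
    print(exc)
```

The point of the sketch is that the properties a court might eventually care about, confidentiality, supervision, auditability, and jurisdiction, live in the surrounding infrastructure rather than in the model itself.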
Discovery Risks May Be Far Larger Than Most Organizations Realize
The discovery implications are staggering.
People type things into AI systems they would never put into email, Slack, Teams, or formal memos.
But AI conversations may become discoverable records, evidence of intent, contemporaneous reasoning logs, or internal admissions.
Organizations are now generating entirely new classes of sensitive records at massive scale, often without fully understanding where they are stored, who can access them, how long they persist, or how courts may eventually treat them.
This is a new category of institutional risk that most governance frameworks are not prepared for.
The Larger Question
The deeper issue is not whether AI can assist legal work. It clearly can, and increasingly will.
The real question is whether our legal concepts of trust, confidentiality, and privilege can survive when human advisory relationships are partially replaced by probabilistic software systems operated by global technology vendors.
Attorney-client privilege was designed for a world where confidential advice came from humans bound by professional duties.
AI has introduced something different: systems that feel conversational, appear authoritative, and encourage disclosure, but may not legally protect the people using them.
The courts are only beginning to grapple with the consequences. The people typing into these systems are not waiting for them to catch up.
Frequently Asked Questions
Are conversations with ChatGPT or Claude protected by attorney-client privilege?
Generally no. Attorney-client privilege requires a relationship with a licensed attorney bound by professional duties. AI systems are not lawyers, and courts are beginning to treat AI providers as third parties. In United States v. Heppner, the court ruled that conversations with Claude were not privileged, even when later shared with counsel.
Can sharing AI outputs with my lawyer create privilege after the fact?
Probably not. Courts have so far declined to extend privilege retroactively. The original AI conversation typically already involved disclosure to a third party, which in most jurisdictions waives privilege. Sharing the outputs with a lawyer doesn’t undo that initial disclosure.
Are enterprise AI deployments treated differently than consumer chatbots?
Likely yes, over time. Courts may distinguish enterprise AI environments with contractual confidentiality, no-training and no-retention guarantees, sovereign hosting, audit controls, and licensed-counsel supervision from public consumer chatbots. The legal protection won’t come from the model. It will come from the architecture and governance around it.
Could my AI conversations be subpoenaed?
Yes. AI conversations stored by a vendor may be subject to subpoena, discovery requests, and law enforcement process, depending on jurisdiction and the vendor’s terms. People routinely disclose things to AI they would never put in email. Those records may now be evidence.
What should organizations do to manage AI legal risk?
Treat AI conversations like any other discoverable record. Move sensitive workflows onto enterprise environments with contractual confidentiality, no-retention and no-training settings, audit logging, and ideally sovereign hosting. Train staff on what not to share with consumer AI tools. Update governance frameworks to include AI conversations alongside email, Slack, and other communications systems.
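To make the last point concrete, a pre-submission filter for consumer AI tools might look like the minimal sketch below. The blocked patterns and function names are illustrative assumptions only; a real data-loss-prevention policy would be far broader and designed with counsel.

```python
# Hypothetical pre-submission filter for consumer AI tools. The patterns
# here are illustrative examples, not a complete or recommended policy.
import re

BLOCKED_PATTERNS = [
    re.compile(r"(?i)attorney[- ]client"),                   # privilege markers
    re.compile(r"(?i)privileged\s*(and|&)\s*confidential"),  # legend language
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # US SSN-shaped strings
    re.compile(r"(?i)settlement\s+(offer|strategy)"),        # dispute strategy
]

def allow_prompt(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(allow_prompt("Summarize this PRIVILEGED AND CONFIDENTIAL memo"))  # False
print(allow_prompt("Draft a polite scheduling email"))                  # True
```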