Can the Government Seize Your AI Prompts? (with Tiffany Eggers)

One Minute Matters Video Series

3.06.26

Can the government seize your AI prompts and potentially use them as evidence? Can opposing counsel force you to turn them over in discovery? Two federal courts answered these questions on the same day:

U.S. v. Heppner (S.D.N.Y.)

A criminal defendant argued that his Claude AI prompts were protected by attorney-client privilege and the work product doctrine. The district court judge disagreed: because the defendant created the prompts on his own (not at his attorney's direction), and because Claude's privacy policy permits disclosure to third parties, including the government, no privilege applied.

Warner v. Gil-baro (E.D. Mich.)

The magistrate judge ruled that a pro se plaintiff's ChatGPT prompts were protected work product and did not need to be disclosed to the opposing party. The court noted that work product protection is waived only when materials are disclosed to an adversary, or in a way that makes it likely they will reach an adversary's hands, and no such disclosure had occurred.

Key takeaways:

  • AI prompts are not necessarily privileged
  • In criminal cases, the government treats AI prompts like search engine queries (i.e., as evidence that can be seized)
  • Privilege depends on who created the prompts, why, and whether counsel was involved

The government has already obtained AI prompts as evidence in cases ranging from fraud to arson. If you're using AI, understand the risks, and consult counsel before assuming your prompts are protected.