The AI Disclosure Dilemma: How to Tell People You're Using AI (Without Freaking Them Out)

So, you just used an AI to do something brilliant. 🤖

Maybe it transcribed a dozen user interviews in the time it took to make coffee. Or perhaps it synthesized a mountain of open-ended survey feedback into stunningly accurate themes. You’re feeling efficient, innovative, and maybe just a little bit like you have superpowers.

Then the feeling hits and the guilt sets in. You worry that you’re somehow cheating yourself, your brain, and your audience. A little voice in the back of your head whispers, "...do I need to tell anyone about this?"

Welcome, my friend, to the AI Disclosure Dilemma: you’re damned if you do, and damned if you don’t.

The Challenge: A Complex Web of Rules and Expectations

Researchers are facing a rapidly evolving set of expectations. Groundbreaking regulations like the EU AI Act, along with pioneering state-level legislation in California and Colorado, are establishing clear legal mandates for AI transparency. These rules require organizations to inform users when they are interacting with AI and to be exceptionally clear about how high-stakes decisions are made.

Beyond compliance, our participants and stakeholders expect honesty. Researchers at the University of Arizona have identified a real phenomenon they call the “transparency dilemma”: disclosing that you used AI can sometimes lead people to trust you less (Schilke & Reimann, 2025). The challenge is creating disclosures that are not only legally sound but also psychologically effective, building confidence rather than eroding it.

(For the record, getting "caught" is way, way worse.)

The Non-Negotiable Path Forward

So, what are we supposed to do? Ignore the AI elephant in the room and hope no one asks?

Not a chance. Failing to disclose isn’t just unethical; in a growing number of jurisdictions, it’s also illegal. With new regulations such as the EU AI Act coming into force, non-disclosure is quickly becoming a major compliance red flag (EU Artificial Intelligence Act, n.d.).

The bottom line is that disclosure is not optional.

The real question is: how do we do it right?


Your Disclosure Cheat Sheet: What to Say and to Whom

After diving deep into the research (check out the curated NotebookLM here), it's clear that the most effective disclosures are tailored to their audience. Here’s what you need to cover.

 

For Your Research Participants (The Human Element)

When you're talking to the people whose data you're collecting, keep it simple, clear, and focused on them. Your consent form should answer:

  • How will AI be involved? Are you using it to analyze their feedback after the fact, or will they be interacting with an AI during the study? Be clear about the role it plays.

  • How is my data being protected? This is non-negotiable. Spell out how their data will be anonymized, where it will be stored, and who can access it.

  • Will my data be used to train the AI? This is a big one. Be upfront about whether their anonymized data will help improve the model. If so, also state whether they can opt out.

 

For Your Stakeholders (The Credibility Check)

When you're presenting your findings to your team, leadership, or the public, your goal is to build confidence in your methodology. Your disclosure needs to show rigor, not shortcuts. Make sure you cover:

  • What tools were used? Be specific. "AI" is vague; "GPT-4o" is transparent.

  • What was the AI's exact job? Did it transcribe interviews? Did it generate thematic codes? Did it create highlight reels? Detail the specific tasks.

  • What was the level of human oversight? This is the most important part. Stating that "all AI-generated outputs were reviewed, refined, and validated by a human" is the single most powerful way to build trust and show that the AI was a tool, not the author of the insights.

 

For Academic Publishers (The Reproducibility Check)

Based on the AID Framework developed by Kari Weaver (Weaver, 2024), when you're preparing your findings for publication and peer review, your goal is to build confidence in your methodology and provide all the information necessary to reproduce your work. Cover everything in the stakeholder checklist above, plus:

  • How were privacy and security handled? Was any participant information shared with public models at any point?


Tired of Wrestling with the Wording? I Built a Tool for That.

Let’s be honest: remembering all these rules and tailoring the language for different audiences can feel like a full-time job. I grew tired of second-guessing myself, so I collaborated with AI to build a solution that makes it easier for the whole community.

Introducing the UXR AI Disclosure Drafter.

Yes. This was “vibe-coded” (sidenote: I really dislike the vibification of everything).

It’s a simple, free tool that turns all this complexity into a straightforward, conversational workflow. In just a few clicks, it helps you generate a clear, objective, and professionally worded disclosure statement tailored for:

  • Research Participants

  • Internal & External Stakeholders

  • Academic Publications (aligned with the AID Framework)

No more awkward phrasing or compliance anxiety. Just a clear, confident statement that builds trust and lets you get back to the work that matters.
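
If you’re curious how a drafter like this can work under the hood, the core idea is simply structured inputs mapped to audience-appropriate wording. Here’s a minimal sketch of that idea in Python. To be clear, this is my illustrative assumption, not the tool’s actual code; every name in it (DisclosureInputs, build_disclosure, the example wording) is hypothetical.

```python
# Minimal sketch only -- not the actual UXR AI Disclosure Drafter code.
from dataclasses import dataclass


@dataclass
class DisclosureInputs:
    tools: list               # e.g., ["GPT-4o"]
    tasks: list               # e.g., ["transcribe interviews"]
    human_oversight: str      # how AI outputs were reviewed and validated
    data_protection: str      # anonymization / storage details
    used_for_training: bool   # whether participant data helps train the model


def build_disclosure(inputs, audience):
    """Assemble an audience-specific AI disclosure statement."""
    tools = ", ".join(inputs.tools)
    tasks = " and ".join(inputs.tasks)
    if audience == "participants":
        training = (
            "Your data will help improve the AI model, and you can opt out at any time."
            if inputs.used_for_training
            else "Your data will not be used to train any AI model."
        )
        return (
            f"We use AI tools ({tools}) to {tasks}. "
            f"{inputs.data_protection} {training}"
        )
    # Stakeholders and academic publishers get a methodology-focused statement.
    return (
        f"AI tools used: {tools}. Tasks performed by AI: {tasks}. "
        f"Human oversight: {inputs.human_oversight} "
        f"Data handling: {inputs.data_protection}"
    )


# Example: a participant-facing statement.
print(build_disclosure(
    DisclosureInputs(
        tools=["GPT-4o"],
        tasks=["transcribe interviews", "draft thematic codes"],
        human_oversight="All AI-generated outputs were reviewed, refined, and validated by a researcher.",
        data_protection="All data is anonymized before it is processed.",
        used_for_training=False,
    ),
    audience="participants",
))
```

The point of the sketch is the shape of the workflow, not the wording: answer a handful of structured questions once, and the statement gets assembled for whichever audience needs it.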

Ready to give it a try? Check it out here >

I believe that transparency isn't a chore; it's a trust-building opportunity. It’s how we show respect for our participants and demonstrate the integrity of our work. Let's continue to build a future where innovation and trust go hand in hand.


Resources

  1. Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 188, 104405. https://doi.org/10.1016/j.obhdp.2025.104405

  2. EU Artificial Intelligence Act. (n.d.). Retrieved August 3, 2025, from https://artificialintelligenceact.eu/

  3. California Legislature. (2023). A.B. 331. Automated decision tools. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB331

  4. Colorado General Assembly. (2024). S.B. 24-205. Concerning consumer protections for artificial intelligence. https://leg.colorado.gov/bills/sb24-205

  5. Weaver, K. (2024). The Artificial Intelligence Disclosure (AID) Framework: An Introduction. College & Research Libraries News, 85(10), 407. https://doi.org/10.5860/crln.85.10.407