AI writing adoption is a usage story.

Trust is an authorship and transparency story.

AI writing tools are being adopted faster than social norms around trust, authorship, and disclosure are settling. The result is not one trust problem, but several. Nonfiction writers face credibility and accuracy pressure. Corporate writers face workflow opacity and brand risk. Fiction writers face the hardest authorship questions of all.

AI use is rising faster than comfort with AI
Sources: Pew Research, Brookings Institution

Three writing contexts. Three different trust problems.

Nonfiction

  • Fear of hallucinations corrupting factual claims
  • Disclosure standards and editorial accountability
  • Reader trust tied directly to accuracy
Pew Research, 2024–25

Corporate

  • Hidden AI use inside production workflows
  • Brand and legal exposure
  • Workers worried about AI displacement
Pew Research, 2024–25

Fiction

  • Authorship, authenticity, and training data ethics
  • Readers expect human involvement
  • Disclosure changes whether the work feels worth reading
Wakefield / Wattpad, YouGov / Black Château

Reader expectations for AI in books

Three numbers from two independent surveys, all pointing in the same direction.

Readers still want human authorship and transparency
Sources: Wakefield Research for Wattpad (2024); YouGov for Black Château (2025)
Research note

Blind reading research complicates this picture. Controlled studies have found that lay readers cannot always reliably distinguish human from AI writing in literary excerpts, and do not always show a clear preference for the human-written text when reading blind. Expert readers are consistently harsher judges. The reader-attitude data above measures what people say they want, not what they can detect.

Disclosure affects trust. But not in one direction.

The journalism research does not all point the same way.

Disclosure affects trust, but not in one direction
Study findings — specific contexts, not universal percentages
Sources: Political Communication / Sage (2024); Gilardi et al. preregistered study (2024); YouGov for Black Château (2025)

The trust problem depends on who is writing and why.

Trust issues are not the same across writing contexts
Legend: high concern, medium, low, not primary

Five design principles that follow from this research

  1. Treat authorship as a product surface.

     Not a legal footnote. Who, or what, wrote the text is part of the product experience.

  2. Make AI participation inspectable.

     Users should be able to see where and how AI contributed, without hunting for it.

  3. Separate assistance from authorship.

     Helping a writer is not the same as writing for them. Products that blur this line create trust problems downstream.

  4. Support disclosure without making users guess.

     If the workflow produces AI-assisted content, disclosure should be an output of the tool, not an afterthought the user has to construct manually. A sketch of what such an output could look like follows after this list.

  5. Build for editorial review, not just generation speed.

     The bottleneck in trust-sensitive writing is not output volume. It is human judgment about what to use.

These are design conclusions drawn from the cited research, not universal product requirements.
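To make principles 2 and 4 concrete, here is a minimal sketch, in TypeScript, of what a tool-generated disclosure record could look like. Everything in it is a hypothetical illustration: the field names, the span model, and the summary wording are assumptions, not something drawn from the cited research.

  // Hypothetical per-document provenance record a writing tool could emit
  // alongside the text, so disclosure is generated rather than reconstructed
  // by the user after the fact.
  interface ContributionSpan {
    startOffset: number;                               // character range in the final document
    endOffset: number;
    origin: "human" | "ai_generated" | "ai_assisted";  // coarse authorship label
    model?: string;                                    // model identifier, if AI was involved
    reviewedByHuman: boolean;                          // whether a person accepted or edited this span
  }

  interface DisclosureRecord {
    documentId: string;
    spans: ContributionSpan[];
    generatedAt: string;                               // ISO 8601 timestamp
  }

  // Roll span-level data up into a short, human-readable disclosure line the
  // tool can attach to an exported document.
  function summarizeDisclosure(record: DisclosureRecord): string {
    const length = (s: ContributionSpan) => s.endOffset - s.startOffset;
    const total = record.spans.reduce((sum, s) => sum + length(s), 0);
    const aiChars = record.spans
      .filter(s => s.origin !== "human")
      .reduce((sum, s) => sum + length(s), 0);
    const unreviewed = record.spans
      .filter(s => s.origin !== "human" && !s.reviewedByHuman).length;
    const pct = total === 0 ? 0 : Math.round((aiChars / total) * 100);
    return `AI contributed to roughly ${pct}% of this text; ` +
      `${unreviewed} AI-written span(s) have not yet been human-reviewed.`;
  }

  // Example usage with made-up spans.
  const example: DisclosureRecord = {
    documentId: "draft-001",
    spans: [
      { startOffset: 0, endOffset: 800, origin: "human", reviewedByHuman: true },
      { startOffset: 800, endOffset: 1200, origin: "ai_generated", model: "example-model", reviewedByHuman: false },
    ],
    generatedAt: new Date().toISOString(),
  };

  console.log(summarizeDisclosure(example));

The point of the sketch is the design choice behind principle 4: the tool records authorship at the span level while the writer works, so the disclosure statement is an output it can generate on export rather than something the user has to reconstruct from memory.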


This artifact combines national surveys (Pew, Brookings), book-reader surveys (Wakefield Research for Wattpad, YouGov for Black Château), and academic disclosure studies (Political Communication / Sage; Gilardi et al.). Some measures are direct population percentages from nationally representative samples. Some are study findings from specific experimental contexts. Fiction and book-reader data measures reader attitudes and disclosure response — not book sales or consumption volume. The writer-type trust matrix is interpretive, based on relative concern weighting across cited sources.