
Category: Process · Tags: ai, writing, gatekeeping, fear, creativity
On writing, making things, and refusing easy narratives.
This isn’t a defense of AI, nor an argument that everyone should use it. It’s an attempt to describe a pattern I keep seeing in creative spaces—and to ask what it says about fear, authority, and change.
I’m not interested in convincing anyone. I’m interested in understanding what’s actually happening beneath the noise.
While scrolling through my Facebook, Instagram, and Reddit feeds—mostly author and writer groups I belong to but rarely post in—I’ve noticed a recurring pattern.
A perfectly normal question gets asked.
“How does this opening chapter read?”
“Does this pacing feel off?”
“Which of these two covers do you like better?”
And then, almost immediately, the vibe shifts.
A craft question turns into a forensic investigation. Em dashes are counted. Sentence scaffolding is flagged, often by people who weren’t invited to do either. Someone announces, with great confidence and no hesitation, that the work “reads very AI.” The verdict lands fast. What follows is usually a helpful list of infractions: too symmetrical, too polished, not enough visible suffering in the prose.
Cover discussions go sideways even faster. What starts as a matter of taste—this image or that one—quickly escalates into a verdict. Both covers are “obviously AI generated.” The author is informed that they’re lazy, unethical, or simply not a real writer. If they can’t afford a “real” cover designer, it is gently suggested they reconsider their life choices, or at least their right to publish.
By this point, the original question has evaporated. No one is talking about composition, genre fit, or readability anymore. The work itself is gone, replaced by a spirited debate about tools, morals, and who exactly is allowed to call themselves an author.
The same dynamic plays out far beyond my own feeds. Someone asks a simple question, and within a few comments the discussion stops being about design and turns into something else entirely. Motives are questioned. Ethics are invoked. Assumptions are made about the quality of the work, the integrity of the author, and sometimes even whether the creator belongs in the space at all.
The tool becomes the trial, and intent is assumed guilty by default.
What’s striking is how quickly an AI-generated image is treated not as a production choice, but as a character flaw. An AI cover is assumed to mean the book itself is lazy, unoriginal, or unworthy. A human-made cover, by contrast, is often treated as a proxy for seriousness, effort, and legitimacy.
None of those assumptions actually hold up.
A beautifully hand-illustrated cover does not guarantee strong writing. An AI-generated cover does not invalidate the work inside the book. These are aesthetic signals, not moral ones. And yet, in many creative communities, they’re treated as shorthand for virtue or vice.
That disconnect is worth examining.
What’s happening here feels less like a debate and more like a familiar social reflex.
When new tools disrupt old systems, communities don’t just argue about quality. They look for someone to blame. Suspicion spreads faster than understanding. Accusations become shorthand. Repetition turns rumor into truth.
In other words, it starts to resemble a witch hunt.
Not the dramatic, torch-lit kind—but a digital version, quieter and more polite. No one is burned at the stake. Instead, credibility is questioned. Motives are assumed. Labels get applied quickly and defensively: AI-generated, therefore suspect.
The irony is that witch hunts have never been about witches. They’re about fear—fear of change, fear of losing status, fear of no longer knowing how to tell who belongs and who doesn’t.
AI didn’t invent that fear. It just gave it a new costume.
What’s often framed as a discussion about ethics quickly becomes something closer to boundary enforcement.
Once AI enters the conversation, it’s no longer about composition, typography, or genre fit. Instead, the discussion shifts toward who is “doing it right,” who is “cutting corners,” and who deserves professional credibility. The tone changes. Advice turns into accusation. Feedback turns into correction.
And oddly enough, it’s often at this moment that unrelated credentials appear.
“I’m a cover designer.”
“I’m an editor.”
“I do this professionally.”
These statements aren’t inherently wrong—but their appearance in a discussion that never asked for services is revealing. A question about preference becomes an opportunity to reassert expertise, relevance, and authority. The implication is subtle but clear: You shouldn’t be making choices like this without us.
That’s not guidance. That’s gatekeeping.
One of the most curious contradictions in these debates is which tools are considered acceptable.
“Don’t use AI,” people say—then recommend Canva.
Canva relies heavily on automation, generative systems, algorithmic design suggestions, and AI-assisted layout and image manipulation. Yet it’s routinely framed as the ethical alternative. The line isn’t actually about whether AI is involved. It’s about which tools feel familiar enough not to threaten existing hierarchies.
This raises an uncomfortable question:
At what point does assistance become unacceptable—and who gets to decide?
If the objection were truly about technology, the line would be clear. Instead, it moves depending on who benefits from it.
Another quiet assumption driving these reactions is the idea that difficulty equals worth.
For a long time, access to creative production was limited by money, time, training, and proximity to professionals. AI lowers some of those barriers. That doesn’t erase skill, taste, or discernment—but it does change who gets to participate.
When participation expands, authority feels threatened.
That fear often gets misdirected into moral language. Theft. Laziness. Devaluation. These are serious accusations, but they’re frequently deployed without specificity. They function less as critique and more as shorthand for discomfort.
It’s easier to condemn a tool than to interrogate why its existence feels destabilizing.
What unsettles people most is not that AI produces bad work, but that it sometimes produces acceptable work faster than expected. That disrupts a long-held belief that difficulty is proof of value. When effort has been mistaken for virtue, anything that reduces friction feels like cheating—even when the outcome still requires judgment, taste, and revision.
Fear doesn’t always announce itself as fear. Sometimes it shows up as certainty.
Perhaps the strangest leap in all of this is the assumption that a cover defines the content of the work.
Readers are warned that an AI cover means the writing must be bad, or that the author didn’t “really” write it. That leap isn’t evidence-based; it’s emotional. It turns a design choice into a moral forecast.
In reality, covers—human-made or otherwise—have always been marketing signals. They indicate genre, tone, and audience. They do not certify quality. They never have.
Judging a book by its cover is something we all do. Pretending that this judgment is ethical rather than aesthetic is where the trouble starts.
None of this is an argument that AI should be used everywhere, or that ethical questions don’t exist. They do. But reflexive outrage isn’t analysis, and moral certainty isn’t the same thing as clarity.
What’s worth questioning is why so many creative conversations shut down the moment AI appears—why curiosity is replaced with condemnation, and why nuance is treated as complicity.
Tools change. Markets shift. Creative ecosystems adapt or calcify.
The more interesting question isn’t whether AI belongs in creative spaces. It’s why so many people are afraid to examine what its presence reveals—about access, authority, and who gets to decide what “real” creativity looks like.
Witch hunts thrive on certainty. Creativity thrives on curiosity. When communities choose the former, they don’t protect art—they protect hierarchy.
History suggests that eventually, the tools change anyway.
Meanwhile, somewhere in a comment thread, a perfectly normal question goes unanswered.
“So is this piece pro-AI or anti-AI?”
Neither. This piece isn’t about taking sides. It’s about examining behavior—specifically how fear of change shows up as certainty, accusation, and gatekeeping in creative spaces.
“Are you saying there are no ethical problems with AI?”
No. Ethical questions exist in every creative tool shift. This piece intentionally focuses on social dynamics, not legal or environmental debates, which deserve their own discussions.
“Doesn’t AI lower the bar for creative work?”
Lowering barriers to entry isn’t the same thing as lowering standards. Skill, taste, and discernment still matter. What’s changing is who gets to participate.
“Shouldn’t serious authors just hire professional designers?”
Many people do. Many can’t. And neither choice determines whether the work itself has value. A tool or budget doesn’t define authorship.
“Doesn’t an AI cover devalue the book?”
It doesn’t—at least not in the way outrage suggests. What interests me is how quickly curiosity disappears when fear takes over, and how easily rumor becomes truth in group settings.