Why People Who Criticize AI Are Starting to Look Like AI - 1/08/2026

Why does AI writing seem shallow? A growing number of videos and articles take up this question. Strangely, though, many of them take the form of "asking the AI why, then presenting its answer." This paper explores the paradox inherent in that very structure. Why does the narrative of the "critic" itself end up taking on an AI-like flatness? Through everyday metaphors and concrete examples, we clarify the crucial difference between speaking one's own thoughts and borrowing them.

Keywords
AI writing, interestingness, narrator, responsibility, criticism

A structure that makes you nod in agreement
When you play the video, the narrative is smooth.

The video begins with "the reasons why AI writing is boring are..." and lists several points: it lacks specificity, it lacks experience, it lacks surprise. All of them sound plausible. The viewer nods, reassured, feeling they are on the same side as the critic.

But then, a sense of discomfort arises.

What's striking is that most of what is said comes in the form of "when I asked the AI why, this is what it said." The criticism is sharp, but no one is wielding the blade.

The person reading the recipe and the person serving the food
Imagine a cooking show.

The host explains "what makes a dish delicious"—the heat, the aroma, the presentation. But in reality, he is simply reading from a script; he never touches a knife or a pot.

For a moment, viewers feel like they've gotten closer to the world of cooking. But their stomachs remain empty. That's because there are no "traces of cooking."

Many commentators on AI-generated texts take a similar stance. They're talking about analysis, but the process that produced that analysis is off-screen.

The narrative safe zone
The important thing here isn't the accuracy of the content.

The issue is the position.

Ask the AI, organize its answers, and discuss its limitations. This approach leaves almost no room for failure. Even if a comment is wrong, the story ends with "the AI said so." The narrator's own judgment is never called into question.

Borrowed judgment = no responsibility for the narrative.

As a result, the narrative is well-structured but has no warmth, because it contains no one's mistakes, no one's wagers, no one's wounds.

The contradiction of talking about "interestingness"

What makes a piece of writing interesting?

Many people would answer "unexpectedness," "specificity," or "something personal." In other words, interest carries the potential for being wrong, biased, or misunderstood.

However, many commentaries on the limitations of AI carefully avoid exactly those risks. They are, in essence, talking about AI's "lack of danger" from a safe distance.

Absence of danger × Perfect explanation = Flat acceptance
Convincing, but not memorable.

AI as a mirror

Ironically, this structure is a good reflection of AI itself.

AI organizes vast amounts of language and returns explanations that seem correct. But there is no risk in that, and no decision that could be wrong.

Asking the AI for its reasons and proudly reciting its answers may look like critiquing the AI, but it is in fact just mirroring the AI's way of speaking. And so the narrative, too, becomes thin.

A point from which there's no escape
At this point, the question narrows to one.

AI writing isn't thin because it lacks words; it is thin because no one bears responsibility for those words.

To keep a narrative from thinning, what's needed isn't a correct answer but the resolve to take on that responsibility.

The more the narrator offers explanations that cannot be wrong, the more transparent the narrator becomes.

And once a narrative becomes transparent, no matter how well-crafted it is, it won't be interesting.

From this point onwards, there's no escape.

What makes writing interesting lies not in the skill of the narration, but in the "weight of responsibility" that remains after it is spoken.
