Rethinking proof-of-thought, noise, and the upside of a more open AI culture.
Is sharing ChatGPT output really so rude?
A recent essay compares AI-generated text to a kind of digital pollution—a “virus” that wastes human attention and diminishes the value of communication. The author proposes strict AI etiquette: never share machine output unless you fully adopt it as your own or have explicit consent from the recipient.
It’s a provocative take, inspired by Peter Watts’ Blindsight, and it raises important questions about authenticity, value, and digital trust. But does it go too far? Is all AI-generated text “slop”? Is every forward or paste a violation of etiquette?
Let’s consider another perspective—one that recognizes the risks but also sees the immense value and potential of a world where AI-generated output is more freely shared.
“Proof-of-Thought” Was Always a Mirage
The essay’s nostalgia for a lost era of “proof-of-thought” is understandable. But let’s be honest: human writing was never uniformly insightful, intentional, or even useful. Spam, boilerplate, press releases, and perfunctory office emails existed for decades before AI arrived.
Authenticity and attention have always required discernment, not just faith in the medium.
AI may have made text cheap, but it has also made ideas more accessible and lowered the barriers to entry. That’s not a bug—it’s a feature.
Sharing AI Output: Consent, Context, and Creativity
Of course, etiquette matters. But framing the sharing of AI text as inherently rude, or even hostile, misses some crucial points:
- AI output can be informative, creative, and valuable in its raw form. Sometimes a bot’s phrasing or approach offers a new angle, and sharing that output can accelerate understanding, brainstorming, or problem-solving.
- Explicit adoption isn’t always practical. If I ask ChatGPT to summarize a dense technical paper or translate a snippet of code, sometimes the fastest, most honest way to help a friend or colleague is to share that result directly—with attribution.
- Consent can be implicit in many contexts. In tech, research, and online forums, sharing logs, code snippets, or even entire AI chats is often expected and welcomed—especially when transparency and reproducibility are important.
The Upside of “AI Slop”: Accessibility, Efficiency, and Learning
What the “anti-slop” argument underplays is just how much AI has democratized expertise and lowered the cost of curiosity:
- Non-native speakers can get better drafts or translations instantly.
- Students and self-learners can access tailored explanations without waiting for a human expert.
- Developers and researchers can rapidly prototype, debug, and collaborate with a global community, often using AI-generated code or documentation as a starting point.
Yes, there’s more noise. But there’s also far more signal for many people who were previously shut out of certain conversations.
Trust and Transparency, Not Gatekeeping
Rather than discouraging the sharing of AI output, we should focus on transparency. Label AI-generated text clearly. Foster norms where context—why, how, and for whom AI was used—is always provided. Give people the choice and the tools to ignore or engage as they see fit.
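What might that norm look like in practice? Here is a minimal sketch, in Python, of one way a sender could attach context to shared AI output. The AIDisclosure structure and its fields are hypothetical illustrations of the idea, not an existing standard or tool.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical disclosure record: the field names and format are
# illustrative only, not an established labeling standard.
@dataclass
class AIDisclosure:
    model: str         # which system produced the text, e.g. "ChatGPT"
    purpose: str       # why AI was used: summarize, translate, draft, ...
    human_review: str  # how much the sender checked or rewrote the output
    produced_on: date = field(default_factory=date.today)

def label_output(text: str, d: AIDisclosure) -> str:
    """Prepend a plain-language provenance note so the recipient can
    decide how much attention the text deserves."""
    note = (f"[AI-assisted: {d.model}, used to {d.purpose}; "
            f"review: {d.human_review}; {d.produced_on.isoformat()}]")
    return f"{note}\n\n{text}"

# Example: sharing a machine-written summary with its context attached,
# rather than passing it off as one's own prose.
summary = "The paper proposes a retrieval-augmented approach to long-context QA."
print(label_output(summary, AIDisclosure(
    model="ChatGPT",
    purpose="summarize a dense technical paper",
    human_review="skimmed for obvious errors, otherwise unedited",
)))
```

The specific format matters less than the principle: provenance travels with the text, so the recipient keeps the choice to engage or ignore.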
Blanket prohibitions or shame about sharing AI content risk re-erecting barriers we’ve only just started to dismantle.
Questions for the Future
- How do we build systems that help us filter valuable AI output from true “slop”?
- What new forms of collaborative authorship—human + AI—will emerge, and how do we credit them?
- How can we leverage AI to reduce noise, not just add to it?
A Call for a More Open, Nuanced AI Etiquette
AI is here to stay, and its output will only become more sophisticated and pervasive. The solution isn’t to retreat or treat all shared AI text as digital poison. It’s to develop a culture of honesty, clarity, and context—so that AI can amplify, rather than degrade, our collective intelligence.
So yes: share your ChatGPT output—just tell me where it came from. Let’s make etiquette about agency, not anxiety.