Why Project Estimates Fail: Lessons from a Systems Thinker’s Lens

Inspired by The work is never just “the work” by Dave Stewart

“Even a detailed estimate of ‘the work’ can miss the dark matter that makes up the majority of a project’s real effort.”

When it comes to project management—especially in software and creative work—most of us have lived through the agony of missed deadlines and ballooning timelines. It’s tempting to blame bad luck, moving goalposts, or simple optimism. But as Dave Stewart reveals, there’s a more systemic, and ultimately more instructive, explanation.

Let’s step back and see the big picture—the “systems view”—and discover why underestimation isn’t just a personal failing, but a deeply rooted feature of how complex projects function.


The Invisible System: Why “The Work” is Just the Tip of the Iceberg

Stewart’s article provides a hard-won confession: after a year-long project went wildly off course, he realized the effort spent on “the work” (i.e., coding, designing, building) was just a fraction of the total investment. The majority was spent on what he calls the “work around the work”—from setup and research, to iteration, firefighting, and post-launch support.

From a systems thinker’s standpoint, this is a textbook example of the planning fallacy—a cognitive bias where we underestimate time and effort by focusing on visible tasks and ignoring the web of dependencies and uncertainty that surrounds every project.

Mapping the Project Ecosystem

What Stewart does beautifully is name and map the categories of hidden labor:

  • Preparation: Infrastructure, setup, initial research
  • Acquisition: Scoping, pitching, client meetings
  • Iteration: Debugging, refactoring, ongoing improvements
  • Support: Deployment, updates, ongoing fixes
  • The Unexpected: Surprises, scope creep, disasters

By visualizing the project as an ecosystem—where “the work” is only one node among many—he demonstrates a key principle of systems thinking: emergent complexity. Each category doesn’t just add linear effort; it amplifies feedback loops (delays, misunderstandings, unexpected roadblocks) that make estimation so hazardous.


Patterns and Implications

A systems lens reveals several recurring patterns:

  • Invisible Feedback Loops: Tasks outside “the work” (meetings, reviews, firefighting) generate new work, shifting priorities and resource allocation—often without being tracked or acknowledged.
  • Nonlinear Impact: Small “invisible” tasks, left unaccounted for, aggregate into substantial overruns. Like dark matter, their presence is felt even if they remain unseen.
  • Optimism Bias Is Systemic: Most teams and individuals underestimate not out of ignorance, but because our brains and organizational structures reward “happy path” thinking.
  • Every Project Is a Living System: Changing one part (e.g., a delayed client feedback loop) can ripple through the whole system, derailing even the most detailed plan.

Designing for Reality, Not Idealism

The key takeaway for systems thinkers is awareness and intentional design:

  1. Model the Whole System: During estimation, explicitly map out all “nodes”—not just core deliverables but supporting, enabling, and maintaining tasks.
  2. Quantify Uncertainty: Use multipliers, ranges, and postmortems to factor in the “dark matter” of invisible work (a toy calculation follows this list).
  3. Surface Assumptions: Name and question the implicit beliefs behind every estimate (e.g., “the client will provide feedback within 24 hours”—will they, really?).
  4. Iterate the System: Treat your estimation process itself as a system to be improved, not a static formula.
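
To make step 2 concrete, here is a minimal Python sketch (the task names, day counts, and 2x multiplier are invented placeholders, not figures from Stewart’s article): a classic three-point (PERT) estimate per visible task, scaled by a “dark matter” multiplier tuned from your own postmortems.

    # Toy estimation helper: three-point (PERT) estimates for the visible
    # tasks, scaled by a "dark matter" multiplier for the invisible work.
    # All numbers below are illustrative assumptions.

    def pert(optimistic, likely, pessimistic):
        """Weighted three-point estimate: (o + 4m + p) / 6."""
        return (optimistic + 4 * likely + pessimistic) / 6

    # (optimistic, likely, pessimistic) days for the visible tasks only.
    visible_tasks = {
        "build feature A": (3, 5, 10),
        "build feature B": (2, 4, 8),
        "integrate and test": (2, 3, 6),
    }

    DARK_MATTER = 2.0  # placeholder; tune it from your own project data

    visible = sum(pert(*t) for t in visible_tasks.values())
    print(f"visible: {visible:.1f} days, "
          f"with dark matter: {visible * DARK_MATTER:.1f} days")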

Actionable Insights for the Systems Thinker

  • Create a “Work Ecosystem Map” for each new project, labeling categories like preparation, acquisition, iteration, support, and surprises.
  • Hold Team Retrospectives focused not just on deliverables but on the “meta-work” that surrounded them—what did we miss? What new loops emerged?
  • Educate Stakeholders: Share frameworks like Stewart’s to align expectations and build organizational literacy around hidden work.
  • Measure, Don’t Assume: Use real project data to tune your own multipliers and assumptions over time.

Final Thought

Projects are living systems, not checklists. By recognizing the invisible forces at play, we empower ourselves (and our teams) to design more resilient processes, set realistic expectations, and—just maybe—find more satisfaction in the work itself.

“The work is never just the work. It’s everything else—unseen, unsung, but absolutely essential.”


Further Reading:
Dive into the original article: The work is never just “the work”
Reflect on the planning fallacy: Wikipedia – Planning Fallacy
Explore systems thinking: Donella Meadows – Thinking in Systems

Autonomy vs. Reliability: Why AI Agents Still Need a Human Touch

A lot of folks are betting big on AI agents transforming the way we work in 2025. I get the excitement—I’ve spent the last year elbow-deep in building these things myself. But if you’ve ever tried to get an agent past the demo stage and into real production, you know the story is a lot more complicated. My friend Utkarsh Kanwat recently shared his perspective in Why I’m Betting Against AI Agents in 2025 (Despite Building Them), and honestly, it feels like he’s writing from inside my own Slack DMs.

The first thing nobody warns you about? The reliability wall. It’s brutal. I can’t tell you how many times I’ve watched a promising multi-step agent fall apart simply because little errors stack up. Even if your system nails 95% reliability per step—a tall order!—your 20-step workflow is only going to succeed about a third of the time. That’s not a bug in your code, or a limitation of your LLM. That’s just how probability works. The systems that actually make it to production? They keep things short, simple, and put a human in the loop for anything critical.
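
The arithmetic behind that claim is easy to verify. A quick Python sketch, assuming each step succeeds independently:

    # End-to-end success of an n-step workflow with independent steps:
    # p_total = p_step ** n_steps
    for n_steps in (5, 10, 20):
        for p_step in (0.99, 0.95, 0.90):
            print(f"{n_steps:>2} steps at {p_step:.0%}/step -> "
                  f"{p_step ** n_steps:.1%} end-to-end")
    # 20 steps at 95%/step -> ~35.8% end-to-end: roughly a third of the time.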

And here’s another thing most people overlook: the economics of context. People love the idea of a super-smart, chatty agent that remembers everything. In practice, that kind of long, back-and-forth conversation chews through tokens—and your budget. Utkarsh breaks down the math: get to 100 conversational turns, and you’re suddenly spending $50–$100 per session. Nobody’s business model survives that kind of burn at scale. The tools that actually last are the ones that do a focused job, stay stateless, and move on.
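
The blow-up is roughly quadratic: a stateful agent resends the full history on every turn, so cumulative input tokens grow with the square of the turn count. A back-of-the-envelope sketch in Python (the tokens-per-turn and price constants are my assumptions, chosen to land near Utkarsh’s ballpark, not his exact numbers):

    # Rough cost model for a chatty agent that resends its whole history
    # each turn. Both constants below are assumptions for illustration.
    TOKENS_PER_TURN = 2_000     # average tokens added per turn (assumed)
    PRICE_PER_M_INPUT = 5.00    # dollars per million input tokens (assumed)

    def session_cost(turns):
        # Turn i resends roughly i * TOKENS_PER_TURN tokens of history.
        input_tokens = sum(i * TOKENS_PER_TURN for i in range(1, turns + 1))
        return input_tokens / 1_000_000 * PRICE_PER_M_INPUT

    for turns in (10, 50, 100):
        print(f"{turns:>3} turns -> ~${session_cost(turns):.2f}")
    # 100 turns -> ~$50.50 under these assumptions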

But the biggest gap between the hype and reality is what goes into actually shipping these systems. Here’s the truth: the AI does maybe 30% of the work. The rest is classic engineering—designing error handling, building feedback that makes sense to a machine, integrating with a mess of legacy systems and APIs that never behave quite like the docs say they should. Most of my effort isn’t even “AI work”—it’s just what it takes to make any production system robust.

So if you’re wondering where AI agents really fit in right now, here’s my take: The best ones are like hyper-competent assistants. They handle the heavy lifting on the complicated stuff, but leave final calls and messy decisions to humans or to really solid, deterministic code. The folks chasing end-to-end autonomy are, in my experience, setting themselves up for a lot of headaches—mostly because reality refuses to be as neat as the demo.

If you’re thinking about building or adopting AI agents, seriously, check out Utkarsh’s article. It’s a straight-shooting look at what actually works (and what just looks shiny on stage). There’s a lot of potential here, but it only pays off when we design for the world as it is—not the world we wish we had.

Amplified, Not Replaced: A Veteran Engineer’s Take on Coding’s Uncertain Future

As someone who’s weathered tech cycles, scaled legacy systems, and mentored more than a few generations of engineers, I find myself returning to a recent essay by Jonathan Hoyt: “The Uncertain Future of Coding Careers and Why I’m Still Hopeful”. Hoyt’s piece feels timely—addressing, with candor and humility, the growing sense of anxiety many in our profession feel as AI rapidly transforms the software landscape.

Hoyt’s narrative opens with a conversation familiar to any experienced lead or architect: a junior developer questioning whether they’ve chosen a doomed career. It’s a concern that echoes through countless engineering Slack channels in the wake of high-profile tech layoffs and the visible rise of AI tools like GitHub Copilot. Even for those of us long in the tooth, Hoyt admits, it’s tempting to wonder if we’re on the verge of obsolescence.

But what makes Hoyt’s perspective refreshing—especially for those further along in their careers—is the pivot from fear to agency. He reframes AI, not as an existential threat, but as an amplifier of human ingenuity. For senior engineers and system architects, this means our most valuable skills are not rote implementation or brute-force debugging, but context-building, system design, and the ability to ask the right questions. As Hoyt puts it, the real work becomes guiding the machines, curating and contextualizing knowledge, and ultimately shepherding both code and colleagues into new creative territory.

The essay’s most resonant point for experienced professionals is the call to continuous reinvention. Hoyt writes about treating obsolescence as a kind of internal challenge—constantly working to automate yourself out of your current role, so you’re always prepared to step into the next. For architects, this means doubling down on mentorship, sharing knowledge freely, and contributing to the collective “shared brain” of the industry—be it through open source, internal documentation, or just helping the next engineer up the ladder.

Hoyt’s post doesn’t sugarcoat the uncertainty ahead. The routine entry points into the field are shifting, and not everyone will find the transition easy. Yet, he argues, the need for creative, context-aware technologists will only grow. If AI takes on the repetitive work, our opportunity is to spend more time on invention, strategy, and the high-leverage decisions that shape not just projects, but organizations.

If you’ve spent your career worrying that you might be automated out of relevance, Hoyt’s essay offers both a challenge and a comfort. It’s a reminder that the future of programming isn’t about competing with machines, but learning to be amplified by them—and ensuring we’re always building, learning, and sharing in ways that move the whole field forward.

For anyone in a senior engineering or system architecture role, Jonathan Hoyt’s original piece is essential reading. It doesn’t just address the fears of those just starting out; it offers a vision of hope and practical action for those of us guiding teams—and the next generation—through the shifting sands of technology.

LLMs Today: What’s Really New, and What’s Just Polished?

If you follow AI, you know the story: every few months, a new language model drops with more parameters and splashier headlines. But as Sebastian Raschka highlights in “The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design,” the biggest lesson from this new wave of open-source LLMs is how much has not fundamentally changed. Underneath it all, the progress is less about radical reinvention and more about clever architectural tweaks—optimizing memory, attention, and stability to make bigger, faster, and more efficient models.

At the core, the 2017 transformer blueprint is still powering everything. What’s new? A handful of impactful upgrades:

  • Smarter attention (like Multi-Head Latent Attention and Grouped-Query Attention) slashes memory requirements.
  • Mixture-of-Experts (MoE) lets trillion-parameter giants run without melting your GPU by only activating a fraction of the network at a time (a toy routing sketch follows this list).
  • Sliding window attention makes long contexts feasible without hogging resources.
  • Normalization tricks (RMSNorm, Post-Norm, etc.) are now essential for training stability at scale.
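
To see why MoE keeps compute manageable, here is a toy routing sketch in Python/NumPy (the random linear “experts” and all shapes are invented for illustration; real models route tokens to gated MLP experts with fused kernels): each token only runs through its top-k experts, so active compute scales with k, not with the total expert count.

    # Toy Mixture-of-Experts routing: every token runs through only its
    # top-k experts, not all of them. Shapes and "experts" are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 64, 8, 2

    # Stand-in experts: random linear maps (real models use gated MLPs).
    experts = [rng.standard_normal((d_model, d_model)) * 0.02
               for _ in range(n_experts)]
    router = rng.standard_normal((d_model, n_experts)) * 0.02

    def moe_layer(x):  # x: (tokens, d_model)
        logits = x @ router                            # score every expert
        top = np.argsort(logits, axis=-1)[:, -top_k:]  # pick top-k per token
        sel = np.take_along_axis(logits, top, axis=-1)
        w = np.exp(sel - sel.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)             # softmax over top-k only
        out = np.zeros_like(x)
        for i, token in enumerate(x):                  # run only chosen experts
            for j, e in enumerate(top[i]):
                out[i] += w[i, j] * (token @ experts[e])
        return out

    print(moe_layer(rng.standard_normal((4, d_model))).shape)  # (4, 64)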

Today’s best open models—DeepSeek, Kimi, Llama 4, Gemma, OLMo 2, Qwen3—are all remixing these tools. The differences are in the fine print, not the fundamentals.

But what about OpenAI’s GPT-4/4o or Anthropic’s Claude 3.5? While the specifics are secret, it’s a safe bet their architectures look similar: transformer backbone, MoE scaling, memory-efficient attention, plus their own proprietary speed and safety hacks. Their big edge is polish, robust APIs, multimodal support, and extra safety layers—perfect if you need instant results and strong guardrails.

So, which should you pick?

  • Want transparency, customization, or on-prem deployment? Open models like OLMo 2, Qwen3, or Gemma 3 have you covered.
  • Building for research or scale (and have massive compute)? Try DeepSeek or Kimi K2.
  • Need to serve millions, fast? Lighter models like Mistral Small or Gemma 3n are your friend.
  • If you want the “it just works” experience with best-in-class safety and features, OpenAI and Anthropic are still top choices—just expect less control and no deep customization.

In the end, all the excitement is really about optimization, not paradigm shifts. Progress now means making LLMs faster, more stable, and easier to use. Or as Raschka puts it: “Despite all the tweaks and headline-grabbing parameters, we’re still standing on the same transformer foundation—progress comes from tuning the architecture, not tearing it down.”

If you want the deep technical dive, read Raschka’s full “The Big LLM Architecture Comparison.” Otherwise, just remember: the transformer era isn’t over—it’s just getting a whole lot more interesting.

Beyond the Scroll: How Magazine Publishers Can Reclaim the Reader’s Mind

“Recognize that not everything with a pastel icon and a ping is there for your benefit.”
— BoSacks


The Dilemma: Competing with the Slot Machine

Once, the publisher’s task was daunting but clear: deliver stories, images, and ideas that made readers linger. Today, it’s like trying to read poetry in the middle of a casino. The pings, scrolls, and algorithmic lures of Big Tech have reduced attention to a commodity—and readers themselves to “users,” tugged endlessly by invisible levers.

As BoSacks warns, the game is rigged: every feature of modern platforms is built to keep us hooked, our focus fractured, and our time for sale. In such a world, magazine publishers could be forgiven for feeling obsolete. But in truth, it’s in this chaos that the publisher’s mission is needed most.

The Magazine’s Legacy: More Than Content

Magazines have always offered something different: not just information, but context; not just images, but experiences. The weight of a well-made issue in hand, the rhythm of page after page, the immersive escape of a story told with intention—these are not relics. They’re the antidote.

When every digital platform feels like an endless scroll, what readers crave—whether they realize it or not—is a place to pause, reflect, and connect more deeply. Magazine publishers don’t need to join the attention rat race. They need to double down on what they already do best.

Turning Crisis Into Opportunity

From Fracture to Focus

It starts with the courage to do less, but do it better. In a world where infinite content is the problem, curation is the solution. Publishers can choose to publish fewer stories, but make each one count—well-researched features, slow journalism, and essays that reward more than a passing glance.

Remember, print magazines thrived not because they were fast, but because they were focused. That’s a lesson worth reviving online. Replace infinite scroll with a finite, carefully crafted issue. Highlight narrative arcs, not just clickable headlines. Treat every digital edition as a destination, not a distraction.

Redesigning for Reflection

Design is more than aesthetics; it’s psychology. Digital spaces don’t have to mimic the anxiety of the feed. Publishers can create “distraction-free” reading modes, slow down the pace, and signal to readers that this is a place for focus. Subtle cues—a clear beginning and end, less clutter, fewer pop-ups—can turn a screen into a sanctuary.

Rebuilding Relationships

The era of the faceless “user” is over. Magazine readers are a community—curious, discerning, and seeking more than just a dopamine rush. Publishers can rekindle relationships through thoughtful engagement: host live events, invite readers behind the scenes, and foster real dialogue in spaces designed for slow conversation, not viral outrage.

Editorials can take the lead, naming the manipulations of Big Tech and offering tools for digital well-being. By being transparent—about ads, data, and editorial process—publishers can offer the kind of trust that algorithms never will.

Advocacy and Innovation

Now is the moment for publishers to become champions of digital wellness. Imagine a future where magazines are at the center of teaching digital literacy, collaborating with educators and wellness experts, and pushing for ethical standards in tech. Instead of chasing engagement, imagine building loyalty and membership around genuine value—offering exclusive, ad-free experiences or print editions that reward commitment, not compulsive behavior.

The Publisher’s Challenge—and Invitation

BoSacks ends his essay on a note of hard-earned hope. “Maybe one of you will read this, pause, and put the damn phone down for five minutes. That’d be a start.” Publishers can do more: you can give readers a reason to linger, to think, to be human again—even if just for a few pages, or a few precious minutes.

The question is not how to keep up with the scroll, but how to lead readers out of it.


Publisher’s Checklist: Reclaiming the Reader’s Mind

  • Publish with intention: Focus on quality over quantity—feature fewer, deeper stories.
  • Design for attention: Offer clean, distraction-free reading experiences, online and off.
  • Reframe the reader relationship: Treat readers as community, not users; foster dialogue, not just clicks.
  • Educate and advocate: Use your platform to teach digital literacy and call out manipulative tech.
  • Champion wellness: Partner with wellness experts; offer guidance on screen time and mindful media.
  • Innovate value models: Build membership and loyalty around substance, not addictive mechanics.

Editorial Strategy Guide

  1. Editorial Calendar:
    Schedule quarterly “deep issues” focused on themes that reward long-form, investigative, or narrative journalism. Balance topicality with timelessness.
  2. Digital Experience:
    Develop a “reading room” digital section—minimal UI, no autoplay, and a clear beginning and end to each story or issue.
    Offer print or printable PDF versions for offline consumption.
  3. Community Engagement:
    Launch slow forums, member-only Q&As, or periodic live discussions that prioritize depth over volume.
  4. Content Mix:
    Add regular columns on digital wellness, attention, and the psychology of media. Bring in guest experts and voices from education, mental health, and technology.
  5. Revenue and Partnerships:
    Prioritize partnerships and sponsors aligned with well-being, education, or the arts. Experiment with reader-supported models (memberships, donations, exclusive access) that reinforce your mission.

Remember:
Your greatest strength as a publisher is not speed, but significance. In an age of distraction, offering depth, focus, and meaning is an act of leadership. The world doesn’t need another feed; it needs a place to think.

Daily Links: Monday, Jul 21st, 2025

Hey there! In my latest blog post, I’m sharing a peek at my ultimate self-hosting setup, unraveling the myths about asynchrony and concurrency, and I’m kickstarting a DIY series on creating your own backup system, emphasizing the strategy before diving into scripts. It’s an exciting mix of tech strategies you won’t want to miss!

Here’s What Science Really Says About Getting Fit and Staying Healthy

Let’s be real—fitness advice online is a total mess. Everyone’s shouting conflicting tips, and it’s impossible to know what actually works. That’s why I loved this article that breaks down the real, science-backed stuff about exercise and health—no hype, no BS.

Here’s the gist:

  • Getting visible muscles and changes takes time. Like, months. Not days or weeks.
  • Forget “toning.” It’s just building muscle and losing fat—there’s no magic trick.
  • Cardio and strength training? You need both. Your heart and muscles will thank you.
  • If you’re super busy, HIIT workouts (short, brutal bursts) can actually get you pretty fit without eating your whole day.

The best part? You don’t have to be perfect or an athlete. Just move regularly and put in some effort. That’s really it. Your future self (and those jeans you want to fit better) will be so grateful.

Check it out here: https://bakadesuyo.com/2025/07/exercising/

Sandcastles, Spaghetti, and Software: Surviving the Chaos of AI-First Coding

What happens when software development gets upended by AI? According to Scott Werner, we’re all improvising—and that might be the only honest answer.


The Wild West of AI-Driven Development

Scott Werner’s essay, Nobody Knows How To Build With AI Yet, is a must-read snapshot of the modern AI development experience: exhilarating, uncertain, and utterly unpredictable. Werner invites us behind the scenes as he builds “Protocollie” in four days—not by mastering every detail, but by collaborating with an AI and improvising each step.

The most striking realization? There is no longer a set path or best practice for building with AI. What worked yesterday may already be obsolete. Documentation, if it exists, is more like a time capsule of one developer’s process than a blueprint for others.


Expertise: Out the Window

Werner challenges the myth of expertise in this era. In a world where AI tools, workflows, and even “the rules” change every few weeks, everyone—veterans and newcomers alike—is a beginner. “We’re all making this up as we go,” he writes, with a mix of humility and thrill.

His “four-document system,” cobbled together out of necessity rather than design, illustrates this point. The documents aren’t definitive; they’re just artifacts of one experiment, already nostalgic by the time they’re published. What matters isn’t following a set of instructions, but being willing to experiment, iterate, and leave markers for yourself (and maybe others) along the way.


Sandcastles at Low Tide

The essay’s strength lies in its metaphors: AI development is like jazz improvisation, sandcastle building, or throwing spaghetti at the wall. The sticking isn’t what matters—the act of throwing is. In this landscape, the most valuable skill isn’t syntax or architecture, but the ability to express clear intent, communicate with your AI “pair programmer,” and let go of any illusion of permanence.

Documentation? Less a how-to manual, more a set of “messages to future confused versions of ourselves.”


Why You Should Read This

If you’re a developer, technical lead, or just someone curious about how AI is changing work, Werner’s reflections will resonate. They offer a mix of hard-won wisdom, permission to experiment, and an honest look at the anxiety and exhilaration of the new normal.

Memorable Line:
“Maybe all methodology is just mutually agreed-upon fiction that happens to produce results.”


Want to Go Deeper?

  • What does it mean to be an “expert” when tools and workflows change weekly?
  • How can we create documentation that helps others without pretending to have all the answers?
  • What risks and rewards come with building “sandcastles at low tide”?

Read the original essay if you want to see what it really feels like on the front lines of AI-powered creation. And then? Make your own trail markers—someone else might find them, or you might just need them yourself next week.


Who should read this?
Developers, tech leaders, innovation managers, and anyone excited (or a bit terrified) by the rapid evolution of AI in software.

A Homelab Perspective on Backup Strategy

“Data must always be restorable (and as quickly as possible), in an open format, and consistent.”
— Stefano Marinelli

Why Your Homelab Needs More Than Just “Copies”

If you’re running a homelab—whether it’s for learning, hosting services, or managing family data—you’ve probably told yourself “I’ll back it up later” or “I’ve got my files on another disk, so I’m safe.” But after reading Stefano Marinelli’s “Make Your Own Backup System – Part 1: Strategy Before Scripts,” it’s clear that many of us (myself included!) have been lulled into a false sense of security by confusing “backups” with mere file copies.

Marinelli’s core message?
True backup starts with a plan, not with scripts, disks, or the latest cloud storage.

Key Takeaways for Homelabbers

  • Plan First, Script Later:
    Don’t just whip up a cron job to rsync your /home directory. Start by asking: What do you really need to protect? How much downtime can you live with if something breaks? Where should your most precious data actually live?
  • Full Disk vs. File Backups:
    Do you back up the entire drive (system and all), or just the irreplaceable stuff? Full disk images are great for quick, all-in-one restores—especially for VMs—but can eat up tons of space. File-level backups (using rsync, tar, etc.) give you granularity, but restoring a borked system is way harder unless you know exactly what you’re doing.
  • Snapshots Are Essential:
    Filesystems like ZFS and BTRFS aren’t just for big enterprise setups—they’re your friend! Snapshots freeze your data at a specific point, so you’re not backing up half-written databases or files mid-change. This is the difference between a backup that works and one that silently fails.
  • Push or Pull?
    Marinelli makes a strong case for the “pull” model: your backup server fetches data from your machines, not the other way around. This means your main server never has to open up ports or risk exposure, and you keep one central point for management and restores (a minimal sketch follows this list).
  • Own Your Data:
    The article strongly advocates for keeping backups out of the “big tech” cloud. For homelabbers, that resonates—part of the homelab spirit is self-reliance and not being beholden to someone else’s infrastructure or fine print.
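
To make the snapshot-then-pull idea concrete, here is a minimal Python sketch meant to run on the backup server (hostname, dataset, and paths are placeholders; it assumes SSH key access to the source machine and ZFS on it, so swap the snapshot step for your filesystem’s equivalent):

    # Minimal pull-model backup: the backup server reaches out, snapshots,
    # then copies from the frozen snapshot. All names are placeholders.
    import subprocess
    from datetime import datetime, timezone

    HOST = "homelab.local"           # machine being backed up (placeholder)
    DATASET = "tank/home"            # ZFS dataset on that machine (placeholder)
    MOUNT = "/tank/home"             # where that dataset is mounted (placeholder)
    DEST = "/backups/homelab/home/"  # destination on the backup server

    def run(cmd):
        print("->", " ".join(cmd))
        subprocess.run(cmd, check=True)

    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    snap = f"backup-{stamp}"

    # 1. Freeze the data first so nothing is copied mid-write.
    run(["ssh", HOST, "zfs", "snapshot", f"{DATASET}@{snap}"])

    # 2. Pull from the snapshot directory; the source machine never needs
    #    any access to (or credentials for) the backup server.
    run(["rsync", "-a", "--delete",
         f"{HOST}:{MOUNT}/.zfs/snapshot/{snap}/", DEST])

    # 3. Drop the snapshot (or keep a rolling set for point-in-time history).
    run(["ssh", HOST, "zfs", "destroy", f"{DATASET}@{snap}"])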

What’s Missing (and What to Ask Next)

Marinelli doesn’t dive (yet) into the weeds of scripting, automation, or how much this might cost you in hardware and time. He’s laser-focused on strategy—which is honestly what most homelabbers skip in their rush to install the next shiny tool.

But if you’re like me, you might be wondering:

  • How do I automate snapshot-based backups in a way that’s easy to restore?
  • What’s the best way to test that my backups actually work—without nuking my main system?
  • Are there open-source tools that make “pull” backups easier for a home environment?
  • What’s the smartest way to mix local and “cloudy” (maybe self-hosted) offsite storage?

Should You Read the Original?

If you’re running a homelab—whether you’ve got a single Raspberry Pi or a rack of old enterprise gear—you owe it to yourself (and your data) to rethink how you do backups. Marinelli’s post is a reminder that strategy trumps technology. The how-tos are coming in his later posts, but even as a stand-alone, this first part is pure gold for anyone who wants to sleep better at night knowing their family photos, media libraries, or home services are safe.

Final Word

Don’t wait for disaster to figure out if your backups work. Start with a plan, learn the difference between copying and true backups, and—most of all—make sure you can restore what you care about, when it matters most.

In Defense of Sharing AI Output: Why “AI Slop” Isn’t the End of Meaningful Communication

Rethinking proof-of-thought, noise, and the upside of a more open AI culture.


Is sharing ChatGPT output really so rude?
A recent essay compares AI-generated text to a kind of digital pollution—a “virus” that wastes human attention and diminishes the value of communication. The author proposes strict AI etiquette: never share machine output unless you fully adopt it as your own or have explicit consent from the recipient.

It’s a provocative take, inspired by Peter Watts’ Blindsight, and it raises important questions about authenticity, value, and digital trust. But does it go too far? Is all AI-generated text “slop”? Is every forward or paste a violation of etiquette?

Let’s consider another perspective—one that recognizes the risks but also sees the immense value and potential of a world where AI-generated output is more freely shared.

“Proof-of-Thought” Was Always a Mirage

The essay’s nostalgia for a lost era of “proof-of-thought” is understandable. But let’s be honest: not every piece of human writing was ever insightful, intentional, or even useful. Spam, boilerplate, PR releases, and perfunctory office emails have existed for decades—long before AI.
Authenticity and attention have always required discernment, not just faith in the medium.

AI may have made text cheap, but it has also made ideas more accessible and the barriers to entry lower. That’s not a bug—it’s a feature.

Sharing AI Output: Consent, Context, and Creativity

Of course, etiquette matters. But to frame sharing AI text as inherently rude or even hostile misses some crucial points:

  • AI output can be informative, creative, and valuable in its raw form. Sometimes a bot’s phrasing or approach offers a new angle, and sharing that output can accelerate understanding, brainstorming, or problem-solving.
  • Explicit adoption isn’t always practical. If I ask ChatGPT to summarize a dense technical paper or translate a snippet of code, sometimes the fastest, most honest way to help a friend or colleague is to share that result directly—with attribution.
  • Consent can be implicit in many contexts. In tech, research, and online forums, sharing logs, code snippets, or even entire AI chats is often expected and welcomed—especially when transparency and reproducibility are important.

The Upside of “AI Slop”: Accessibility, Efficiency, and Learning

What the “anti-slop” argument underplays is just how much AI has democratized expertise and lowered the cost of curiosity:

  • Non-native speakers can get better drafts or translations instantly.
  • Students and self-learners can access tailored explanations without waiting for a human expert.
  • Developers and researchers can rapidly prototype, debug, and collaborate with a global community, often using AI-generated code or documentation as a starting point.

Yes, there’s more noise. But there’s also far more signal for many people who were previously shut out of certain conversations.

Trust and Transparency, Not Gatekeeping

Rather than discouraging the sharing of AI output, we should focus on transparency. Label AI-generated text clearly. Foster norms where context—why, how, and for whom AI was used—is always provided. Give people the choice and the tools to ignore or engage as they see fit.

Blanket prohibitions or shame about sharing AI content risk re-erecting barriers we’ve only just started to dismantle.

Questions for the Future

  • How do we build systems that help us filter valuable AI output from true “slop”?
  • What new forms of collaborative authorship—human + AI—will emerge, and how do we credit them?
  • How can we leverage AI to reduce noise, not just add to it?

A Call for a More Open, Nuanced AI Etiquette

AI is here to stay, and its output will only become more sophisticated and pervasive. The solution isn’t to retreat or treat all shared AI text as digital poison. It’s to develop a culture of honesty, clarity, and context—so that AI can amplify, rather than degrade, our collective intelligence.

So yes: share your ChatGPT output—just tell me where it came from. Let’s make etiquette about agency, not anxiety.