Why Project Estimates Fail: Lessons from a Systems Thinker’s Lens

Inspired by The work is never just “the work” by Dave Stewart

“Even a detailed estimate of ‘the work’ can miss the dark matter that makes up the majority of a project’s real effort.”

When it comes to project management—especially in software and creative work—most of us have lived through the agony of missed deadlines and ballooning timelines. It’s tempting to blame bad luck, moving goalposts, or simple optimism. But as Dave Stewart reveals, there’s a more systemic, and ultimately more instructive, explanation.

Let’s step back and see the big picture—the “systems view”—and discover why underestimation isn’t just a personal failing, but a deeply rooted feature of how complex projects function.


The Invisible System: Why “The Work” Is Just the Tip of the Iceberg

Stewart’s article offers a hard-won confession: after a year-long project went wildly off course, he realized that the effort spent on “the work” (i.e., coding, designing, building) was just a fraction of the total investment. The majority went to what he calls the “work around the work”—from setup and research to iteration, firefighting, and post-launch support.

From a systems thinker’s standpoint, this is a textbook example of the planning fallacy—a cognitive bias in which we underestimate time and effort by focusing on visible tasks and ignoring the web of dependencies and uncertainty that surrounds every project.

Mapping the Project Ecosystem

What Stewart does beautifully is name and map the categories of hidden labor:

  • Preparation: Infrastructure, setup, initial research
  • Acquisition: Scoping, pitching, client meetings
  • Iteration: Debugging, refactoring, ongoing improvements
  • Support: Deployment, updates, ongoing fixes
  • The Unexpected: Surprises, scope creep, disasters

By visualizing the project as an ecosystem—where “the work” is only one node among many—he demonstrates a key principle of systems thinking: emergent complexity. Each category not only adds linear effort but also amplifies feedback loops (delays, misunderstandings, unexpected roadblocks) that make estimation so hazardous.


Patterns and Implications

A systems lens reveals several recurring patterns:

  • Invisible Feedback Loops: Tasks outside “the work” (meetings, reviews, firefighting) generate new work, shifting priorities and resource allocation—often without being tracked or acknowledged.
  • Nonlinear Impact: Small “invisible” tasks, left unaccounted for, aggregate into substantial overruns. Like dark matter, their presence is felt even if they remain unseen.
  • Optimism Bias Is Systemic: Most teams and individuals underestimate not out of ignorance, but because our brains and organizational structures reward “happy path” thinking.
  • Every Project Is a Living System: Changing one part (e.g., a delayed client feedback loop) can ripple through the whole system, derailing even the most detailed plan.

Designing for Reality, Not Idealism

The key takeaway for systems thinkers is awareness and intentional design:

  1. Model the Whole System: During estimation, explicitly map out all “nodes”—not just core deliverables, but also the supporting, enabling, and maintenance work.
  2. Quantify Uncertainty: Use multipliers, ranges, and postmortems to factor in the “dark matter” of invisible work (a rough sketch follows this list).
  3. Surface Assumptions: Name and question the implicit beliefs behind every estimate (e.g., “the client will provide feedback within 24 hours”—will they, really?).
  4. Iterate the System: Treat your estimation process itself as a system to be improved, not a static formula.
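
To make step 2 concrete, here is a minimal sketch in Python of a multiplier-and-range estimate. The task hours, the 2.5x “dark matter” multiplier, and the 30% spread are illustrative assumptions, not figures from Stewart’s article; tune them against your own history.

    # A minimal sketch of multiplier-and-range estimation.
    # All numbers below are illustrative assumptions.

    visible_tasks = {          # hours for "the work" itself
        "build feature": 40,
        "design UI": 16,
    }

    DARK_MATTER_MULTIPLIER = 2.5   # total effort / visible effort, from past projects
    SPREAD = 0.3                   # +/- 30% uncertainty band

    def estimate(tasks, multiplier=DARK_MATTER_MULTIPLIER, spread=SPREAD):
        """Return (low, expected, high) hours, including invisible work."""
        visible = sum(tasks.values())
        expected = visible * multiplier
        return expected * (1 - spread), expected, expected * (1 + spread)

    low, expected, high = estimate(visible_tasks)
    print(f"{sum(visible_tasks.values())}h visible -> expect {expected:.0f}h, likely {low:.0f}-{high:.0f}h")

The point is not these particular numbers but the shape of the calculation: visible tasks are an input to the estimate, not the estimate itself.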

Actionable Insights for the Systems Thinker

  • Create a “Work Ecosystem Map” for each new project, labeling categories like preparation, acquisition, iteration, support, and surprises.
  • Hold Team Retrospectives focused not just on deliverables but on the “meta-work” that surrounded them—what did we miss? What new loops emerged?
  • Educate Stakeholders: Share frameworks like Stewart’s to align expectations and build organizational literacy around hidden work.
  • Measure, Don’t Assume: Use real project data to tune your own multipliers and assumptions over time (see the sketch after this list).
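
On that last point, here is a hedged sketch of how the multiplier itself can come from completed projects rather than a guess. The sample data is hypothetical.

    # Deriving a "dark matter" multiplier from completed projects.
    # The sample data below is hypothetical.

    past_projects = [
        # (estimated visible hours, actual total hours)
        (100, 240),
        (60, 150),
        (80, 210),
    ]

    ratios = [actual / estimated for estimated, actual in past_projects]
    multiplier = sum(ratios) / len(ratios)   # mean actual-to-estimate ratio

    print(f"observed ratios: {[round(r, 2) for r in ratios]}")
    print(f"apply ~{multiplier:.1f}x to the next visible-work estimate")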

Final Thought

Projects are living systems, not checklists. By recognizing the invisible forces at play, we empower ourselves (and our teams) to design more resilient processes, set realistic expectations, and—just maybe—find more satisfaction in the work itself.

“The work is never just the work. It’s everything else—unseen, unsung, but absolutely essential.”


Further Reading:
Dive into the original article: The work is never just “the work”
Reflect on the planning fallacy: Wikipedia – Planning Fallacy
Explore systems thinking: Donella Meadows – Thinking in Systems

In Defense of Sharing AI Output: Why “AI Slop” Isn’t the End of Meaningful Communication

Rethinking proof-of-thought, noise, and the upside of a more open AI culture.


Is sharing ChatGPT output really so rude?
A recent essay compares AI-generated text to a kind of digital pollution—a “virus” that wastes human attention and diminishes the value of communication. The author proposes strict AI etiquette: never share machine output unless you fully adopt it as your own or have explicit consent from the recipient.

It’s a provocative take, inspired by Peter Watts’ Blindsight, and it raises important questions about authenticity, value, and digital trust. But does it go too far? Is all AI-generated text “slop”? Is every forward or paste a violation of etiquette?

Let’s consider another perspective—one that recognizes the risks but also sees the immense value and potential of a world where AI-generated output is more freely shared.

“Proof-of-Thought” Was Always a Mirage

The essay’s nostalgia for a lost era of “proof-of-thought” is understandable. But let’s be honest: not every piece of human writing was ever insightful, intentional, or even useful. Spam, boilerplate, press releases, and perfunctory office emails have existed for decades—long before AI.
Authenticity and attention have always required discernment, not just faith in the medium.

AI may have made text cheap, but it has also made ideas more accessible and lowered the barriers to entry. That’s not a bug—it’s a feature.

Sharing AI Output: Consent, Context, and Creativity

Of course, etiquette matters. But to frame sharing AI text as inherently rude or even hostile misses some crucial points:

  • AI output can be informative, creative, and valuable in its raw form. Sometimes a bot’s phrasing or approach offers a new angle, and sharing that output can accelerate understanding, brainstorming, or problem-solving.
  • Explicit adoption isn’t always practical. If I ask ChatGPT to summarize a dense technical paper or translate a snippet of code, sometimes the fastest, most honest way to help a friend or colleague is to share that result directly—with attribution.
  • Consent can be implicit in many contexts. In tech, research, and online forums, sharing logs, code snippets, or even entire AI chats is often expected and welcomed—especially when transparency and reproducibility are important.

The Upside of “AI Slop”: Accessibility, Efficiency, and Learning

What the “anti-slop” argument underplays is just how much AI has democratized expertise and lowered the cost of curiosity:

  • Non-native speakers can get better drafts or translations instantly.
  • Students and self-learners can access tailored explanations without waiting for a human expert.
  • Developers and researchers can rapidly prototype, debug, and collaborate with a global community, often using AI-generated code or documentation as a starting point.

Yes, there’s more noise. But there’s also far more signal for many people who were previously shut out of certain conversations.

Trust and Transparency, Not Gatekeeping

Rather than discouraging the sharing of AI output, we should focus on transparency. Label AI-generated text clearly. Foster norms where context—why, how, and for whom AI was used—is always provided. Give people the choice and the tools to ignore or engage as they see fit.
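
One hypothetical convention (not something the essay proposes): attach a small provenance note to any AI text you share. The field names and format below are illustrative, sketched in Python.

    # A hypothetical provenance label for shared AI output.
    # One possible convention, not a standard; all field names are illustrative.

    ai_provenance = {
        "generated_by": "ChatGPT",        # which model or tool produced the text
        "shared_by": "jane@example.com",  # who is sharing, and taking responsibility
        "prompt_summary": "summarize a dense technical paper",
        "edited_by_human": False,         # raw output vs. adopted-as-your-own
    }

    def label(text: str, meta: dict = ai_provenance) -> str:
        """Prefix shared AI text with a human-readable provenance note."""
        status = "edited by a human" if meta["edited_by_human"] else "unedited"
        note = f"[AI-generated by {meta['generated_by']}, {status}; context: {meta['prompt_summary']}]"
        return f"{note}\n{text}"

Whether the note is machine-readable or just a sentence at the top matters less than the norm itself: the reader always knows what they are getting and can choose how to engage.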

Blanket prohibitions, or shame around sharing AI content, risk re-erecting barriers we’ve only just started to dismantle.

Questions for the Future

  • How do we build systems that help us filter valuable AI output from true “slop”?
  • What new forms of collaborative authorship—human + AI—will emerge, and how do we credit them?
  • How can we leverage AI to reduce noise, not just add to it?

A Call for a More Open, Nuanced AI Etiquette

AI is here to stay, and its output will only become more sophisticated and pervasive. The solution isn’t to retreat or treat all shared AI text as digital poison. It’s to develop a culture of honesty, clarity, and context—so that AI can amplify, rather than degrade, our collective intelligence.

So yes: share your ChatGPT output—just tell me where it came from. Let’s make etiquette about agency, not anxiety.

This Is Ludicrous – DeSantis Wants to Make Us Dumb

LINK (go ahead, read it, I’ll wait): Florida school district removes dictionaries from libraries, citing law championed by DeSantis

Do you know why they (conservatives/Republicans) want to remove dictionaries (and other books)? (HINT: it’s not about protecting our youth)

They (conservatives/Republicans) want to make us dumb.

They (conservatives/Republicans) don’t want us to be able to question things.

They (conservatives/Republicans) want to be able to tell us what things mean.

They (conservatives/Republicans) are scared that an educated and informed public would see right through them.

Aside: I highly suggest subscribing to the Popular Information newsletter; just understand it will make your blood boil at times.