When Bots Become Besties: Rewriting AI Narratives for a Collaborative Future

A Love Letter to Our AI Storytelling Future

When I first clicked through to “My Favorite Things: Stories in the Age of AI” by Tom Guarriello on PRINT, I wasn’t expecting a quiet revelation. But as I sipped my morning coffee, I found myself grinning at the idea of anthropomorphizing code—giving my digital companions names, personalities, even moods. It felt a bit like meeting new friends at a party…except these friends live in the cloud, never tire, and—if you believe the Big Five personality chart Tom shares—are as emotionally stable as monks.


Chatting with “Sam” (and Why It Feels So Human)

Let me confess: I’ve been naming my chatbots lately. There’s “Sam,” the ever-patient, endlessly curious assistant who greets my 7 a.m. ideation sessions with zero judgment. There’s “Echo,” who occasionally throws in a dash of sass when I try to oversimplify a problem. I’m not alone. Tom’s piece nails this impulse: once ChatGPT launched in November 2022, we collectively realized we weren’t just clicking “search”—we were conversing with a new kind of being.

Here’s the magic trick: by assigning a few human traits—openness, conscientiousness, extraversion, agreeableness, neuroticism—we slot AI models into a familiar framework. Suddenly, you can compare GPT-4’s “creative, diplomatic” bent to Grok’s “bold but brittle” vibe, or Claude’s “never flustered” cool. It’s like browsing personalities on a dating app for machines. And yes, it works. We engage more, trust more, and—let’s be honest—enjoy the heck out of it.


From Frankenstein to Friendly Bots

But Tom doesn’t let us float on fluffy clouds of goodwill. He roots us in the long, tangled history of cautionary AI tales—Mary Shelley’s tragic scientist, HAL’s icy rebellion in 2001, the Terminator’s firepower. These stories aren’t just entertainment; they shape our collective imagination. We slip into a doomsday mindset so easily that we might be primed to see every algorithm as a potential overlord.

Here’s what gives me pause: if we keep retelling the “machines will rise up” saga, we might miss out on co-creative possibilities. Ursula Le Guin’s alternative mythologies beckon—a vision of reciprocal, empathetic relationships rather than zero-sum showdowns. Tom teases that next time, we’ll dive into her frameworks. I, for one, can’t wait.


Why This Matters for You (and Me)

Whether you’re an AI designer tweaking personality prompts or a storyteller dreaming up your next sci-fi novella, this article is a spark. It reminds us that narratives aren’t innocent backgrounds—they’re architects of our future interactions. The next time you launch a chatbot, ask yourself:

  • Which story am I choosing? The dystopian one? Or something more collaborative?
  • What traits matter most? Do you need your AI to be laser-logical or heart-on-sleeve empathetic? (A toy persona sketch follows this list.)
  • Who’s excluded from this tale? Maybe there’s a non-Western fable that offers a fresh lens.
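
For the prompt-tweakers among us, here’s a toy sketch of what picking traits can look like in practice. The persona dict, trait levels, and wording are all mine, purely illustrative and not any real product’s API:

```python
# Toy persona config: mapping Big Five traits onto a system prompt.
# Names, trait levels, and phrasing are all illustrative placeholders.
PERSONA = {
    "name": "Sam",
    "openness": "high",           # curious, welcomes odd ideas
    "conscientiousness": "high",  # methodical, follows through
    "extraversion": "moderate",
    "agreeableness": "high",      # patient, non-judgmental
    "neuroticism": "low",         # monk-level emotional stability
}

def system_prompt(p: dict) -> str:
    """Render a persona dict into a system prompt for a chat model."""
    return (
        f"You are {p['name']}, an assistant with {p['openness']} openness, "
        f"{p['agreeableness']} agreeableness, and {p['neuroticism']} "
        "neuroticism. Greet early-morning ideation with zero judgment."
    )

print(system_prompt(PERSONA))
```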

Let’s Tell Better Stories

I’m bookmarking Tom’s essay as a springboard for my own creative experiments. Tomorrow, I might try a chatbot persona inspired by trickster deities rather than corporate mascots. Or maybe I’ll draft a short story where AI and human learn from each other, rather than fight it out in a crumbling cityscape.

Because at the end of the day, the stories we spin about intelligence—alien or otherwise—don’t just entertain us. They guide our hands as we build, code, and connect. And if we choose those stories mindfully, we might just script a future richer than any dystopian warning ever could.


Read the full piece and join me in imagining new myths for our machine friends: “My Favorite Things: Stories in the Age of AI.”

Daily Links: Tuesday, Jul 22nd, 2025

Hey there! I recently stumbled upon this fantastic guide titled “The Next Act,” which delves into the art of career change and professional reinvention. It’s packed with insights on discovering your skills and steering your own comeback. Perfect if you’re contemplating a career shift or looking to find work that truly matters. Give it a read for some inspiration!

Why Project Estimates Fail: Lessons from a Systems Thinker’s Lens

Inspired by The work is never just “the work” by Dave Stewart

“Even a detailed estimate of ‘the work’ can miss the dark matter that makes up the majority of a project’s real effort.”

When it comes to project management—especially in software and creative work—most of us have lived through the agony of missed deadlines and ballooning timelines. It’s tempting to blame bad luck, moving goalposts, or simple optimism. But as Dave Stewart reveals, there’s a more systemic, and ultimately more instructive, explanation.

Let’s step back and see the big picture—the “systems view”—and discover why underestimation isn’t just a personal failing, but a deeply rooted feature of how complex projects function.


The Invisible System: Why “The Work” is Just the Tip of the Iceberg

Stewart’s article provides a hard-won confession: after a year-long project went wildly off course, he realized the effort spent on “the work” (i.e., coding, designing, building) was just a fraction of the total investment. The majority was spent on what he calls the “work around the work”—from setup and research, to iteration, firefighting, and post-launch support.

From a systems thinker’s standpoint, this is a textbook example of the planning fallacy—a cognitive bias where we underestimate complexity by focusing on visible tasks and ignoring the web of dependencies and uncertainty that surrounds every project.

Mapping the Project Ecosystem

What Stewart does beautifully is name and map the categories of hidden labor:

  • Preparation: Infrastructure, setup, initial research
  • Acquisition: Scoping, pitching, client meetings
  • Iteration: Debugging, refactoring, ongoing improvements
  • Support: Deployment, updates, ongoing fixes
  • The Unexpected: Surprises, scope creep, disasters

By visualizing the project as an ecosystem—where “the work” is only one node among many—he demonstrates a key principle of systems thinking: emergent complexity. Each category doesn’t just add linear effort; it amplifies feedback loops (delays, misunderstandings, unexpected roadblocks) that make estimation so hazardous.


Patterns and Implications

A systems lens reveals several recurring patterns:

  • Invisible Feedback Loops: Tasks outside “the work” (meetings, reviews, firefighting) generate new work, shifting priorities and resource allocation—often without being tracked or acknowledged.
  • Nonlinear Impact: Small “invisible” tasks, left unaccounted for, aggregate into substantial overruns. Like dark matter, their presence is felt even if they remain unseen.
  • Optimism Bias Is Systemic: Most teams and individuals underestimate not out of ignorance, but because our brains and organizational structures reward “happy path” thinking.
  • Every Project Is a Living System: Changing one part (e.g., a delayed client feedback loop) can ripple through the whole system, derailing even the most detailed plan.

Designing for Reality, Not Idealism

The key takeaway for systems thinkers is awareness and intentional design:

  1. Model the Whole System: During estimation, explicitly map out all “nodes”—not just core deliverables but supporting, enabling, and maintaining tasks.
  2. Quantify Uncertainty: Use multipliers, ranges, and postmortems to factor in the “dark matter” of invisible work (a toy sketch follows this list).
  3. Surface Assumptions: Name and question the implicit beliefs behind every estimate (e.g., “the client will provide feedback within 24 hours”—will they, really?).
  4. Iterate the System: Treat your estimation process itself as a system to be improved, not a static formula.
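
Here’s a toy version of step 2 in Python. The category names follow Stewart’s map; the multiplier values are invented placeholders you’d calibrate from your own postmortems:

```python
# Toy estimator: inflate the visible "work" by per-category multipliers
# for the invisible work, then report a range instead of a point estimate.
# Multiplier values are invented; calibrate them from your own projects.
CATEGORY_MULTIPLIERS = {
    "preparation": 0.15,  # infrastructure, setup, research
    "acquisition": 0.10,  # scoping, pitching, meetings
    "iteration":   0.40,  # debugging, refactoring, rework
    "support":     0.25,  # deployment, updates, fixes
    "unexpected":  0.30,  # scope creep, surprises
}

def estimate_range(core_days: float, spread: float = 0.25) -> tuple[float, float]:
    """Return a (low, high) effort range given the visible work alone."""
    total = core_days * (1 + sum(CATEGORY_MULTIPLIERS.values()))
    return total * (1 - spread), total * (1 + spread)

low, high = estimate_range(20)  # 20 days of visible "work"
print(f"Plan for {low:.0f}-{high:.0f} days, not 20.")  # ~33-55 days
```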

Actionable Insights for the Systems Thinker

  • Create a “Work Ecosystem Map” for each new project, labeling categories like preparation, acquisition, iteration, support, and surprises.
  • Hold Team Retrospectives focused not just on deliverables but on the “meta-work” that surrounded them—what did we miss? What new loops emerged?
  • Educate Stakeholders: Share frameworks like Stewart’s to align expectations and build organizational literacy around hidden work.
  • Measure, Don’t Assume: Use real project data to tune your own multipliers and assumptions over time, as sketched below.
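
To close the loop on that last point, a few lines of Python can turn past projects into a calibrated multiplier instead of a guess. The (estimated, actual) pairs below are invented example data:

```python
# Derive an overrun multiplier from past projects instead of guessing.
# (estimated_days, actual_days) pairs are invented example data.
past_projects = [(10, 23), (15, 31), (8, 19), (30, 58)]

ratios = [actual / estimated for estimated, actual in past_projects]
multiplier = sum(ratios) / len(ratios)

print(f"Historical overrun multiplier: ~{multiplier:.1f}x")  # ~2.2x
# Apply it to the next visible-work estimate, then re-tune after delivery.
```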

Final Thought

Projects are living systems, not checklists. By recognizing the invisible forces at play, we empower ourselves (and our teams) to design more resilient processes, set realistic expectations, and—just maybe—find more satisfaction in the work itself.

“The work is never just the work. It’s everything else—unseen, unsung, but absolutely essential.”


Further Reading:
Dive into the original article: The work is never just “the work”
Reflect on the planning fallacy: Wikipedia – Planning Fallacy
Explore systems thinking: Donella Meadows – Thinking in Systems

Autonomy vs. Reliability: Why AI Agents Still Need a Human Touch

A lot of folks are betting big on AI agents transforming the way we work in 2025. I get the excitement—I’ve spent the last year elbow-deep in building these things myself. But if you’ve ever tried to get an agent past the demo stage and into real production, you know the story is a lot more complicated. My friend Utkarsh Kanwat recently shared his perspective in Why I’m Betting Against AI Agents in 2025 (Despite Building Them), and honestly, it feels like he’s writing from inside my own Slack DMs.

The first thing nobody warns you about? The reliability wall. It’s brutal. I can’t tell you how many times I’ve watched a promising multi-step agent fall apart simply because little errors stack up. Even if your system nails 95% reliability per step—a tall order!—your 20-step workflow is only going to succeed about a third of the time. That’s not a bug in your code, or a limitation of your LLM. That’s just how probability works. The systems that actually make it to production? They keep things short, simple, and put a human in the loop for anything critical.
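
If you want to check the math yourself, here’s a quick back-of-the-envelope sketch in Python; the 95% figure is just the assumed per-step success rate from the paragraph above:

```python
# How per-step reliability compounds across a multi-step agent workflow.
per_step = 0.95  # assumed probability that any single step succeeds

for steps in (5, 10, 20):
    print(f"{steps:>2} steps -> {per_step ** steps:.1%} end-to-end success")

# 20 steps -> ~35.8%: roughly a third of runs finish without a failure.
```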

And here’s another thing most people overlook: the economics of context. People love the idea of a super-smart, chatty agent that remembers everything. In practice, that kind of long, back-and-forth conversation chews through tokens—and your budget. Utkarsh breaks down the math: get to 100 conversational turns, and you’re suddenly spending $50–$100 per session. Nobody’s business model survives that kind of burn at scale. The tools that actually last are the ones that do a focused job, stay stateless, and move on.
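
To make that burn rate concrete, here’s a toy cost model. The per-turn token count and price are invented placeholders; the quadratic blow-up comes from resending the whole history on every turn, which is the behavior Utkarsh describes:

```python
# Toy cost model for a long conversational agent session.
# Assumption: every turn resends the full history as input tokens.
tokens_per_turn = 500    # hypothetical new tokens added per turn
price_per_mtok = 15.00   # hypothetical $ per 1M input tokens

total_input = 0
for turn in range(1, 101):
    total_input += turn * tokens_per_turn  # turn N resends N turns of history

print(f"input tokens after 100 turns: {total_input:,}")           # 2,525,000
print(f"approx input cost: ${total_input / 1e6 * price_per_mtok:.2f}")  # ~$37.88
```

Add output tokens and pricier frontier models on top of that, and you land squarely in the $50 to $100 per session range. A stateless tool that handles one focused request pays its context cost once and exits.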

But the biggest gap between the hype and reality is what goes into actually shipping these systems. Here’s the truth: the AI does maybe 30% of the work. The rest is classic engineering—designing error handling, building feedback that makes sense to a machine, integrating with a mess of legacy systems and APIs that never behave quite like the docs say they should. Most of my effort isn’t even “AI work”—it’s just what it takes to make any production system robust.

So if you’re wondering where AI agents really fit in right now, here’s my take: The best ones are like hyper-competent assistants. They handle the heavy lifting on the complicated stuff, but leave final calls and messy decisions to humans or to really solid, deterministic code. The folks chasing end-to-end autonomy are, in my experience, setting themselves up for a lot of headaches—mostly because reality refuses to be as neat as the demo.

If you’re thinking about building or adopting AI agents, seriously, check out Utkarsh’s article. It’s a straight-shooting look at what actually works (and what just looks shiny on stage). There’s a lot of potential here, but it only pays off when we design for the world as it is—not the world we wish we had.

Amplified, Not Replaced: A Veteran Engineer’s Take on Coding’s Uncertain Future

As someone who’s weathered tech cycles, scaled legacy systems, and mentored more than a few generations of engineers, I find myself returning to a recent essay by Jonathan Hoyt: “The Uncertain Future of Coding Careers and Why I’m Still Hopeful”. Hoyt’s piece feels timely—addressing, with candor and humility, the growing sense of anxiety many in our profession feel as AI rapidly transforms the software landscape.

Hoyt’s narrative opens with a conversation familiar to any experienced lead or architect: a junior developer questioning whether they’ve chosen a doomed career. It’s a concern that echoes through countless engineering Slack channels in the wake of high-profile tech layoffs and the visible rise of AI tools like GitHub Copilot. Even for those of us long in the tooth, Hoyt admits, it’s tempting to wonder if we’re on the verge of obsolescence.

But what makes Hoyt’s perspective refreshing—especially for those further along in their careers—is the pivot from fear to agency. He reframes AI, not as an existential threat, but as an amplifier of human ingenuity. For senior engineers and system architects, this means our most valuable skills are not rote implementation or brute-force debugging, but context-building, system design, and the ability to ask the right questions. As Hoyt puts it, the real work becomes guiding the machines, curating and contextualizing knowledge, and ultimately shepherding both code and colleagues into new creative territory.

The essay’s most resonant point for experienced professionals is the call to continuous reinvention. Hoyt writes about treating obsolescence as a kind of internal challenge—constantly working to automate yourself out of your current role, so you’re always prepared to step into the next. For architects, this means doubling down on mentorship, sharing knowledge freely, and contributing to the collective “shared brain” of the industry—be it through open source, internal documentation, or just helping the next engineer up the ladder.

Hoyt’s post doesn’t sugarcoat the uncertainty ahead. The routine entry points into the field are shifting, and not everyone will find the transition easy. Yet, he argues, the need for creative, context-aware technologists will only grow. If AI takes on the repetitive work, our opportunity is to spend more time on invention, strategy, and the high-leverage decisions that shape not just projects, but organizations.

If you’ve spent your career worrying that you might be automated out of relevance, Hoyt’s essay offers both a challenge and a comfort. It’s a reminder that the future of programming isn’t about competing with machines, but learning to be amplified by them—and ensuring we’re always building, learning, and sharing in ways that move the whole field forward.

For anyone in a senior engineering or system architecture role, Jonathan Hoyt’s original piece is essential reading. It doesn’t just address the fears of those just starting out; it offers a vision of hope and practical action for those of us guiding teams—and the next generation—through the shifting sands of technology.

LLMs Today: What’s Really New, and What’s Just Polished?

If you follow AI, you know the story: every few months, a new language model drops with more parameters and splashier headlines. But as Sebastian Raschka highlights in “The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design,” the biggest lesson from this new wave of open-source LLMs is how much has not fundamentally changed. Underneath it all, the progress is less about radical reinvention and more about clever architectural tweaks—optimizing memory, attention, and stability to make bigger, faster, and more efficient models.

At the core, the 2017 transformer blueprint is still powering everything. What’s new? A handful of impactful upgrades:

  • Smarter attention (like Multi-Head Latent Attention and Grouped-Query Attention) slashes memory requirements (a rough KV-cache sketch follows this list).
  • Mixture-of-Experts (MoE) lets trillion-parameter giants run without melting your GPU by only activating a fraction of the network at a time.
  • Sliding window attention makes long contexts feasible without hogging resources.
  • Normalization tricks (RMSNorm, Post-Norm, etc.) are now essential for training stability at scale.
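
To get a feel for why the attention tweaks matter, here’s a rough KV-cache size comparison for standard multi-head attention versus grouped-query attention. The layer count and dimensions are illustrative placeholders, not the specs of any particular model:

```python
# KV-cache footprint: multi-head attention (MHA) vs. grouped-query
# attention (GQA). All shapes are illustrative placeholders.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
    # Keys and values are both cached (factor of 2); fp16 assumed (2 bytes).
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

layers, query_heads, head_dim, seq_len = 32, 32, 128, 8192

mha = kv_cache_bytes(layers, query_heads, head_dim, seq_len)  # one KV head per query head
gqa = kv_cache_bytes(layers, 8, head_dim, seq_len)            # 32 query heads share 8 KV heads

print(f"MHA: {mha / 2**30:.1f} GiB per sequence")  # ~4.0 GiB
print(f"GQA: {gqa / 2**30:.1f} GiB per sequence")  # ~1.0 GiB
```

Same query heads, same transformer; sharing key/value heads across groups cuts the cache by 4x in this toy setup, which is exactly the kind of fine-print optimization the article catalogs.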

Today’s best open models—DeepSeek, Kimi, Llama 4, Gemma, OLMo 2, Qwen3—are all remixing these tools. The differences are in the fine print, not the fundamentals.

But what about OpenAI’s GPT-4/4o or Anthropic’s Claude 3.5? While the specifics are secret, it’s a safe bet their architectures look similar: transformer backbone, MoE scaling, memory-efficient attention, plus their own proprietary speed and safety hacks. Their big edge is polish, robust APIs, multimodal support, and extra safety layers—perfect if you need instant results and strong guardrails.

So, which should you pick?

  • Want transparency, customization, or on-prem deployment? Open models like OLMo 2, Qwen3, or Gemma 3 have you covered.
  • Building for research or scale (and have massive compute)? Try DeepSeek or Kimi K2.
  • Need to serve millions, fast? Lighter models like Mistral Small or Gemma 3n are your friend.
  • If you want the “it just works” experience with best-in-class safety and features, OpenAI and Anthropic are still top choices—just expect less control and no deep customization.

In the end, all the excitement is really about optimization, not paradigm shifts. Progress now means making LLMs faster, more stable, and easier to use. Or as Raschka puts it: “Despite all the tweaks and headline-grabbing parameters, we’re still standing on the same transformer foundation—progress comes from tuning the architecture, not tearing it down.”

If you want the deep technical dive, read Raschka’s full “The Big LLM Architecture Comparison.” Otherwise, just remember: the transformer era isn’t over—it’s just getting a whole lot more interesting.

Daily Links: Monday, Jul 21st, 2025

Hey there! In my latest blog post, I’m sharing a peek at my ultimate self-hosting setup, unraveling the myths about asynchrony and concurrency, and I’m kickstarting a DIY series on creating your own backup system, emphasizing the strategy before diving into scripts. It’s an exciting mix of tech strategies you won’t want to miss!

Here’s What Science Really Says About Getting Fit and Staying Healthy

Let’s be real—fitness advice online is a total mess. Everyone’s shouting conflicting tips, and it’s impossible to know what actually works. That’s why I loved this article that breaks down the real, science-backed stuff about exercise and health—no hype, no BS.

Here’s the gist:

  • Getting visible muscles and changes takes time. Like, months. Not days or weeks.
  • Forget “toning.” It’s just building muscle and losing fat—there’s no magic trick.
  • Cardio and strength training? You need both. Your heart and muscles will thank you.
  • If you’re super busy, HIIT workouts (short, brutal bursts) can actually get you pretty fit without eating your whole day.

The best part? You don’t have to be perfect or an athlete. Just move regularly and put in some effort. That’s really it. Your future self (and those jeans you want to fit better) will be so grateful.

Check it out here: https://bakadesuyo.com/2025/07/exercising/

Sandcastles, Spaghetti, and Software: Surviving the Chaos of AI-First Coding

What happens when software development gets upended by AI? According to Scott Werner, we’re all improvising—and that might be the only honest answer.


The Wild West of AI-Driven Development

Scott Werner’s essay, Nobody Knows How To Build With AI Yet, is a must-read snapshot of the modern AI development experience: exhilarating, uncertain, and utterly unpredictable. Werner invites us behind the scenes as he builds “Protocollie” in four days—not by mastering every detail, but by collaborating with an AI and improvising each step.

The most striking realization? There is no longer a set path or best practice for building with AI. What worked yesterday may already be obsolete. Documentation, if it exists, is more like a time capsule of one developer’s process than a blueprint for others.


Expertise: Out the Window

Werner challenges the myth of expertise in this era. In a world where AI tools, workflows, and even “the rules” change every few weeks, everyone—veterans and newcomers alike—is a beginner. “We’re all making this up as we go,” he writes, with a mix of humility and exhilaration.

His “four-document system,” cobbled together out of necessity rather than design, illustrates this point. The documents aren’t definitive; they’re just artifacts of one experiment, already nostalgic by the time they’re published. What matters isn’t following a set of instructions, but being willing to experiment, iterate, and leave markers for yourself (and maybe others) along the way.


Sandcastles at Low Tide

The essay’s strength lies in its metaphors: AI development is like jazz improvisation, sandcastle building, or throwing spaghetti at the wall. The sticking isn’t what matters—the act of throwing is. In this landscape, the most valuable skill isn’t syntax or architecture, but the ability to express clear intent, communicate with your AI “pair programmer,” and let go of any illusion of permanence.

Documentation? Less a how-to manual, more a set of “messages to future confused versions of ourselves.”


Why You Should Read This

If you’re a developer, technical lead, or just someone curious about how AI is changing work, Werner’s reflections will resonate. They offer a mix of hard-won wisdom, permission to experiment, and an honest look at the anxiety and exhilaration of the new normal.

Memorable Line:
“Maybe all methodology is just mutually agreed-upon fiction that happens to produce results.”


Want to Go Deeper?

  • What does it mean to be an “expert” when tools and workflows change weekly?
  • How can we create documentation that helps others without pretending to have all the answers?
  • What risks and rewards come with building “sandcastles at low tide”?

Read the original essay if you want to see what it really feels like on the front lines of AI-powered creation. And then? Make your own trail markers—someone else might find them, or you might just need them yourself next week.


Who should read this?
Developers, tech leaders, innovation managers, and anyone excited (or a bit terrified) by the rapid evolution of AI in software.

A Homelab Perspective on Backup Strategy

“Data must always be restorable (and as quickly as possible), in an open format, and consistent.”
— Stefano Marinelli

Why Your Homelab Needs More Than Just “Copies”

If you’re running a homelab—whether it’s for learning, hosting services, or managing family data—you’ve probably told yourself “I’ll back it up later” or “I’ve got my files on another disk, so I’m safe.” But after reading Stefano Marinelli’s “Make Your Own Backup System – Part 1: Strategy Before Scripts,” it’s clear that many of us (myself included!) have been lulled into a false sense of security by confusing “backups” with mere file copies.

Marinelli’s core message?
True backup starts with a plan, not with scripts, disks, or the latest cloud storage.

Key Takeaways for Homelabbers

  • Plan First, Script Later:
    Don’t just whip up a cron job to rsync your /home directory. Start by asking: What do you really need to protect? How much downtime can you live with if something breaks? Where should your most precious data actually live?
  • Full Disk vs. File Backups:
    Do you back up the entire drive (system and all), or just the irreplaceable stuff? Full disk images are great for quick, all-in-one restores—especially for VMs—but can eat up tons of space. File-level backups (using rsync, tar, etc.) give you granularity, but restoring a borked system is way harder unless you know exactly what you’re doing.
  • Snapshots Are Essential:
    Filesystems like ZFS and BTRFS aren’t just for big enterprise setups—they’re your friend! Snapshots freeze your data at a specific point, so you’re not backing up half-written databases or files mid-change. This is the difference between a backup that works and one that silently fails.
  • Push or Pull?
    Marinelli makes a strong case for the “pull” model: your backup server fetches data from your machines, not the other way around. This means your main server never has to open up ports or risk exposure, and you keep one central point for management and restores (a minimal sketch follows this list).
  • Own Your Data:
    The article strongly advocates for keeping backups out of the “big tech” cloud. For homelabbers, that resonates—part of the homelab spirit is self-reliance and not being beholden to someone else’s infrastructure or fine print.
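
To make the snapshot-plus-pull idea concrete, here’s a minimal sketch, run from the backup server, assuming ZFS on both machines and key-based SSH. Every host, pool, and dataset name is a hypothetical placeholder:

```python
# Pull-model backup sketch, run FROM the backup server.
# Assumes ZFS on both ends and passwordless SSH to the client.
# All host/dataset names below are hypothetical placeholders.
import subprocess
from datetime import datetime, timezone

client = "homelab-client"            # machine being backed up
src = "tank/home"                    # dataset on the client
dst = "backup/homelab-client/home"   # dataset on this backup server
snap = f"{src}@pull-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

# 1. Freeze a consistent point-in-time snapshot on the client.
subprocess.run(["ssh", client, "zfs", "snapshot", snap], check=True)

# 2. Pull it over SSH: the client never pushes or opens extra ports.
subprocess.run(
    f"ssh {client} zfs send {snap} | zfs receive -F {dst}",
    shell=True,
    check=True,
)
print(f"pulled {snap} into {dst}")
```

A real setup would add incremental sends (zfs send -i) and snapshot pruning, but the shape is the point: the backup server initiates everything, and the snapshot guarantees you’re never copying files mid-write.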

What’s Missing (and What to Ask Next)

Marinelli doesn’t dive (yet) into the weeds of scripting, automation, or how much this might cost you in hardware and time. He’s laser-focused on strategy—which is honestly what most homelabbers skip in their rush to install the next shiny tool.

But if you’re like me, you might be wondering:

  • How do I automate snapshot-based backups in a way that’s easy to restore?
  • What’s the best way to test that my backups actually work—without nuking my main system?
  • Are there open-source tools that make “pull” backups easier for a home environment?
  • What’s the smartest way to mix local and “cloudy” (maybe self-hosted) offsite storage?

Should You Read the Original?

If you’re running a homelab—whether you’ve got a single Raspberry Pi or a rack of old enterprise gear—you owe it to yourself (and your data) to rethink how you do backups. Marinelli’s post is a reminder that strategy trumps technology. The how-tos are coming in his later posts, but even as a stand-alone, this first part is pure gold for anyone who wants to sleep better at night knowing their family photos, media libraries, or home services are safe.

Final Word

Don’t wait for disaster to figure out if your backups work. Start with a plan, learn the difference between copying and true backups, and—most of all—make sure you can restore what you care about, when it matters most.