A Personal Take on Driving AI Adoption (and Why Mindset Matters More Than Tech)

I recently came across Yue Zhao’s insightful article, “What Most Leaders of AI Get Wrong About Driving Adoption,” and was reminded how often the human side of change gets overlooked. As an AI advocate, I’ve seen even the most promising initiatives stall, not because the technology failed, but because people weren’t ready.

Why Technical Focus Alone Isn’t Enough
It’s tempting to believe that once teams learn the latest AI tools, adoption will naturally follow. Yet time and again, projects falter not for lack of skill but because fear and uncertainty go unaddressed. When people feel anxious about what AI means for their roles, they hesitate to experiment or speak up—even when the technology could help them thrive.

Three Simple Shifts with Big Impact
Yue outlines a change-management approach that puts people first. Here’s how I’m applying it:

  1. Acknowledge and Address Fear. Instead of glossing over concerns, create dedicated forums, like quick “AI myth-busting” discussions, where everyone can voice questions and get clear answers. These sessions demystify the technology and validate genuine worries.
  2. Share Your Thinking. Transparency builds trust. I maintain a lightweight “AI decision diary” that outlines which tools we’re evaluating, why, and what trade-offs matter. This openness invites feedback and keeps everyone aligned.
  3. Build Together. Co-creation beats top-down edicts every time. Host hands-on sprints with diverse team members to prototype AI-enabled workflows. Even a short, focused session can spark ideas that stick—and foster ownership.

Real-World Reflections
After running these inclusive sessions with various teams, I’ve seen a noticeable shift: participants move from skepticism to genuine curiosity. The simple act of co-designing experiences turns apprehension into enthusiasm.

Why This Matters for You
True AI adoption isn’t about deploying the flashiest model; it’s about empathy and collaboration. When you weave in conversations about fear, share your rationale openly, and invite people into the process early, you transform AI from a mandate into a shared opportunity.

Your Turn
What’s the biggest roadblock you’ve faced when introducing AI? Reply with your experiences, and let’s explore solutions together.

Rewiring AI: Putting Humans Back in the Loop

I’ll admit it: I used to love the promise of “one-click magic” in my observability dashboard. Who wouldn’t want the AI to just fix that pager alert at 2 AM? But after reading “Stop Building AI Tools Backwards” by Hazel Weakly, I’ve come around to a stark realization: those “auto” buttons are exactly what’s hollowing out our edge as practitioners.

Here’s the thing—I’m a firm believer that we learn by doing, not by watching. Cognitive science calls it retrieval practice: you solidify knowledge only when you actively pull it from your own brain. Yet most AI assistants swoop in, do the work, and leave you wondering what just happened. It’s like teaching someone to bake by baking the cake for them. Fun for a minute, but no one actually masters the recipe.

Instead, imagine an AI that behaves like an “absent-minded instructor”—one who nudges you through each step of your incident playbook without ever taking the wheel. Using the author’s EDGE framework, it would:

  1. Explain by surfacing missing steps (“Have you considered rolling back that deploy?”), not just offering “click to fix” tooltips.
  2. Demonstrate with a 15-second animation of how to compare time ranges in your monitoring UI—turning your rough query into the exact syntax you need.
  3. Guide by asking Socratic questions (“What trace IDs have you checked so far?”), ensuring you articulate your plan instead of mindlessly pressing “Continue.”
  4. Enhance by watching your actions and suggesting incremental shortcuts (“I noticed you always filter to the five minutes before an alert; shall I pin that view next time?”).

Every interaction becomes a micro-lesson, reinforcing your mental models rather than eroding them.
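
To make that loop concrete, here’s a minimal sketch of what one EDGE pass might look like in code. To be clear, this is my own riff, not anything from Hazel’s article: `IncidentContext`, `edge_step`, and the `ask_llm` callable are hypothetical stand-ins for your incident data and whatever model client you use.

```python
# Hypothetical sketch of one EDGE pass for an incident assistant.
# None of these names come from the article; they stand in for your
# own incident data and LLM client.
from dataclasses import dataclass, field


@dataclass
class IncidentContext:
    """What the responder has looked at so far (illustrative shape)."""
    alert: str
    checked_traces: list[str] = field(default_factory=list)
    actions_taken: list[str] = field(default_factory=list)


def edge_step(ctx: IncidentContext, ask_llm) -> dict:
    """One Explain / Demonstrate / Guide / Enhance pass.

    `ask_llm` is any callable taking a prompt string and returning text.
    Every prompt asks the human to act; the assistant never acts for them.
    """
    return {
        # Explain: surface a step the responder may have skipped.
        "explain": ask_llm(
            f"Given alert '{ctx.alert}' and actions {ctx.actions_taken}, "
            "name ONE playbook step that seems to be missing, phrased as "
            "a question, e.g. 'Have you considered rolling back that deploy?'"
        ),
        # Demonstrate: show how, without doing it.
        "demonstrate": ask_llm(
            "Show the exact query syntax to compare the last 15 minutes "
            "against the same window yesterday. Do NOT run it."
        ),
        # Guide: a Socratic prompt that makes the responder state a plan.
        "guide": f"What trace IDs have you checked so far? (logged: {ctx.checked_traces})",
        # Enhance: offer a shortcut inferred from observed behavior.
        "enhance": ask_llm(
            f"Recent responder actions: {ctx.actions_taken}. Suggest ONE "
            "optional shortcut, phrased as an offer, never as an action."
        ),
    }
```

To play with it, stub the model out entirely: something like `edge_step(IncidentContext(alert="p99 latency spike"), lambda p: input(p))` lets you act as both instructor and student.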

I’ve started riffing on this idea in my own workflow. When I review pull requests, I ask our AI bot not to rewrite the code for me, but to quiz me: “What edge cases might this new function miss?” If I can’t answer, it highlights relevant docs or tests. Suddenly, I’m more prepared for production bugs—and I actually remember my review process.
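
Here’s the rough shape of that quiz-first prompt, wired into a small helper. Again, a sketch under assumptions: `complete` stands in for whatever chat-completion client you already have, and the prompt wording is just my starting point.

```python
# Hypothetical quiz-first PR review helper. `complete` is any callable
# that sends a prompt to your model and returns its reply.
def quiz_reviewer(diff: str, complete) -> str:
    prompt = (
        "You are a reviewing coach. Do NOT rewrite or fix the code below.\n"
        "Instead, ask me three questions about edge cases this change\n"
        "might miss, then stop. Only after I answer should you point me\n"
        "at the docs or tests that cover anything I got wrong.\n\n"
        f"--- diff ---\n{diff}"
    )
    return complete(prompt)


if __name__ == "__main__":
    # Stub the model so the sketch runs on its own.
    fake_complete = lambda p: "Q1: What happens when the list is empty?"
    print(quiz_reviewer("+ def head(xs): return xs[0]", fake_complete))
```

The design choice that matters is in the prompt, not the plumbing: the bot is explicitly forbidden from doing the work, which is what forces the retrieval practice.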

What really blew me away in “Stop Building AI Tools Backwards” was the emphasis on cumulative culture: the idea that real innovation happens when teams iterate together, standing on each other’s shoulders. By capturing each developer’s on-the-job recalls and refinements, AI tools can become living archives of tribal knowledge, not just glorified search bars.
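
One way to picture that living archive: every quiz exchange gets appended somewhere searchable, so the next responder inherits the last one’s recall. A toy sketch, with a plain JSONL file (a name I made up) standing in for whatever store you’d actually use:

```python
# Toy sketch of a "living archive": append each question-and-recall
# exchange to a JSONL file so later searches surface prior reasoning.
# The path and record shape are illustrative, not from the article.
import json
import time
from pathlib import Path

ARCHIVE = Path("team_recall_archive.jsonl")  # hypothetical location


def record_recall(question: str, answer: str, author: str) -> None:
    """Append one retrieval-practice exchange to the shared archive."""
    entry = {"ts": time.time(), "author": author,
             "question": question, "answer": answer}
    with ARCHIVE.open("a") as f:
        f.write(json.dumps(entry) + "\n")


def search_recalls(term: str) -> list[dict]:
    """Naive substring search; a real system would index or embed entries."""
    if not ARCHIVE.exists():
        return []
    hits = []
    for line in ARCHIVE.read_text().splitlines():
        entry = json.loads(line)
        if term.lower() in (entry["question"] + " " + entry["answer"]).lower():
            hits.append(entry)
    return hits
```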

Of course, building these “human-first” experiences takes more thought than slapping an “Auto Investigate” button on your UI. But the payoff is huge: your team retains critical reasoning skills, shares best practices organically, and feeds high-quality data back into the system for ever-smarter suggestions.

So next time you’re tempted to automate away a few clicks, ask yourself: am I strengthening my team’s muscle memory, or erasing it? If you want to see how to do AI tooling the right way, check out “Stop Building AI Tools Backwards” and let’s start rewiring our interfaces for collaboration and growth.

Read the full article here: “Stop Building AI Tools Backwards.”