I’ll admit it—I used to love the promise of “one-click magic” in my observability dashboard. Who doesn’t want the AI to just fix that pager alert for you at 2 AM? But after reading *Stop Building AI Tools Backwards* by Hazel Weakly, I’ve come around to a stark realization: those “auto” buttons are exactly what’s hollowing out our edge as practitioners.
Here’s the thing—I’m a firm believer that we learn by doing, not by watching. Cognitive science calls it retrieval practice: you solidify knowledge only when you actively pull it from your own brain. Yet most AI assistants swoop in, do the work, and leave you wondering what just happened. It’s like teaching someone to bake by baking the cake for them. Fun for a minute, but no one actually masters the recipe.
Instead, imagine an AI that behaves like an “absent-minded instructor”—one who nudges you through each step of your incident playbook without ever taking the wheel. Using the author’s EDGE framework, it would:
- Explain by surfacing missing steps (“Have you considered rolling back that deploy?”), not just offering “click to fix” tooltips.
- Demonstrate with a 15-second animation of how to compare time ranges in your monitoring UI, or by turning your rough query into the exact syntax you need.
- Guide by asking Socratic questions (“What trace IDs have you checked so far?”), ensuring you articulate your plan instead of mindlessly pressing “Continue” (a rough sketch of this step follows the list).
- Enhance by watching your actions and suggesting incremental shortcuts (“I noticed you always filter to the five minutes before the alert; shall I pin that view next time?”).
Every interaction becomes a micro-lesson, reinforcing your mental models rather than eroding them.
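To make the “Guide” step concrete, here’s a minimal sketch of the interaction pattern in Python. Every name in it is mine, not the article’s: `IncidentContext`, `GUIDE_PROMPT`, and `ask_model` (a stand-in for whatever LLM client you already use) are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IncidentContext:
    alert: str              # the firing alert, e.g. "p99 latency > 2s"
    steps_taken: list[str]  # what the responder has already done

# The prompt does the heavy lifting: it forbids fixes and demands a question.
GUIDE_PROMPT = """You are an incident-response coach. Do NOT propose a fix.
Ask the responder ONE Socratic question that nudges them toward the next
step of their own playbook.

Alert: {alert}
Steps taken so far: {steps}"""

def guide(ctx: IncidentContext, ask_model: Callable[[str], str]) -> str:
    """Return a question for the human, never an action taken on their behalf."""
    prompt = GUIDE_PROMPT.format(
        alert=ctx.alert,
        steps="; ".join(ctx.steps_taken) or "none yet",
    )
    return ask_model(prompt)
```

The design choice that matters: `guide` can only ever hand back a question, so the human stays the one doing the retrieval.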
I’ve started riffing on this idea in my own workflow. When I review pull requests, I ask our AI bot not to rewrite the code for me, but to quiz me: “What edge cases might this new function miss?” If I can’t answer, it highlights relevant docs or tests. Suddenly, I’m more prepared for production bugs—and I actually remember my review process.
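Sketched out, that quiz flow is two prompts plus a hard rule that the bot never answers its own questions. As before, `QUIZ_PROMPT`, `HINT_PROMPT`, `review_quiz`, and `ask_model` are names I made up for illustration:

```python
QUIZ_PROMPT = """Here is a diff under review:

{diff}

Ask the reviewer three short questions about edge cases this change
might miss. Do not answer them yourself."""

HINT_PROMPT = """The reviewer could not answer: "{question}"
Point them at the most relevant docs or tests in this repo,
without giving the answer away."""

def review_quiz(diff: str, ask_model) -> list[str]:
    # One model call yields the questions; splitting on newlines is
    # crude, but fine for a sketch.
    raw = ask_model(QUIZ_PROMPT.format(diff=diff))
    return [line.strip() for line in raw.splitlines() if line.strip()]

def hint_for(question: str, ask_model) -> str:
    # Only called once I admit I can't answer on my own.
    return ask_model(HINT_PROMPT.format(question=question))
```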
What really blew me away in *Stop Building AI Tools Backwards* was the emphasis on cumulative culture: the fact that real innovation happens when teams iterate together, standing on each other’s shoulders. By capturing the recalls and refinements each developer makes on the job, AI tools can become living archives of tribal knowledge, not just glorified search bars.
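I’ve been toying with what “capturing recalls and refinements” could look like in practice. A minimal version, assuming nothing fancier than an append-only JSONL file (the filename and record shape are my own invention):

```python
import json
import time
from pathlib import Path

# A flat file standing in for a real knowledge store.
KNOWLEDGE_LOG = Path("team_knowledge.jsonl")

def record_recall(question: str, answer: str, refinement: str = "") -> None:
    """Append one retrieval-practice exchange: the question the tool asked,
    the human's answer, and any correction that followed, so it can feed
    future suggestions instead of evaporating."""
    entry = {
        "ts": time.time(),
        "question": question,
        "answer": answer,
        "refinement": refinement,
    }
    with KNOWLEDGE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Even something this crude turns every quiz exchange into searchable team memory.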
Of course, building these “human-first” experiences takes more thought than slapping an “Auto Investigate” button on your UI. But the payoff is huge: your team retains critical reasoning skills, shares best practices organically, and feeds high-quality data back into the system for ever-smarter suggestions.
So next time you’re tempted to automate away a few clicks, ask yourself: am I strengthening my team’s muscle memory—or erasing it? If you want to see how to do AI tooling the right way, check out *Stop Building AI Tools Backwards* and let’s start rewiring our interfaces for collaboration and growth.
Read the full article here: *Stop Building AI Tools Backwards*.