Partnering with AI: How I Learned to Let Claude Code Handle the Busywork

In Sajal Sharma's insightful guide Working Effectively with AI Coding Tools like Claude Code, Sharma distills how AI assistants shift our role from writing every line to orchestrating systems. Inspired by those lessons, I dove in by asking Claude Code to scaffold a simple customer CRUD API: endpoints, models, tests, even CI workflows. Seconds later, I had files everywhere. Excited? Absolutely. Prepared for quality checks? Not yet.

That moment mirrored one of Sajal’s key points: AI clears the runway, but humans still pilot the plane. It can generate code at lightning speed, but someone has to ensure alignment with security policies, performance budgets, and team conventions. In short, AI is the horsepower; we’re the pilots.

The Real Magic Trick: Specs Before Code

Jumping straight into a prompt like "Build customer API" is like ordering a pizza without naming your toppings: you might get bread and sauce, but don't count on pepperoni. Taking a cue from Sajal's spec-first approach, I always start by drafting a clear spec in /docs/specs/.

Here’s a slice of my customer-api-spec.md:

# Customer API Spec

- CRUD operations for Customer entity
- Fields: id, name, email, createdAt, updatedAt
- Input validation: email regex, name length
- 200 vs 404 vs 400 status codes
- Logging: structured JSON with requestId
- Rate limiting: 100 reqs/min per IP

Then I prompt: “Claude, scaffold the customer API based on customer-api-spec.md.” The result closely matches my intentions—no unwanted extra toppings.
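
To make that concrete, here's a rough sketch of the kind of handler a spec-driven prompt might produce. The spec doesn't dictate a stack, so Express, Zod, and the in-memory Map below are my own illustrative choices, not necessarily what Claude Code will emit:

```typescript
import express from "express";
import { z } from "zod";
import { randomUUID } from "node:crypto";

// Validation rules lifted straight from the spec
const CustomerInput = z.object({
  name: z.string().min(1).max(100), // name-length rule
  email: z.string().email(),        // email validation
});

const app = express();
app.use(express.json());

const customers = new Map<string, unknown>(); // stand-in for a real data store

app.post("/customers", (req, res) => {
  const parsed = CustomerInput.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ error: parsed.error.flatten() }); // 400 on bad input
  }
  const now = new Date().toISOString();
  const customer = { id: randomUUID(), ...parsed.data, createdAt: now, updatedAt: now };
  customers.set(customer.id, customer);
  // Structured JSON logging with requestId, per the spec
  console.log(JSON.stringify({ level: "info", requestId: req.header("x-request-id"), msg: "customer created" }));
  res.status(200).json(customer);
});

app.get("/customers/:id", (req, res) => {
  const customer = customers.get(req.params.id);
  if (!customer) return res.status(404).json({ error: "not found" }); // 404 when missing
  res.json(customer);
});
```

Rate limiting is left out for brevity; in practice I'd reach for middleware such as express-rate-limit rather than hand-rolling it.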

Why You Must Play Code Quality Cop

Sajal warns that vague prompts often lead to shortcuts: 'any' types everywhere, skipped tests, or generic error responses. I block 30 minutes every Friday for an "AI Code Audit" session: I scan new files for weak typings, missing edge-case tests, and logging inconsistencies, then ask Claude Code, "Please refactor duplicate helpers into a shared module and enforce our error-handling middleware." It's like giving your codebase a weekly checkup.
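
Here's the kind of shortcut the audit catches (the function names are invented for illustration): an 'any'-typed helper compiles happily but hides bugs that explicit types would surface.

```typescript
// Before the audit: "any" erases the compiler's help.
function applyDiscountLoose(customer: any, pct: any) {
  return customer.balance - customer.balance * pct; // a typo in "balance" would slip through
}

// After the audit: explicit types restore compile-time checks.
interface Customer {
  id: string;
  name: string;
  balance: number;
}

function applyDiscount(customer: Customer, pct: number): number {
  return customer.balance - customer.balance * pct;
}
```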

Double-Checking with a Second Brain

As Sajal recommends, no single LLM should have the final word. For thorny questions—say, whether to shard the Customer table—I generate a plan with Claude Code, then run it by GPT-4o. It’s like having two senior engineers politely debate over which approach scales best.

When both agree, I move forward. That extra validation step takes only minutes, but in Sajal's experience it has caught invisible tech-debt traps too many times to count.
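
The mechanical part of that cross-check is easy to script. Here's a minimal sketch using the official Anthropic and OpenAI Node SDKs; the model IDs and prompts are placeholders, so treat it as a shape rather than a recipe:

```typescript
import Anthropic from "@anthropic-ai/sdk";
import OpenAI from "openai";

async function secondOpinion(question: string): Promise<string> {
  const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
  const openai = new OpenAI();       // reads OPENAI_API_KEY from the environment

  // Step 1: draft a plan with one model.
  const draft = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 1024,
    messages: [{ role: "user", content: `Propose a plan: ${question}` }],
  });
  const plan = draft.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("\n");

  // Step 2: ask a second model to critique it.
  const review = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "user", content: `Critique this plan. List risks and points of disagreement:\n${plan}` },
    ],
  });
  return review.choices[0].message.content ?? "";
}

secondOpinion("Should we shard the Customer table?").then(console.log);
```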

From Boilerplate to Brainwork

With the busywork automated, I follow Sajal’s advice: I spend my time on strategy—mentoring teammates, aligning with product goals, and making key architectural decisions. For instance, when our data team needed a real-time import pipeline, Claude drafted ETL scripts in seconds, but only I knew our SLA: analytics data must surface within two minutes. So I guided the solution toward streaming events instead of batch jobs.

Your Turn: One Tiny Experiment

Inspired by Sajal’s guide, pick one experiment for your next sprint:

  • Draft a spec first. Create a one-page markdown with clear requirements.
  • Audit weekly. Reserve 30 minutes to review AI-generated code.
  • Seek a second opinion. Validate your plan with another LLM.

Share your spec or prompt in the comments—let’s build better workflows together!


AI coding tools aren’t a gimmick—they’re a paradigm shift. As Sajal concludes, our true value lies in asking the right questions, crafting clear specs, and safeguarding quality. Keep the horsepower running; stay firmly in the pilot’s seat.

What was your first “Whoa” moment with AI? Drop a comment—I’d love to hear!

AI Is Fueling a Fake Content Flood — Even People You Know Can Be Caught

In the past week, at least two people close to me unknowingly reshared fake content on Facebook. These aren’t people who fall for chain emails or post conspiracy theories—they’re thoughtful, curious, and fairly tech-savvy. But that’s the reality now: it’s getting harder to tell what’s real online, even for people who usually know better.

The reason? AI is making it fast, cheap, and easy to generate fake stories, headlines, graphics, and even entire videos. And bots are spreading it all before we even realize it.

Take a moment to watch this clip from Rachel Maddow on MSNBC:
https://www.msnbc.com/rachel-maddow/watch/maddow-debunks-weird-fake-news-a-i-slop-stories-about-her-and-msnbc-infect-social-media-243601477992

Whether or not you’re a Maddow fan is beside the point. The segment shows how AI-generated nonsense (fake news stories, bot-written posts, and junk links) is showing up in our feeds, using her name and likeness to push made-up narratives. These aren't even deepfakes; they're low-effort, high-impact fabrications designed to manipulate, outrage, and spread like wildfire.

Why People Fall for It

Here’s the tricky part: fake content doesn’t look fake anymore. Logos are copied, images are AI-generated, and the writing sounds just believable enough. AI tools are trained to mimic real news formats, which means many of the visual cues we used to rely on—like headlines, layout, or even tone—can’t be trusted the same way.

Add to that how fast we all scroll, how emotionally charged most social feeds are, and how much trust we put in content shared by people we know… and you've got a perfect recipe for spreading misinformation.

What You Can Do

I’m still a believer in AI’s potential, but I’m also realistic about how it’s being used right now. If you’re on social media, you need to assume you’ll be exposed to fake content—because you already have been.

Here are a few habits that help:

  • Pause before you share. If something triggers a strong reaction, that’s a good time to stop and investigate.
  • Check the source. Is it a reputable outlet? Does the link go where it says it does?
  • Reverse image search. Tools like Google Lens can help identify whether a photo has been altered or recycled.
  • Cross-check. If no one else is reporting it, there’s probably a reason.

Fake content is cheap. Your attention—and trust—are not. Stay sharp out there.

If this post helps even one person slow down before clicking “share,” it was worth writing. Let’s keep each other honest.

Rewiring AI: Putting Humans Back in the Loop

I'll admit it: I used to love the promise of "one-click magic" in my observability dashboard. Who doesn't want the AI to just fix that pager alert at 2 AM? But after reading Stop Building AI Tools Backwards by Hazel Weakly, I've come to a stark realization: those "auto" buttons are exactly what's hollowing out our edge as practitioners.

Here’s the thing—I’m a firm believer that we learn by doing, not by watching. Cognitive science calls it retrieval practice: you solidify knowledge only when you actively pull it from your own brain. Yet most AI assistants swoop in, do the work, and leave you wondering what just happened. It’s like teaching someone to bake by baking the cake for them. Fun for a minute, but no one actually masters the recipe.

Instead, imagine an AI that behaves like an “absent-minded instructor”—one who nudges you through each step of your incident playbook without ever taking the wheel. Using the author’s EDGE framework, it would:

  1. Explain by surfacing missing steps (“Have you considered rolling back that deploy?”), not just offering “click to fix” tooltips.
  2. Demonstrate with a 15-second animation of how to compare time ranges in your monitoring UI—turning your rough query into the exact syntax you need.
  3. Guide by asking Socratic questions (“What trace IDs have you checked so far?”), ensuring you articulate your plan instead of mindlessly pressing “Continue.”
  4. Enhance by watching your actions and suggesting incremental shortcuts ("I noticed you always filter to the five minutes before an alert; shall I pin that view next time?").

Every interaction becomes a micro-lesson, reinforcing your mental models rather than eroding them.
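
To see what that might look like in tooling, here's a toy sketch of a guide-first step function. Everything in it is invented for illustration (it's not code from Weakly's article); the point is only that the assistant returns a question and a hint, never a fix:

```typescript
// A "guide, don't drive" assistant step: instead of returning a patch,
// it returns a Socratic question plus a pointer, so the human does the retrieval.
type GuidedStep = {
  question: string; // the prompt the engineer must actually answer
  hint?: string;    // a surfaced doc or past incident, never an auto-fix
};

function nextStep(alreadyChecked: Set<string>): GuidedStep {
  if (!alreadyChecked.has("recent-deploys")) {
    return {
      question: "Have you considered rolling back the last deploy?",
      hint: "The deploy log for this service is the usual first stop.",
    };
  }
  if (!alreadyChecked.has("trace-ids")) {
    return { question: "Which trace IDs have you inspected so far?" };
  }
  return { question: "What's your hypothesis, and what evidence would falsify it?" };
}

console.log(nextStep(new Set(["recent-deploys"])));
```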

I’ve started riffing on this idea in my own workflow. When I review pull requests, I ask our AI bot not to rewrite the code for me, but to quiz me: “What edge cases might this new function miss?” If I can’t answer, it highlights relevant docs or tests. Suddenly, I’m more prepared for production bugs—and I actually remember my review process.

What really blew me away in Stop Building AI Tools Backwards was the emphasis on cumulative culture—the fact that real innovation happens when teams iterate together, standing on each other’s shoulders. By capturing each developer’s on-the-job recalls and refinements, AI tools can become living archives of tribal knowledge, not just glorified search bars.

Of course, building these “human-first” experiences takes more thought than slapping an “Auto Investigate” button on your UI. But the payoff is huge: your team retains critical reasoning skills, shares best practices organically, and feeds high-quality data back into the system for ever-smarter suggestions.

So next time you’re tempted to automate away a few clicks, ask yourself: am I strengthening my team’s muscle memory—or erasing it? If you want to see how to do AI tooling the right way, check out Stop Building AI Tools Backwards and let’s start rewiring our interfaces for collaboration and growth.

Read the full article here: Stop Building AI Tools Backwards.

Riding the AI Wave: Why Marketing Pros Must Pivot or Perish

I came across Maarten Albarda’s electrifying piece in the latest BoSacks newsletter, originally published on MediaPost: “AI Is Not The Future — It Is Here To Take Your Job” (https://www.mediapost.com/publications/article/407506/ai-is-not-the-future-it-is-here-to-take-your-jo.html?edition=139243). Eric Schmidt’s warning that AI could elbow aside programmers, mathematicians, and entire marketing teams in mere months isn’t sci-fi—it’s next quarter’s boardroom debate. Here’s why embracing AI now feels more like grabbing a lifeboat than steering into a storm.

From where I sit, the real magic (and madness) lies in AI’s leap from “helpful chatbot” to “autonomous strategist.” Imagine a system that doesn’t just draft your ad copy but plans the campaign, allocates budget, and optimizes in real time. That’s not some distant beta test—it’s happening. We’re talking productivity boosts economists haven’t even charted yet. And if you’re thinking, “Nah, that’s years away,” Schmidt’s blistering timeline—full automation of coding tasks within months, general intelligence in 3–5 years—is a gut-check you can’t ignore.

So, what do you do? First, audit your playbook. Map every repetitive task and ask: “Could an algorithm do this faster (and cheaper) than my intern?” Spoiler: the answer’s often “yes.” Next, retool your team for human-only superpowers—ethical oversight, pattern-breaking creativity, and relationship-building that no AI can fake. Finally, make AI fluency part of your culture. A five-minute daily demo, a lunchtime “what’s new” session, even AI peer groups—whatever it takes to demystify the tech and keep curiosity front and center.

Every revolution creates winners and losers. If you lean into AI as a teammate—albeit a supercharged one—you’ll surf this wave instead of wiping out. And trust me, that’s way more fun than reinventing the agency model on the fly while your competitors pull ahead.