In Sajal Sharma’s insightful guide Working Effectively with AI Coding Tools like Claude Code, she distills how AI assistants shift our role from writing every line to orchestrating systems. Inspired by her lessons, I dove in by asking Claude Code to scaffold a simple customer CRUD API—endpoints, models, tests, even CI workflows. Seconds later, I had files everywhere. Excited? Absolutely. Prepared for quality checks? Not yet.
That moment mirrored one of Sajal’s key points: AI clears the runway, but humans still pilot the plane. It can generate code at lightning speed, but someone has to ensure alignment with security policies, performance budgets, and team conventions. In short, AI is the horsepower; we’re the pilots.
The Real Magic Trick: Specs Before Code
Jumping into a prompt like “Build customer API” is like ordering a pizza without toppings—you might get bread and sauce, but no pepperoni. Taking a cue from Sajal’s spec-first approach, I always start by drafting a clear spec in /docs/specs/.
Here’s a slice of my customer-api-spec.md:
# Customer API Spec
- CRUD operations for Customer entity
- Fields: id, name, email, createdAt, updatedAt
- Input validation: email regex, name length
- 200 vs 404 vs 400 status codes
- Logging: structured JSON with requestId
- Rate limiting: 100 reqs/min per IP
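To make the spec concrete, here is a minimal sketch of the input-validation rules it describes. The exact limits and the helper name `validateCustomer` are my assumptions for illustration (the spec only says “email regex, name length”), and the regex is deliberately simple rather than RFC-complete:

```typescript
// Hypothetical sketch of the spec's validation rules.
// Assumed limits: name 1-100 characters; simple (not RFC 5322) email regex.
interface CustomerInput {
  name: string;
  email: string;
}

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Returns a list of validation errors; an empty array means the input is valid.
function validateCustomer(input: CustomerInput): string[] {
  const errors: string[] = [];
  if (input.name.length < 1 || input.name.length > 100) {
    errors.push("name must be 1-100 characters");
  }
  if (!EMAIL_RE.test(input.email)) {
    errors.push("email is not valid");
  }
  return errors;
}
```

Writing even a rough version of these rules yourself makes it much easier to spot when the AI’s generated validators silently drift from the spec.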
Then I prompt: “Claude, scaffold the customer API based on customer-api-spec.md.” The result closely matches my intentions—no unwanted extra toppings.
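The 200 vs 404 vs 400 line of the spec is the part I check most carefully in the scaffolded code. A framework-free sketch of that contract, with a hypothetical `getCustomer` lookup and an assumed id format, looks like this:

```typescript
// Sketch of the spec's status-code contract, kept framework-free so the
// mapping itself is easy to test. The id format (lowercase alphanumerics
// and hyphens) is an assumption for illustration.
interface Customer {
  id: string;
  name: string;
  email: string;
}

type Result =
  | { status: 200; body: Customer }
  | { status: 400; body: { errors: string[] } }
  | { status: 404; body: { error: string } };

const store = new Map<string, Customer>();

function getCustomer(id: string): Result {
  // 400: the request itself is malformed
  if (!/^[a-z0-9-]+$/.test(id)) {
    return { status: 400, body: { errors: ["invalid id format"] } };
  }
  // 404: well-formed request, but no such customer
  const customer = store.get(id);
  if (!customer) {
    return { status: 404, body: { error: `customer ${id} not found` } };
  }
  // 200: found it
  return { status: 200, body: customer };
}
```

Because the mapping is a plain function, you can unit-test all three branches without spinning up a server, then wire it into whatever HTTP framework the scaffold chose.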
Why You Must Play Code Quality Cop
Sajal warns that vague prompts often lead to shortcuts: `any` types, skipped tests, or generic error responses. I block 30 minutes every Friday for an “AI Code Audit” session. I scan new files for weak typings, missing edge-case tests, and logging inconsistencies. Then I ask Claude Code: “Please refactor duplicate helpers into a shared module and enforce our error-handling middleware.” It’s like giving your codebase a weekly checkup.
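As one example of the kind of shared helper I ask for during these audits: the spec calls for structured JSON logging with a requestId, and AI-generated handlers tend to sprinkle ad-hoc console.log calls instead. A minimal sketch of the extracted helper (names like `logLine` and the field set are my assumptions, not the article’s actual module):

```typescript
// Hypothetical shared logging helper: one structured JSON object per line,
// always carrying the requestId required by the spec.
interface LogEntry {
  level: "info" | "error";
  requestId: string;
  message: string;
  timestamp: string;
}

function logLine(
  level: "info" | "error",
  requestId: string,
  message: string
): string {
  const entry: LogEntry = {
    level,
    requestId,
    message,
    timestamp: new Date().toISOString(),
  };
  // Serialize so every handler emits the same machine-parseable shape.
  return JSON.stringify(entry);
}
```

Once every handler goes through one helper, the “logging inconsistencies” check in the weekly audit becomes a one-file review instead of a grep across the codebase.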
Double-Checking with a Second Brain
As Sajal recommends, no single LLM should have the final word. For thorny questions—say, whether to shard the Customer table—I generate a plan with Claude Code, then run it by GPT-4o. It’s like having two senior engineers politely debate over which approach scales best.
When both agree, I move forward. That extra validation step takes minutes, but Sajal shares how it’s saved her from invisible tech-debt traps more times than she can count.
From Boilerplate to Brainwork
With the busywork automated, I follow Sajal’s advice: I spend my time on strategy—mentoring teammates, aligning with product goals, and making key architectural decisions. For instance, when our data team needed a real-time import pipeline, Claude drafted ETL scripts in seconds, but only I knew our SLA: analytics data must surface within two minutes. So I guided the solution toward streaming events instead of batch jobs.
Your Turn: One Tiny Experiment
Inspired by Sajal’s guide, pick one experiment for your next sprint:
- Draft a spec first. Create a one-page markdown with clear requirements.
- Audit weekly. Reserve 30 minutes to review AI-generated code.
- Seek a second opinion. Validate your plan with another LLM.
Share your spec or prompt in the comments—let’s build better workflows together!
AI coding tools aren’t a gimmick—they’re a paradigm shift. As Sajal concludes, our true value lies in asking the right questions, crafting clear specs, and safeguarding quality. Keep the horsepower running; stay firmly in the pilot’s seat.
What was your first “Whoa” moment with AI? Drop a comment—I’d love to hear!