Coding with AI on a Budget — My Structured, Multi-Model Workflow

I’m a heavy AI user, but I don’t just open ten tabs and throw the same question at every model I can find. My workflow is deliberate, and every model has a defined role in the process.

It’s not that the “AI buffet” style is wrong — in fact, this excellent guide is a perfect example of that approach. But my own style is more like a relay race: each model runs its leg, then hands off to the next, so no one’s doing a job they’re bad at.


Phase 1 — Discovery & Requirements (ChatGPT as the Architect)

When I’m starting something new, I begin with long back-and-forth Q&A sessions in ChatGPT.

  • The goal is to turn fuzzy ideas into clear, testable requirements.
  • I’ll ask “what if?” questions, explore trade-offs, and refine scope until I have a solid first draft of requirements and functional specs.

Why ChatGPT? Because it’s great at idea shaping and structured writing — and I can quickly iterate without burning expensive tokens.


Phase 2 — Critique & Refinement (Gemini as the Critic)

Once I have a draft, I hand it over to Gemini 2.5 Pro.

  • Gemini acts like a tough peer reviewer — it questions assumptions, spots gaps, and points out edge cases.
  • I take Gemini’s feedback back to ChatGPT for edits.
  • We repeat this loop until the document is solid enough to hand off to implementation.

This step makes the coding phase dramatically smoother — Claude Code gets a blueprint, not a napkin sketch.


Phase 3 — Implementation (Claude Code as the Builder)

With specs locked in, I move to Claude Code for the actual build.

  • I prep the context using a tool like AI Code Prep GUI to include only the relevant files (see the sketch after this list).
  • Claude Code follows instructions well when the brief is crisp and the noise is low.
  • This is where the investment in phases 1 and 2 pays off — Claude isn’t guessing, it’s executing.
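
To give a sense of what that prep step looks like, here is a minimal DIY sketch of the idea in Python. It is not the actual AI Code Prep GUI, and the file list is hypothetical; the point is simply to concatenate only the files that matter into one paste-ready block.

```python
from pathlib import Path

# Hypothetical example: list only the files relevant to the task at hand.
RELEVANT_FILES = [
    "src/app.py",
    "src/billing/invoice.py",
    "tests/test_invoice.py",
]

def build_context(root: str = ".") -> str:
    """Concatenate the chosen files into one labeled block for pasting into Claude Code."""
    chunks = []
    for rel in RELEVANT_FILES:
        path = Path(root) / rel
        if not path.exists():
            continue  # skip anything missing rather than failing the whole run
        chunks.append(f"### {rel}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    # Write the combined context to a single file to paste into Claude Code.
    Path("context.md").write_text(build_context(), encoding="utf-8")
    print("Wrote context.md")
```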

Phase 4 — Specialist Consultations (Free & Budget Models)

If something tricky comes up — a gnarly bug, architectural uncertainty — I call in a specialist.

  • For deep problem-solving: o3, GLM 4.5, Qwen3 Coder, Kimi K2, or DeepSeek R1.
  • For alternate perspectives: Poe.com’s Claude 4, OpenRouter’s o4-mini, or Perplexity for research.
  • The point is diagnosis, not doing the build work.

These models are often free (with daily credits or token grants) and help me avoid overusing paid API calls.


Why This Works for Me

  • Role clarity: Each model does the job it’s best at.
  • Lower costs: Expensive models are reserved for hard, high-value problems.
  • Better output: The spec→critique→build pipeline reduces rework.
  • Adaptability: I can swap out models as free-tier offers change.

Closing Thoughts

AI isn’t magic — you are.
The tools only work as well as the process you put them in. For me, that process is structured and deliberate. If you prefer a more exploratory, multi-tab style, check out this free AI coding guide for an alternate perspective. Both approaches can work — the important thing is to know why you’re using each model, and when to pass the baton.

Three Tiny Thoughts Worth Carrying Into the Week

Inspired by the Farnam Street newsletter, Brain Food – August 10, 2025

Every week, the Brain Food newsletter from Farnam Street drops a “Tiny Thoughts” section — short, sharp ideas that stick with you. This week’s three were especially good:

  1. Telling yourself you’ll do it tomorrow is how dreams die.
  2. The problem with success is that it teaches you the wrong lessons. What worked yesterday becomes religion, and religions don’t adapt.
  3. Good decision-making isn’t about being right all the time. It’s about lowering the cost of being wrong and changing your mind. When mistakes are cheap, you can move fast and adapt. Make mistakes cheap, not rare.

Procrastination kills momentum, success can calcify into dogma, and fear of mistakes can freeze progress. The common cure is movement — doing the thing today, questioning yesterday’s formulas, and lowering the stakes so you can act and adapt quickly. Make action a habit, flexibility a strength, and mistakes a tool for learning.


If you want more like this, you can subscribe to Farnam Street’s excellent Brain Food newsletter.

2025: The Year the Modern World Lost Its Mind

What our current tech-saturated moment has in common with the age of bicycles, Model Ts, and nervous breakdowns

In 1910, the world was gripped by the whir of engines, the shimmer of skyscrapers, and the idea that maybe—just maybe—modern life was moving too fast for the human mind to handle. In 2025, the scenery has changed—electric cars instead of Model Ts, AI chatbots instead of Kodak cameras—but the sensation? Uncannily familiar.

This piece was inspired by Derek Thompson’s excellent essay, “1910: The Year the Modern World Lost Its Mind”, which explores how the early 20th century’s technological vertigo mirrors our own moment. Reading it felt less like a history lesson and more like holding up a mirror to 2025.

This is the year the modern world, in its infinite wisdom, decided to sprint into the future with no map, no seatbelt, and a half-charged phone battery. We’re living at warp speed, and everyone’s nervously checking to see if anyone else feels queasy.


1. The Speed Problem

If the early 1900s had the bicycle craze, we’ve got the everything craze. The 2020s are a blur of quarterly product launches, algorithm updates, and overnight viral trends. Your phone isn’t just in your pocket—it’s in your bloodstream. News cycles collapse into hours; global events live and die in the span of a long lunch break.

Like in 1910, speed isn’t just mechanical—it’s existential. We aren’t moving faster to get somewhere. We’re moving faster so we don’t feel like we’re falling behind.


2. The Nervous Breakdown Economy

A century ago, doctors diagnosed “American Nervousness” among white-collar workers who couldn’t keep pace with the new tempo of life. Today, we’ve just swapped the sanatorium for Slack. “Burnout” is our word for it, but the symptoms are eerily similar: fatigue, anxiety, a sense of being perpetually “on call.”

Our workplaces run on real-time messages, constant notifications, and the lurking fear that AI might be both your assistant and your replacement. If in 1910 the railway clerk feared the telegraph machine, in 2025 the copywriter fears the autocomplete suggestion.


3. The Artistic Backlash

In 1910, Stravinsky, Kandinsky, and Picasso reached into the deep past to make sense of the machine age. In 2025, artists are doing the same—except the “past” might be analog film cameras, vinyl records, or hand-drawn zines. The hottest design aesthetic right now is “slightly broken,” as if imperfection itself is a protest against AI’s cold precision.

The more our tools can flawlessly mimic reality, the more we crave something they can’t—flaws, accidents, and human fingerprints.


4. Competing Theories of Human Nature

In the early 20th century, Max Weber thought the modern work ethic was an extension of religious discipline. Freud thought it was a repression of primal urges. In 2025, we’re still arguing the same point—just swap “religion” for “productivity culture” and “primal urges” for “doomscrolling.”

Is the AI revolution the ultimate expression of human ingenuity or the ultimate suppression of it? Are we using these tools to expand our potential—or outsourcing so much of ourselves that we forget what we’re capable of?


Conclusion: The Loop We Can’t Escape

History isn’t a straight line—it’s a loop. In 1910, the world gasped at the pace of change, feared the toll it would take on the mind, and questioned whether our shiny new machines were serving us or hollowing us out. In 2025, we’re running the same circuit, just on faster, smarter, and more invisible tracks.

We tell ourselves that our anxieties are unique, that no one before has felt the strange cocktail of awe and dread that comes with watching the future arrive early. But the truth is, every generation has looked into the whirring heart of its own inventions and wondered if it built a better world—or just built a bigger cage.

The choice before us now isn’t whether technology will change us—it already has. The choice is whether we can meet that change with the same mix of creativity, resistance, and humanity that our predecessors brought to their own dizzying moment. If 1910 proved anything, it’s that even in the age of vertigo, we can still plant our feet—if we remember to look up from the blur and decide where we actually want to go.

Because if we don’t, “the year we lost our minds” won’t be a moment in history. It’ll be a permanent address.


Then & Now: Technology, Anxiety, and Culture

Breakthrough Technologies
  • 1910: Automobiles, airplanes, bicycles, skyscrapers, phonograph, Kodak camera
  • 2025: AI chatbots, EVs & self-driving cars, drones, mixed reality headsets, quantum computing

Pace of Change
  • 1910: Decades of industrial innovations compressed into a few years
  • 2025: Continuous, globalized tech updates delivered instantly

Cultural Anxiety
  • 1910: “American Nervousness” (neurasthenia), fear of machines dehumanizing society
  • 2025: Burnout, “always-on” culture, fear of AI replacing human work

Moral Panic
  • 1910: Women on bicycles seen as socially and sexually disruptive
  • 2025: AI-generated art/writing seen as undermining human creativity

Artistic Reaction
  • 1910: Stravinsky’s The Rite of Spring, Kandinsky’s abstraction, Picasso’s primitivism
  • 2025: Analog revival (vinyl, film photography), glitch aesthetics, AI art critique

Intellectual Debate
  • 1910: Weber: work ethic aligns with modern capitalism; Freud: modernity represses human instincts
  • 2025: Productivity culture vs. digital well-being; tech optimism vs. tech doom

Public Sentiment
  • 1910: Awe at progress, fear of losing humanity
  • 2025: Excitement about AI’s potential, anxiety about its societal cost

What Small Publishers Can Learn from the Big Four’s AI-Defying Quarter

If you’ve been following the headlines, you might think AI is poised to hollow out the news business — stealing traffic, scraping archives, and churning out synthetic stories that compete with the real thing. And yet, four of America’s largest news organizations — Thomson Reuters, News Corp, People Inc (formerly Dotdash Meredith), and The New York Times — just turned in a combined $5 billion in quarterly revenue and nearly $1.2 billion in profit.

I first came across this coverage in the BoSacks newsletter, which linked to Press Gazette’s original report. The piece details how these companies aren’t just surviving in the AI era; they’re quietly reshaping their models to make it work for them. From AI-powered professional tools to content licensing deals with OpenAI, Amazon, and Meta, they’re finding ways to monetize their content and expand audience engagement — even as Google’s AI-driven search starts serving answers instead of links.

For smaller, niche publishers, the temptation is to shrug this off. “Sure, it’s easy when you have a billion-dollar brand and a legal department the size of my entire staff.” But there’s a lot here that is portable — if you focus on the right pieces.


Lesson 1: Own Your Audience Before AI Owns Your Traffic

One of the clearest takeaways from the big four is how much they’re investing in direct audience relationships. The New York Times hit 11.88 million subscribers, People Inc launched a dedicated app, and even News Corp’s Dow Jones division keeps climbing on digital subscriptions.

For small publishers, the lesson is to stop over-relying on algorithmic referrals. If you’re still counting on Facebook, Google, or Apple News as your main discovery channels, you’re building on borrowed land.

Action:

  • Launch a low-friction email newsletter that delivers high-value, niche-specific updates.
  • Incentivize sign-ups with premium extras — e-books, data sheets, or early access content.
  • Build community spaces (Discord, Slack, or forums) where your most engaged readers gather off-platform.

Lesson 2: Package Your Expertise as a Product, Not Just a Publication

Thomson Reuters isn’t just “doing news.” They’re integrating AI into products like CoCounsel, which bakes their proprietary legal and tax content into Microsoft 365 workflows. It’s sticky, high-margin, and hard for competitors to replicate.

Smaller publishers may not have the dev team to roll out enterprise-level AI tools, but the underlying idea applies: turn your content into something your audience uses, not just reads.

Action:

  • Convert your most-requested guides or reports into downloadable templates, toolkits, or training modules.
  • Create a searchable knowledge base for subscribers, updated with new insights monthly.
  • Partner with a lightweight AI platform to offer custom alerts or summaries in your niche.

Turn insights into income.

Don’t just read about what’s possible — start building it now. I’ve put together a free, printable 90-Day Growth Plan for Small Publishers with simple, actionable steps you can follow today to grow your audience and revenue.


Lesson 3: Monetize Your Archives and Protect Your IP

Both the NYT and News Corp are in legal battles over AI scraping, but they’re also cutting deals to license their content. The message is clear: your back catalog is an asset — treat it like one.

For small publishers, this could mean licensing niche datasets, syndicating evergreen content to allied outlets, or even creating curated “best of” packages for corporate training or education markets.

Action:

  • Audit your archive for evergreen, high-demand topics.
  • Explore licensing or syndication deals with industry associations, trade schools, or niche platforms.
  • Add clear terms of use and copyright notices to protect your content from unauthorized scraping.

Lesson 4: Diversify Revenue Beyond Ads

People Inc is replacing declining print dollars with more profitable digital and e-commerce revenue. The Times is making real money from games, cooking, and even video spin-offs of podcasts.

Smaller publishers don’t need a NYT-sized portfolio to diversify. You just need a second or third income stream that aligns with your audience’s interests.

Action:

  • Launch a paid resource library with niche-specific data, tools, or premium reports.
  • Run virtual events, webinars, or training sessions for a fee.
  • Sell targeted sponsorships or native content in newsletters instead of relying solely on display ads.

The Bottom Line

AI disruption is real — and it’s already changing how readers find and consume news. But the big players are showing that with strong brands, direct audience relationships, and smart product diversification, you can turn the threat into an advantage.

For smaller publishers, the scale is different but the playbook is the same:

  • Control your audience pipeline.
  • Turn your expertise into products.
  • Protect and monetize your archives.
  • Don’t bet your survival on one revenue stream.

It’s not about matching the NYT’s resources; it’s about matching their mindset. In the AI era, the publishers who think like product companies — and treat their audience like customers instead of traffic — will be the ones still standing when the algorithms shift again.

Memorable takeaway: In the AI age, resilience isn’t about the size of your newsroom — it’s about the strength of your audience ties and the creativity of your monetization.

Ready to grow? Grab the free, printable 90-Day Growth Plan for Small Publishers and start building your audience and revenue today.

Daily Links: Sunday, Aug 10th, 2025

In my latest blog post, I dive into how Claude Code has become my go-to tool while experimenting with LLM programming agents. Despite a few hiccups, it has enabled me to create around 12 projects swiftly. If you’re curious about how it’s enhancing my programming workflow or about my offline AI workspace setup, check it out!

Daily Links: Thursday, Aug 7th, 2025

In my latest blog post, I dive into three intriguing topics! First, I explore why AI isn’t turning us into 10x engineers, despite the hype. Then, I share my journey of ditching Google’s search for the more user-friendly Kagi. Finally, I introduce KittenTTS, a cutting-edge text-to-speech model that’s impressively compact. Join me as we unravel these fascinating subjects!

Daily Links: Tuesday, Aug 5th, 2025

In my latest blog post, I share insights on two fascinating articles. Dive into “The Making of D.” to unravel the intriguing backstory, and explore “The Dollar is Dead” to learn about the factors leading to the potential decline of the U.S. dollar. Whether you’re into deep dives or financial foresight, I’ve covered something for you!

Why Most Design Docs Fail—And How to Write One That Won’t

If you’ve ever slogged through pages of technical jargon and bullet points nobody will remember, you know why most design docs flop. They aren’t just boring—they’re useless. Not because the writers can’t write, but because they’re thinking about the wrong things.

This guide is for engineers, tech leads, and architects who want their design docs to move the project forward—not just check a process box.

A note on inspiration:
This approach owes a debt to Grant Slatton’s “How to Write a Good Design Document”, which nails the fundamentals of design doc clarity. The advice below builds on his framework, shaped by mistakes and lessons from my own experience in the trenches.


Why Design Docs Go Nowhere

After years of watching projects stumble or stall despite “approved” design docs, these are the five failure patterns I see again and again:

  1. Compliance over conviction: Docs get written because the process says so, not because anyone is fighting for the best solution.
  2. Ignoring the audience: Authors assume everyone’s already on board, equally informed, and cares as much as they do.
  3. Dodging hard questions: Tough trade-offs and risks are buried under jargon or skipped entirely.
  4. Cleverness over clarity: Writers want to look smart, not make the real issues obvious.
  5. First-draft laziness: The first version gets shipped; nobody cuts, nobody questions what’s essential.

These aren’t writing problems. They’re thinking problems. So here’s how to think differently about writing your next design doc—starting with making the stakes real.


The Design Doc Gut-Check Checklist

1. Make the Stakes Concrete

  • [ ] Quantify consequences: Spell out what happens if you do nothing—lost revenue, increased tech debt, user churn, missed deadlines. Put real numbers or examples on the pain.
  • [ ] Draw boundaries: Say explicitly what’s out of scope or not being solved.

2. Force Yourself to Surface Risks and Trade-offs

  • [ ] List two alternatives and why you’re not choosing them: If you can’t, you’re not thinking hard enough.
  • [ ] Call out unknowns and riskiest assumptions: Write down what could break, what you don’t know, and what would cause this plan to fail. Make your discomfort visible—don’t let someone else expose it in review.

3. Preempt Objections—Before Anyone Else Can

  • [ ] Write down the three hardest questions you expect in review, and answer them in the doc: Don’t wait for someone to grill you in the meeting.
  • [ ] Assume every reviewer is skeptical: If you were in a rival team’s shoes, would you buy this argument?

4. Ruthlessly Cut and Clarify

  • [ ] Trim 30% from your first draft: If you can’t, ask yourself which section you’d defend in a five-minute elevator pitch—and cut the rest.
  • [ ] One idea per paragraph, one sentence summary: If a paragraph can’t be compressed to a single, clear sentence, rewrite it.
  • [ ] Put calculations, benchmarks, and technical specs in an appendix: Keep your main argument uncluttered and easy to follow.

5. Finish with Commitment and Clarity

  • [ ] Be explicit about next steps, owners, and triggers for a redesign: Don’t end on a shrug—define accountability.
  • [ ] Define success and failure with a metric, timeline, or scenario: No hedging.
  • [ ] Test it on someone who wasn’t in the planning meetings: If they don’t get it, neither will your reviewers.

From Experience

I’ve watched launches stall for weeks because a design doc assumed everyone would “just know” about a scaling bottleneck. Nobody called it out directly, so when things failed in QA, everyone acted surprised. If it’s not written down, it doesn’t exist.

On the other hand, the strongest docs I’ve seen say: “Here’s what we know, here’s what we don’t, and here’s how we’ll handle it if we’re wrong.” They made people nervous in review—but those nerves forced the right conversations and saved months down the line.


The Real Test

Write the doc you’d want to inherit from someone else—when the deadlines are real, the system is groaning, and all the easy assumptions have disappeared. If your design doc doesn’t make you a little uncomfortable to write, it won’t be compelling to read.


Major credit to Grant Slatton’s original article, which covers the mechanics with clarity and precision. This checklist aims to push those fundamentals further, turning them into habits you’ll actually use when it matters.

How Smaller Companies Can Start Using AI—Real Lessons Inspired by Shopify

When Shopify’s CEO Tobi Lütke encouraged everyone at his company to make AI tools a natural part of their work, it wasn’t a sudden shift. It came after years of careful preparation—building the right culture, legal framework, and infrastructure. While smaller companies don’t have Shopify’s scale or budget, they can still learn a lot from how Shopify approached AI adoption and adapt it to their own realities.

Start by assembling a cross-functional pilot team of five to seven people—a sales rep, someone from customer support, perhaps an engineer or operations lead. Give this group a modest budget, around $5,000, and 30 days to demonstrate whether AI can help solve real problems. Set clear goals upfront: maybe cut the time it takes to respond to customer emails by 20%, automate parts of sales prospect research to save two hours a week, or reduce repetitive manual data entry in operations by 30%. This focus helps avoid chasing shiny tools without a real payoff.

You don’t need to build your own AI platform or hire data scientists to get started. Many cloud AI services today offer pay-as-you-go pricing, so you can experiment without huge upfront investments. For example, a small customer support team might subscribe to ChatGPT for a few hundred dollars a month and connect it to their helpdesk software to draft faster, more personalized email replies. A sales team could create simple automations with no-code tools like Zapier that pull prospect data from LinkedIn, run it through an AI to generate email drafts, and send them automatically. These kinds of workflows often take less than a week to set up and can improve efficiency by 30% or more.
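
To make the drafting step concrete, here is a minimal sketch of the AI call such a workflow might make, assuming the official OpenAI Python SDK, an OPENAI_API_KEY in the environment, an illustrative model name, and made-up prospect fields. A no-code tool like Zapier is essentially wiring scraped data into a prompt like this.

```python
import os
from openai import OpenAI  # assumes the official OpenAI Python SDK (v1.x)

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def draft_outreach_email(prospect: dict) -> str:
    """Turn scraped prospect fields (all hypothetical here) into a first-draft email."""
    prompt = (
        f"Write a short, friendly outreach email to {prospect['name']}, "
        f"{prospect['title']} at {prospect['company']}. "
        f"Mention their recent post about {prospect['recent_topic']} "
        "and suggest a 15-minute call. Keep it under 120 words."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_outreach_email({
        "name": "Dana Lee",
        "title": "Head of Operations",
        "company": "Acme Logistics",
        "recent_topic": "warehouse automation",
    })
    print(draft)  # a human still reviews the draft before anything is sent
```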

As you experiment, keep a close eye on costs. API calls add up quickly, and a small team making thousands of requests each month might see unexpected bills over $1,000 if you’re not careful. Make sure to monitor usage and set sensible limits during your pilot.
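
One lightweight way to keep that spend visible is a simple usage log with a warning threshold. The sketch below uses placeholder per-token prices and an illustrative $1,000 monthly cap; real rates vary by provider and model, so treat the numbers as stand-ins.

```python
import csv
from datetime import date
from pathlib import Path

# Placeholder pricing -- check your provider's current per-token rates.
PRICE_PER_1K_INPUT_TOKENS = 0.0005
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015
MONTHLY_BUDGET_USD = 1000.00  # illustrative cap for the pilot

LOG_FILE = Path("ai_usage_log.csv")

def log_call(team_member: str, input_tokens: int, output_tokens: int) -> float:
    """Append one API call to the log and return its estimated cost."""
    cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS + (
        output_tokens / 1000
    ) * PRICE_PER_1K_OUTPUT_TOKENS
    write_header = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["date", "team_member", "input_tokens", "output_tokens", "cost_usd"])
        writer.writerow([date.today().isoformat(), team_member, input_tokens, output_tokens, f"{cost:.4f}"])
    return cost

def month_to_date_spend() -> float:
    """Sum logged costs for the current month and warn when nearing the cap."""
    if not LOG_FILE.exists():
        return 0.0
    this_month = date.today().strftime("%Y-%m")
    with LOG_FILE.open(newline="") as f:
        total = sum(
            float(row["cost_usd"])
            for row in csv.DictReader(f)
            if row["date"].startswith(this_month)
        )
    if total > 0.8 * MONTHLY_BUDGET_USD:
        print(f"Warning: ${total:.2f} spent, nearing the ${MONTHLY_BUDGET_USD:.0f} monthly limit")
    return total

if __name__ == "__main__":
    log_call("support-team", input_tokens=1200, output_tokens=400)
    print(f"Month-to-date spend: ${month_to_date_spend():.2f}")
```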

Using AI responsibly means setting some basic ground rules early. Include someone from legal or compliance in your pilot to help create simple guidelines. For instance, never feed sensitive or personally identifiable customer information into AI tools unless it’s properly masked or anonymized. Also, require human review of AI-generated responses before sending them out, at least during your early adoption phase. This “human-in-the-loop” approach catches errors and builds trust.
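
As a very rough illustration of the masking idea, the sketch below scrubs obvious emails and phone numbers with two regexes before text leaves your systems. Real PII detection needs far more than this (names, addresses, account numbers), so treat it as a starting point, not a compliance tool.

```python
import re

# Very rough illustration -- real PII masking needs more than two regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace obvious emails and phone numbers before the text goes to an AI tool."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    ticket = "Customer jane.doe@example.com called from 555-867-5309 about order #4412."
    print(mask_pii(ticket))
    # -> Customer [EMAIL] called from [PHONE] about order #4412.
```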

Training people to use AI effectively is just as important as the tools themselves. Instead of long, formal classes, offer hands-on workshops where your teams can try AI tools on their real daily tasks. Encourage everyone to share what worked and what didn’t, and identify “AI champions” who can help their teammates navigate challenges. When managers and leaders openly use AI themselves and discuss its benefits and limitations, it sets a powerful example that using AI is part of how work happens now.

Tracking results doesn’t require fancy analytics. A simple Google Sheet updated weekly can track how many AI requests team members make, estimate time saved on tasks, and note changes in customer satisfaction. If the pilot isn’t delivering on its goals after 30 days, pause and rethink before expanding.

Keep in mind common pitfalls along the way. Don’t rush to automate complex workflows without testing—early AI outputs can be inaccurate or biased. Don’t assume AI will replace deep expertise; it’s a tool to augment human judgment, not a substitute. And don’t overlook data privacy—sending customer information to third-party AI providers without proper agreements can lead to compliance headaches.

Shopify’s success came from building trust with their legal teams, investing in infrastructure that made AI accessible, and carefully measuring how AI use related to better work outcomes. While smaller companies might not create internal AI proxies or sophisticated dashboards, they can still embrace that spirit: enable access, encourage experimentation, and measure what matters.

By starting with a focused pilot, using affordable tools, setting simple but clear usage rules, training through hands-on practice, and watching the results carefully, your company can unlock AI’s potential without unnecessary risk or wasted effort.