Daily Links: Tuesday, Aug 12th, 2025

In my latest blog post, I dive into the architectural advancements from GPT-2 to gpt-oss and compare them with Qwen3. I also spotlight “bolt,” a high-performance, real-time optimized, statically typed embedded language written in C. It’s an exciting exploration into cutting-edge AI and coding technologies that I think you’ll enjoy!

MCP’s Simplicity is a Feature… Until It’s a Disaster

Back in college, I worked on a project where our front-end app talked to the back end through XML/RPC.
It wasn’t glamorous. XML was verbose, the tooling was clunky by today’s standards, and debugging often meant wading through nested tags that looked like an ancient library card catalog.
But here’s the thing: we had well-defined calls, structured data, and clear documentation.
Every function was described. Every parameter had a type. If you tried to pass an integer where a string belonged, the compiler—or the generated stubs—caught it. That wasn’t just nice for development; it meant our API could be reused, secured, and maintained without someone having to reverse-engineer it later.

Fast-forward to today, and we have the Model Context Protocol (MCP), which is being pitched as the “USB-C for AI tools.” In theory, it’s a universal connector between AI agents and the APIs or services they need. In practice? Julien Simon’s recent piece, Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises, makes a pretty convincing case that MCP is ignoring everything XML/RPC, CORBA, SOAP, gRPC, and others already taught us.

Julien lays it out bluntly:

  • Type safety? MCP uses schemaless JSON, with optional hints nobody enforces.
  • Cross-language consistency? Each implementation is on its own—Python’s JSON isn’t JavaScript’s JSON, and good luck with float precision.
  • Security? OAuth arrived years too late, and even now only for HTTP.
  • Observability? Forget distributed tracing—you’re back to grepping logs like it’s 1999.
  • Cost tracking? None. You’ll just get a big bill and a mystery as to why.
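The cross-language point is easy to demonstrate. Here’s a small Python sketch (an illustration of the general JSON-interop hazard, not MCP-specific code; the `record_id` field is hypothetical) showing an integer that survives a round trip in Python but would be silently corrupted by JavaScript:

```python
import json

# A tool result containing an ID larger than 2**53 - 1, the largest
# integer JavaScript's Number type can represent exactly.
payload = '{"record_id": 9007199254740993}'

# Python's json module round-trips the value exactly...
decoded = json.loads(payload)
print(decoded["record_id"])  # 9007199254740993

# ...but JavaScript's JSON.parse would yield 9007199254740992,
# silently off by one. Without a schema that says "this field is a
# 64-bit integer, send it as a string," neither side notices until
# something downstream breaks.
assert decoded["record_id"] == 9007199254740993
```

Schemaless JSON makes this a runtime surprise instead of a contract violation caught at the boundary.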

This isn’t just an “engineers grumbling about elegance” problem. It’s a real-world operational risk. Enterprises adopting MCP today are baking in fragility: AI services making millions of calls without retries, without version control, without guarantees about what data comes back. Julien calls it the “patchwork protocol” problem—critical features aren’t in MCP itself, but scattered across third-party extensions. That’s how you end up with multiple teams using slightly different auth libraries that don’t interoperate, each needing its own audit.

If anything, the simplicity of MCP right now is exactly what makes it dangerous. It’s fast to integrate—just JSON over a transport—but that same minimalism hides the fact that the tooling isn’t ready for high-stakes production. In the AI gold rush, “move fast and break things” isn’t just a motto; it’s a business plan. But when what breaks is a healthcare AI’s medication dosing recommendation or a bank’s trading logic, the stakes are far higher than a crashed demo.

From my own XML/RPC days, I can say this: structure, enforced contracts, and predictable behavior might feel like overhead when you’re building a prototype. But in production? They’re the guardrails that keep you from careening off a cliff at 70 miles an hour.

MCP doesn’t need to turn into CORBA’s kitchen-sink complexity, but it does need to grow up—fast. Schema versioning, built-in tracing, strong type validation, standardized error handling, and cost attribution should be table stakes, not wishlist items. Otherwise, we’re just re-learning the same painful lessons our predecessors solved decades ago.
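To make “strong type validation” concrete, here’s a minimal sketch of what enforced, versioned validation at the tool boundary could look like. Everything here is hypothetical: the envelope fields (`schema_version`, `tool`, `args`), the `get_dosage` tool, and the schema format are my own illustration, not anything MCP mandates.

```python
# Hypothetical versioned schema registry: version -> tool -> {param: type}.
SCHEMAS = {
    "1.0": {"get_dosage": {"patient_id": str, "drug": str, "weight_kg": float}},
}

def validate_call(envelope: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    schema = SCHEMAS.get(envelope.get("schema_version"), {}).get(envelope.get("tool"))
    if schema is None:
        return [f"unknown schema_version/tool: "
                f"{envelope.get('schema_version')}/{envelope.get('tool')}"]
    args = envelope.get("args", {})
    for name, expected in schema.items():
        if name not in args:
            errors.append(f"missing required arg: {name}")
        elif not isinstance(args[name], expected):
            errors.append(f"{name}: expected {expected.__name__}, "
                          f"got {type(args[name]).__name__}")
    return errors

# A well-formed call passes; a string where a float belongs is rejected
# at the boundary instead of deep inside the agent's tool loop.
good = {"schema_version": "1.0", "tool": "get_dosage",
        "args": {"patient_id": "p42", "drug": "amoxicillin", "weight_kg": 31.5}}
bad = {"schema_version": "1.0", "tool": "get_dosage",
       "args": {"patient_id": "p42", "drug": "amoxicillin", "weight_kg": "31.5"}}
print(validate_call(good))  # []
print(validate_call(bad))   # ['weight_kg: expected float, got str']
```

Thirty lines of boundary checking is cheap insurance when the bad call is a medication dosage.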


Read Julien Simon’s full article here: Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises

Coding with AI on a Budget — My Structured, Multi-Model Workflow

I’m a heavy AI user, but I don’t just open ten tabs and throw the same question at every model I can find. My workflow is deliberate, and every model has a defined role in the process.

It’s not that the “AI buffet” style is wrong — in fact, this excellent guide is a perfect example of that approach. But my own style is more like a relay race: each model runs its leg, then hands off to the next, so no one’s doing a job they’re bad at.


Phase 1 — Discovery & Requirements (ChatGPT as the Architect)

When I’m starting something new, I begin with long back-and-forth Q&A sessions in ChatGPT.

  • The goal is to turn fuzzy ideas into clear, testable requirements.
  • I’ll ask “what if?” questions, explore trade-offs, and refine scope until I have a solid first draft of requirements and functional specs.

Why ChatGPT? Because it’s great at idea shaping and structured writing — and I can quickly iterate without burning expensive tokens.


Phase 2 — Critique & Refinement (Gemini as the Critic)

Once I have a draft, I hand it over to Gemini 2.5 Pro.

  • Gemini acts like a tough peer reviewer — it questions assumptions, spots gaps, and points out edge cases.
  • I take Gemini’s feedback back to ChatGPT for edits.
  • We repeat this loop until the document is solid enough to hand off to implementation.

This step makes the coding phase dramatically smoother — Claude Code gets a blueprint, not a napkin sketch.


Phase 3 — Implementation (Claude Code as the Builder)

With specs locked in, I move to Claude Code for the actual build.

  • I prep the context using a tool like AI Code Prep GUI to include only the relevant files.
  • Claude Code follows instructions well when the brief is crisp and the noise is low.
  • This is where the investment in phases 1 and 2 pays off — Claude isn’t guessing, it’s executing.
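The context-prep step doesn’t require a dedicated tool. Here’s a rough command-line equivalent of what that step does (a sketch under my own assumptions, not how AI Code Prep GUI actually works; the example file names are hypothetical): collect only the files relevant to the task into one prompt-ready block, each wrapped with its path so the model knows what it’s looking at.

```python
from pathlib import Path

def build_context(root: str, relevant: list[str]) -> str:
    """Concatenate the chosen files into one prompt-ready text block."""
    chunks = []
    for rel in relevant:
        path = Path(root) / rel
        if not path.is_file():
            continue  # skip anything that moved or was misspelled
        # Wrap each file with its relative path as a header.
        chunks.append(f"--- {rel} ---\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(chunks)

# Usage: paste the result into the coding session, nothing more.
# context = build_context(".", ["src/app.py", "src/models.py", "README.md"])
```

The point is the filter, not the script: a crisp brief with low noise is what lets the builder model execute instead of guess.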

Phase 4 — Specialist Consultations (Free & Budget Models)

If something tricky comes up — a gnarly bug, architectural uncertainty — I call in a specialist.

  • For deep problem-solving: o3, GLM 4.5, Qwen3 Coder, Kimi K2, or DeepSeek R1.
  • For alternate perspectives: Poe.com’s Claude 4, OpenRouter’s o4-mini, or Perplexity for research.
  • The point is diagnosis, not doing the build work.

These models are often free (with daily credits or token grants) and help me avoid overusing paid API calls.


Why This Works for Me

  • Role clarity: Each model does the job it’s best at.
  • Lower costs: Expensive models are reserved for hard, high-value problems.
  • Better output: The spec→critique→build pipeline reduces rework.
  • Adaptability: I can swap out models as free-tier offers change.

Closing Thoughts

AI isn’t magic — you are.
The tools only work as well as the process you put them in. For me, that process is structured and deliberate. If you prefer a more exploratory, multi-tab style, check out this free AI coding guide for an alternate perspective. Both approaches can work — the important thing is to know why you’re using each model, and when to pass the baton.

Three Tiny Thoughts Worth Carrying Into the Week

Inspired by the Farnam Street newsletter, Brain Food – August 10, 2025

Every week, the Brain Food newsletter from Farnam Street drops a “Tiny Thoughts” section — short, sharp ideas that stick with you. This week’s three were especially good:

  1. Telling yourself you’ll do it tomorrow is how dreams die.
  2. The problem with success is that it teaches you the wrong lessons. What worked yesterday becomes religion, and religions don’t adapt.
  3. Good decision-making isn’t about being right all the time. It’s about lowering the cost of being wrong and changing your mind. When mistakes are cheap, you can move fast and adapt. Make mistakes cheap, not rare.

Procrastination kills momentum, success can calcify into dogma, and fear of mistakes can freeze progress. The common cure is movement — doing the thing today, questioning yesterday’s formulas, and lowering the stakes so you can act and adapt quickly. Make action a habit, flexibility a strength, and mistakes a tool for learning.


If you want more like this, you can subscribe to Farnam Street’s excellent Brain Food newsletter.

2025: The Year the Modern World Lost Its Mind

What our current tech-saturated moment has in common with the age of bicycles, Model Ts, and nervous breakdowns

In 1910, the world was gripped by the whir of engines, the shimmer of skyscrapers, and the idea that maybe—just maybe—modern life was moving too fast for the human mind to handle. In 2025, the scenery has changed—electric cars instead of Model Ts, AI chatbots instead of Kodak cameras—but the sensation? Uncannily familiar.

This piece was inspired by Derek Thompson’s excellent essay, “1910: The Year the Modern World Lost Its Mind”, which explores how the early 20th century’s technological vertigo mirrors our own moment. Reading it felt less like a history lesson and more like holding up a mirror to 2025.

This is the year the modern world, in its infinite wisdom, decided to sprint into the future with no map, no seatbelt, and a half-charged phone battery. We’re living at warp speed, and everyone’s nervously checking to see if anyone else feels queasy.


1. The Speed Problem

If the early 1900s had the bicycle craze, we’ve got the everything craze. The 2020s are a blur of quarterly product launches, algorithm updates, and overnight viral trends. Your phone isn’t just in your pocket—it’s in your bloodstream. News cycles collapse into hours; global events live and die in the span of a long lunch break.

Like in 1910, speed isn’t just mechanical—it’s existential. We aren’t moving faster to get somewhere. We’re moving faster so we don’t feel like we’re falling behind.


2. The Nervous Breakdown Economy

A century ago, doctors diagnosed “American Nervousness” among white-collar workers who couldn’t keep pace with the new tempo of life. Today, we’ve just swapped the sanatorium for Slack. “Burnout” is our word for it, but the symptoms are eerily similar: fatigue, anxiety, a sense of being perpetually “on call.”

Our workplaces run on real-time messages, constant notifications, and the lurking fear that AI might be both your assistant and your replacement. If in 1910 the railway clerk feared the telegraph machine, in 2025 the copywriter fears the autocomplete suggestion.


3. The Artistic Backlash

In 1910, Stravinsky, Kandinsky, and Picasso reached into the deep past to make sense of the machine age. In 2025, artists are doing the same—except the “past” might be analog film cameras, vinyl records, or hand-drawn zines. The hottest design aesthetic right now is “slightly broken,” as if imperfection itself is a protest against AI’s cold precision.

The more our tools can flawlessly mimic reality, the more we crave something they can’t—flaws, accidents, and human fingerprints.


4. Competing Theories of Human Nature

In the early 20th century, Max Weber thought modern work ethic was an extension of religious discipline. Freud thought it was a repression of primal urges. In 2025, we’re still arguing the same point—just swap “religion” for “productivity culture” and “primal urges” for “doomscrolling.”

Is the AI revolution the ultimate expression of human ingenuity or the ultimate suppression of it? Are we using these tools to expand our potential—or outsourcing so much of ourselves that we forget what we’re capable of?


Conclusion: The Loop We Can’t Escape

History isn’t a straight line—it’s a loop. In 1910, the world gasped at the pace of change, feared the toll it would take on the mind, and questioned whether our shiny new machines were serving us or hollowing us out. In 2025, we’re running the same circuit, just on faster, smarter, and more invisible tracks.

We tell ourselves that our anxieties are unique, that no one before has felt the strange cocktail of awe and dread that comes with watching the future arrive early. But the truth is, every generation has looked into the whirring heart of its own inventions and wondered if it built a better world—or just built a bigger cage.

The choice before us now isn’t whether technology will change us—it already has. The choice is whether we can meet that change with the same mix of creativity, resistance, and humanity that our predecessors brought to their own dizzying moment. If 1910 proved anything, it’s that even in the age of vertigo, we can still plant our feet—if we remember to look up from the blur and decide where we actually want to go.

Because if we don’t, “the year we lost our minds” won’t be a moment in history. It’ll be a permanent address.


Then & Now: Technology, Anxiety, and Culture

| Category | 1910 | 2025 |
| --- | --- | --- |
| Breakthrough Technologies | Automobiles, airplanes, bicycles, skyscrapers, phonograph, Kodak camera | AI chatbots, EVs & self-driving cars, drones, mixed reality headsets, quantum computing |
| Pace of Change | Decades of industrial innovations compressed into a few years | Continuous, globalized tech updates delivered instantly |
| Cultural Anxiety | “American Nervousness” (neurasthenia), fear of machines dehumanizing society | Burnout, “always-on” culture, fear of AI replacing human work |
| Moral Panic | Women on bicycles seen as socially and sexually disruptive | AI-generated art/writing seen as undermining human creativity |
| Artistic Reaction | Stravinsky’s The Rite of Spring, Kandinsky’s abstraction, Picasso’s primitivism | Analog revival (vinyl, film photography), glitch aesthetics, AI art critique |
| Intellectual Debate | Weber: work ethic aligns with modern capitalism; Freud: modernity represses human instincts | Productivity culture vs. digital well-being; tech optimism vs. tech doom |
| Public Sentiment | Awe at progress, fear of losing humanity | Excitement about AI’s potential, anxiety about its societal cost |

What Small Publishers Can Learn from the Big Four’s AI-Defying Quarter

If you’ve been following the headlines, you might think AI is poised to hollow out the news business — stealing traffic, scraping archives, and churning out synthetic stories that compete with the real thing. And yet, four of America’s largest news organizations — Thomson Reuters, News Corp, People Inc (formerly Dotdash Meredith), and The New York Times — just turned in a combined $5 billion in quarterly revenue and nearly $1.2 billion in profit.

I first came across this coverage in the BoSacks newsletter, which linked to Press Gazette’s original report. The piece details how these companies aren’t just surviving in the AI era; they’re quietly reshaping their models to make it work for them. From AI-powered professional tools to content licensing deals with OpenAI, Amazon, and Meta, they’re finding ways to monetize their content and expand audience engagement — even as Google’s AI-driven search starts serving answers instead of links.

For smaller, niche publishers, the temptation is to shrug this off. “Sure, it’s easy when you have a billion-dollar brand and a legal department the size of my entire staff.” But there’s a lot here that is portable — if you focus on the right pieces.


Lesson 1: Own Your Audience Before AI Owns Your Traffic

One of the clearest takeaways from the big four is how much they’re investing in direct audience relationships. The New York Times hit 11.88 million subscribers, People Inc launched a dedicated app, and even News Corp’s Dow Jones division keeps climbing on digital subscriptions.

For small publishers, the lesson is to stop over-relying on algorithmic referrals. If you’re still counting on Facebook, Google, or Apple News as your main discovery channels, you’re building on borrowed land.

Action:

  • Launch a low-friction email newsletter that delivers high-value, niche-specific updates.
  • Incentivize sign-ups with premium extras — e-books, data sheets, or early access content.
  • Build community spaces (Discord, Slack, or forums) where your most engaged readers gather off-platform.

Lesson 2: Package Your Expertise as a Product, Not Just a Publication

Thomson Reuters isn’t just “doing news.” They’re integrating AI into products like CoCounsel, which bakes their proprietary legal and tax content into Microsoft 365 workflows. It’s sticky, high-margin, and hard for competitors to replicate.

Smaller publishers may not have the dev team to roll out enterprise-level AI tools, but the underlying idea applies: turn your content into something your audience uses, not just reads.

Action:

  • Convert your most-requested guides or reports into downloadable templates, toolkits, or training modules.
  • Create a searchable knowledge base for subscribers, updated with new insights monthly.
  • Partner with a lightweight AI platform to offer custom alerts or summaries in your niche.

Turn insights into income.

Don’t just read about what’s possible — start building it now. I’ve put together a free, printable 90-Day Growth Plan for Small Publishers with simple, actionable steps you can follow today to grow your audience and revenue.


Lesson 3: Monetize Your Archives and Protect Your IP

Both the NYT and News Corp are in legal battles over AI scraping, but they’re also cutting deals to license their content. The message is clear: your back catalog is an asset — treat it like one.

For small publishers, this could mean licensing niche datasets, syndicating evergreen content to allied outlets, or even creating curated “best of” packages for corporate training or education markets.

Action:

  • Audit your archive for evergreen, high-demand topics.
  • Explore licensing or syndication deals with industry associations, trade schools, or niche platforms.
  • Add clear terms of use and copyright notices to protect your content from unauthorized scraping.

Lesson 4: Diversify Revenue Beyond Ads

People Inc is replacing declining print dollars with more profitable digital and e-commerce revenue. The Times is making real money from games, cooking, and even video spin-offs of podcasts.

Smaller publishers don’t need a NYT-sized portfolio to diversify. You just need a second or third income stream that aligns with your audience’s interests.

Action:

  • Launch a paid resource library with niche-specific data, tools, or premium reports.
  • Run virtual events, webinars, or training sessions for a fee.
  • Sell targeted sponsorships or native content in newsletters instead of relying solely on display ads.

The Bottom Line

AI disruption is real — and it’s already changing how readers find and consume news. But the big players are showing that with strong brands, direct audience relationships, and smart product diversification, you can turn the threat into an advantage.

For smaller publishers, the scale is different but the playbook is the same:

  • Control your audience pipeline.
  • Turn your expertise into products.
  • Protect and monetize your archives.
  • Don’t bet your survival on one revenue stream.

It’s not about matching the NYT’s resources; it’s about matching their mindset. In the AI era, the publishers who think like product companies — and treat their audience like customers instead of traffic — will be the ones still standing when the algorithms shift again.

Memorable takeaway: In the AI age, resilience isn’t about the size of your newsroom — it’s about the strength of your audience ties and the creativity of your monetization.

Ready to grow? Grab the free, printable 90-Day Growth Plan for Small Publishers and start building your audience and revenue today.

Daily Links: Sunday, Aug 10th, 2025

In my latest blog post, I dive into how Claude Code has become my go-to tool while experimenting with LLM programming agents. Despite a few hiccups, it has enabled me to create around 12 projects swiftly. If you’re curious about how it’s enhancing my programming workflow or about my offline AI workspace setup, check it out!

Daily Links: Thursday, Aug 7th, 2025

In my latest blog post, I dive into three intriguing topics! First, I explore why AI isn’t turning us into 10x engineers, despite the hype. Then, I share my journey of ditching Google’s search for the more user-friendly Kagi. Finally, I introduce KittenTTS, a cutting-edge text-to-speech model that’s impressively compact. Join me as we unravel these fascinating subjects!

Daily Links: Tuesday, Aug 5th, 2025

In my latest blog post, I share insights on two fascinating articles. Dive into “The Making of D.” to unravel the intriguing backstory, and explore “The Dollar is Dead” to learn about the factors leading to the potential decline of the U.S. dollar. Whether you’re into deep dives or financial foresight, I’ve covered something for you!