What Small Publishers Can Learn from the Big Four’s AI-Defying Quarter

If you’ve been following the headlines, you might think AI is poised to hollow out the news business — stealing traffic, scraping archives, and churning out synthetic stories that compete with the real thing. And yet, four of America’s largest news organizations — Thomson Reuters, News Corp, People Inc (formerly Dotdash Meredith), and The New York Times — just turned in a combined $5 billion in quarterly revenue and nearly $1.2 billion in profit.

I first came across this coverage in the BoSacks newsletter, which linked to Press Gazette’s original report. The piece details how these companies aren’t just surviving in the AI era; they’re quietly reshaping their models to make it work for them. From AI-powered professional tools to content licensing deals with OpenAI, Amazon, and Meta, they’re finding ways to monetize their content and expand audience engagement — even as Google’s AI-driven search starts serving answers instead of links.

For smaller, niche publishers, the temptation is to shrug this off. “Sure, it’s easy when you have a billion-dollar brand and a legal department the size of my entire staff.” But there’s a lot here that is portable — if you focus on the right pieces.


Lesson 1: Own Your Audience Before AI Owns Your Traffic

One of the clearest takeaways from the big four is how much they’re investing in direct audience relationships. The New York Times hit 11.88 million subscribers, People Inc launched a dedicated app, and even News Corp’s Dow Jones division keeps posting digital subscription growth.

For small publishers, the lesson is to stop over-relying on algorithmic referrals. If you’re still counting on Facebook, Google, or Apple News as your main discovery channels, you’re building on borrowed land.

Action:

  • Launch a low-friction email newsletter that delivers high-value, niche-specific updates.
  • Incentivize sign-ups with premium extras — e-books, data sheets, or early access content.
  • Build community spaces (Discord, Slack, or forums) where your most engaged readers gather off-platform.

Lesson 2: Package Your Expertise as a Product, Not Just a Publication

Thomson Reuters isn’t just “doing news.” They’re integrating AI into products like CoCounsel, which bakes their proprietary legal and tax content into Microsoft 365 workflows. It’s sticky, high-margin, and hard for competitors to replicate.

Smaller publishers may not have the dev team to roll out enterprise-level AI tools, but the underlying idea applies: turn your content into something your audience uses, not just reads.

Action:

  • Convert your most-requested guides or reports into downloadable templates, toolkits, or training modules.
  • Create a searchable knowledge base for subscribers, updated with new insights monthly.
  • Partner with a lightweight AI platform to offer custom alerts or summaries in your niche.

Turn insights into income.

Don’t just read about what’s possible — start building it now. I’ve put together a free, printable 90-Day Growth Plan for Small Publishers with simple, actionable steps you can follow today to grow your audience and revenue.


Lesson 3: Monetize Your Archives and Protect Your IP

Both the NYT and News Corp are in legal battles over AI scraping, but they’re also cutting deals to license their content. The message is clear: your back catalog is an asset — treat it like one.

For small publishers, this could mean licensing niche datasets, syndicating evergreen content to allied outlets, or even creating curated “best of” packages for corporate training or education markets.

Action:

  • Audit your archive for evergreen, high-demand topics.
  • Explore licensing or syndication deals with industry associations, trade schools, or niche platforms.
  • Add clear terms of use and copyright notices to protect your content from unauthorized scraping.

Lesson 4: Diversify Revenue Beyond Ads

People Inc is replacing declining print dollars with more profitable digital and e-commerce revenue. The Times is making real money from games, cooking, and even video spin-offs of podcasts.

Smaller publishers don’t need a NYT-sized portfolio to diversify. You just need a second or third income stream that aligns with your audience’s interests.

Action:

  • Launch a paid resource library with niche-specific data, tools, or premium reports.
  • Run virtual events, webinars, or training sessions for a fee.
  • Sell targeted sponsorships or native content in newsletters instead of relying solely on display ads.

The Bottom Line

AI disruption is real — and it’s already changing how readers find and consume news. But the big players are showing that with strong brands, direct audience relationships, and smart product diversification, you can turn the threat into an advantage.

For smaller publishers, the scale is different but the playbook is the same:

  • Control your audience pipeline.
  • Turn your expertise into products.
  • Protect and monetize your archives.
  • Don’t bet your survival on one revenue stream.

It’s not about matching the NYT’s resources; it’s about matching their mindset. In the AI era, the publishers who think like product companies — and treat their audience like customers instead of traffic — will be the ones still standing when the algorithms shift again.

Memorable takeaway: In the AI age, resilience isn’t about the size of your newsroom — it’s about the strength of your audience ties and the creativity of your monetization.

Ready to grow? Grab the free, printable 90-Day Growth Plan for Small Publishers and start building your audience and revenue today.

Daily Links: Sunday, Aug 10th, 2025

In my latest blog post, I dive into how Claude Code has become my go-to tool while experimenting with LLM programming agents. Despite a few hiccups, it has enabled me to create around 12 projects swiftly. If you’re curious about how it’s enhancing my programming workflow or about my offline AI workspace setup, check it out!

Daily Links: Thursday, Aug 7th, 2025

In my latest blog post, I dive into three intriguing topics! First, I explore why AI isn’t turning us into 10x engineers, despite the hype. Then, I share my journey of ditching Google’s search for the more user-friendly Kagi. Finally, I introduce KittenTTS, a cutting-edge text-to-speech model that’s impressively compact. Join me as we unravel these fascinating subjects!

Daily Links: Tuesday, Aug 5th, 2025

In my latest blog post, I share insights on two fascinating articles. Dive into “The Making of D.” to unravel the intriguing backstory, and explore “The Dollar is Dead” to learn about the factors leading to the potential decline of the U.S. dollar. Whether you’re into deep dives or financial foresight, I’ve covered something for you!

Why Most Design Docs Fail—And How to Write One That Won’t

If you’ve ever slogged through pages of technical jargon and bullet points nobody will remember, you know why most design docs flop. They aren’t just boring—they’re useless. Not because the writers can’t write, but because they’re thinking about the wrong things.

This guide is for engineers, tech leads, and architects who want their design docs to move the project forward—not just check a process box.

A note on inspiration:
This approach owes a debt to Grant Slatton’s “How to Write a Good Design Document”, which nails the fundamentals of design doc clarity. The advice below builds on his framework, shaped by mistakes and lessons from my own experience in the trenches.


Why Design Docs Go Nowhere

After years of watching projects stumble or stall despite “approved” design docs, these are the five failure patterns I see again and again:

  1. Compliance over conviction: Docs get written because the process says so, not because anyone is fighting for the best solution.
  2. Ignoring the audience: Authors assume everyone’s already on board, equally informed, and cares as much as they do.
  3. Dodging hard questions: Tough trade-offs and risks are buried under jargon or skipped entirely.
  4. Cleverness over clarity: Writers want to look smart, not make the real issues obvious.
  5. First-draft laziness: The first version gets shipped; nobody cuts, nobody questions what’s essential.

These aren’t writing problems. They’re thinking problems. So here’s how to think differently about writing your next design doc—starting with making the stakes real.


The Design Doc Gut-Check Checklist

1. Make the Stakes Concrete

  • [ ] Quantify consequences: Spell out what happens if you do nothing—lost revenue, increased tech debt, user churn, missed deadlines. Put real numbers or examples on the pain.
  • [ ] Draw boundaries: Say explicitly what’s out of scope or not being solved.

2. Force Yourself to Surface Risks and Trade-offs

  • [ ] List two alternatives and why you’re not choosing them: If you can’t, you’re not thinking hard enough.
  • [ ] Call out unknowns and riskiest assumptions: Write down what could break, what you don’t know, and what would cause this plan to fail. Make your discomfort visible—don’t let someone else expose it in review.

3. Preempt Objections—Before Anyone Else Can

  • [ ] Write down the three hardest questions you expect in review, and answer them in the doc: Don’t wait for someone to grill you in the meeting.
  • [ ] Assume every reviewer is skeptical: If you were in a rival team’s shoes, would you buy this argument?

4. Ruthlessly Cut and Clarify

  • [ ] Trim 30% from your first draft: If you can’t, ask yourself which section you’d defend in a five-minute elevator pitch—and cut the rest.
  • [ ] One idea per paragraph, one sentence summary: If a paragraph can’t be compressed to a single, clear sentence, rewrite it.
  • [ ] Put calculations, benchmarks, and technical specs in an appendix: Keep your main argument uncluttered and easy to follow.

5. Finish with Commitment and Clarity

  • [ ] Be explicit about next steps, owners, and triggers for a redesign: Don’t end on a shrug—define accountability.
  • [ ] Define success and failure with a metric, timeline, or scenario: No hedging.
  • [ ] Test it on someone who wasn’t in the planning meetings: If they don’t get it, neither will your reviewers.

From Experience

I’ve watched launches stall for weeks because a design doc assumed everyone would “just know” about a scaling bottleneck. Nobody called it out directly, so when things failed in QA, everyone acted surprised. If it’s not written down, it doesn’t exist.

On the other hand, the strongest docs I’ve seen say: “Here’s what we know, here’s what we don’t, and here’s how we’ll handle it if we’re wrong.” They made people nervous in review—but those nerves forced the right conversations and saved months down the line.


The Real Test

Write the doc you’d want to inherit from someone else—when the deadlines are real, the system is groaning, and all the easy assumptions have disappeared. If your design doc doesn’t make you a little uncomfortable to write, it won’t be compelling to read.


Major credit to Grant Slatton’s original article, which covers the mechanics with clarity and precision. This checklist aims to push those fundamentals further, turning them into habits you’ll actually use when it matters.

How Smaller Companies Can Start Using AI—Real Lessons Inspired by Shopify

When Shopify’s CEO Tobi Lütke encouraged everyone at his company to make AI tools a natural part of their work, it wasn’t a sudden shift. It came after years of careful preparation—building the right culture, legal framework, and infrastructure. While smaller companies don’t have Shopify’s scale or budget, they can still learn a lot from how Shopify approached AI adoption and adapt it to their own realities.

Start by assembling a cross-functional pilot team of five to seven people—a sales rep, someone from customer support, perhaps an engineer or operations lead. Give this group a modest budget, around $5,000, and 30 days to demonstrate whether AI can help solve real problems. Set clear goals upfront: maybe cut the time it takes to respond to customer emails by 20%, automate parts of sales prospect research to save two hours a week, or reduce repetitive manual data entry in operations by 30%. This focus helps avoid chasing shiny tools without a real payoff.

You don’t need to build your own AI platform or hire data scientists to get started. Many cloud AI services today offer pay-as-you-go pricing, so you can experiment without huge upfront investments. For example, a small customer support team might subscribe to ChatGPT for a few hundred dollars a month and connect it to their helpdesk software to draft faster, more personalized email replies. A sales team could create simple automations with no-code tools like Zapier that pull prospect data from LinkedIn, run it through an AI to generate email drafts, and send them automatically. These kinds of workflows often take less than a week to set up and can improve efficiency by 30% or more.
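The shape of that sales workflow can be sketched in a few lines of Python. This is a minimal illustration, not a working integration: the AI drafting step is stubbed with a template function, and the prospect fields and `draft_outreach_email` helper are made up for the example, since the real step would go through whatever AI service or no-code connector you choose.

```python
# Sketch of the prospect-to-draft pipeline described above.
# The AI step is a stub; in practice it would call your chosen AI
# service or run through a no-code connector.

def draft_outreach_email(prospect: dict) -> str:
    """Stand-in for the AI drafting step (illustrative only)."""
    return (
        f"Hi {prospect['name']},\n\n"
        f"I noticed your work at {prospect['company']} on {prospect['focus']}. "
        "I'd love to share how we might help.\n"
    )

def run_pipeline(prospects: list[dict]) -> list[str]:
    # Step 1: prospect data has already been gathered upstream.
    # Step 2: generate one draft per prospect.
    # Step 3: hand drafts to a human for review before anything is sent.
    return [draft_outreach_email(p) for p in prospects]

drafts = run_pipeline([
    {"name": "Dana", "company": "Acme Media", "focus": "newsletter growth"},
])
print(drafts[0])
```

The point of the sketch is the three-step shape — gather, draft, review — which stays the same whether you wire it up with an API, Zapier, or something else.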

As you experiment, keep a close eye on costs. API calls add up quickly, and a small team making thousands of requests each month might see unexpected bills over $1,000 if you’re not careful. Make sure to monitor usage and set sensible limits during your pilot.
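A budget guardrail like that doesn’t need anything fancy. Here’s a rough sketch of the idea, assuming an average per-request cost and a monthly cap — both numbers are illustrative, not pricing from any specific provider.

```python
# Rough pilot-budget check for pay-as-you-go AI usage.
# Per-request cost and monthly cap are assumed example numbers.

COST_PER_REQUEST = 0.03   # assumed average cost in dollars
MONTHLY_CAP = 1000.00     # the pilot's spending limit

def projected_monthly_cost(requests_so_far: int, days_elapsed: int,
                           days_in_month: int = 30) -> float:
    """Extrapolate current usage to a full month."""
    daily_rate = requests_so_far / max(days_elapsed, 1)
    return daily_rate * days_in_month * COST_PER_REQUEST

def over_budget(requests_so_far: int, days_elapsed: int) -> bool:
    return projected_monthly_cost(requests_so_far, days_elapsed) > MONTHLY_CAP

# 4,000 requests in the first 10 days projects to 12,000 for the month:
# 12,000 * $0.03 = $360, comfortably under the $1,000 cap.
print(over_budget(4000, 10))  # False
```

Even a check this crude, run weekly, catches the "surprise $1,000 bill" scenario before it lands.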

Using AI responsibly means setting some basic ground rules early. Include someone from legal or compliance in your pilot to help create simple guidelines. For instance, never feed sensitive or personally identifiable customer information into AI tools unless it’s properly masked or anonymized. Also, require human review of AI-generated responses before sending them out, at least during your early adoption phase. This “human-in-the-loop” approach catches errors and builds trust.

Training people to use AI effectively is just as important as the tools themselves. Instead of long, formal classes, offer hands-on workshops where your teams can try AI tools on their real daily tasks. Encourage everyone to share what worked and what didn’t, and identify “AI champions” who can help their teammates navigate challenges. When managers and leaders openly use AI themselves and discuss its benefits and limitations, it sets a powerful example that using AI is part of how work happens now.

Tracking results doesn’t require fancy analytics. A simple Google Sheet updated weekly can track how many AI requests team members make, estimate time saved on tasks, and note changes in customer satisfaction. If the pilot isn’t delivering on its goals after 30 days, pause and rethink before expanding.

Keep in mind common pitfalls along the way. Don’t rush to automate complex workflows without testing—early AI outputs can be inaccurate or biased. Don’t assume AI will replace deep expertise; it’s a tool to augment human judgment, not a substitute. And don’t overlook data privacy—sending customer information to third-party AI providers without proper agreements can lead to compliance headaches.

Shopify’s success came from building trust with their legal teams, investing in infrastructure that made AI accessible, and carefully measuring how AI use related to better work outcomes. While smaller companies might not create internal AI proxies or sophisticated dashboards, they can still embrace that spirit: enable access, encourage experimentation, and measure what matters.

By starting with a focused pilot, using affordable tools, setting simple but clear usage rules, training through hands-on practice, and watching the results carefully, your company can unlock AI’s potential without unnecessary risk or wasted effort.

Daily Links: Saturday, Aug 2nd, 2025

In this blog post, I explore the keys to being a wise optimist when it comes to science and technology. I’ll delve into how to balance hope and realism, fostering a mindset that embraces innovation while staying grounded. Visit the link to read more and join me on this insightful journey!

Beyond the AI Boost: The Human Frontier of Mastery

Based on “AI is a Floor Raiser, not a Ceiling Raiser”

Excerpt:
When AI tools deliver instant scaffolding and context‑aware answers, beginners and side‑projecters can sprint past the usual startup slog. But no shortcut replaces the mountain‑high effort needed for true mastery and dark horse novelty.


I first stumbled across Elroy Bot’s incisive piece on AI’s new role in learning and product development while wrestling with a gnarly bug in my side project. Within minutes, I had a working patch—courtesy of an AI assistant—but the real insight hit me afterward: AI didn’t conquer the problem; it simply handed me a ladder to climb the first few rungs.

In the article, the author frames AI as a “floor raiser”—a force that lifts novices and busy managers to basic proficiency at blinding speed. Yet, when it comes to reaching the ceiling of deep expertise or crafting truly novel works, AI still lags behind.

Why the Floor Rises Faster

  • Personalized On‑Demand Coaching: Instead of scouring StackOverflow for a snippet, AI answers your question in context, at your level. You start coding frameworks or understanding new concepts in hours, not weeks.
  • Automating the Mundane: Boilerplate code, rote research, and template tasks get handled by AI, freeing you to focus on the pieces that actually matter.
  • Bridging Gaps in Resources: AI tailors explanations to your background—no more hunting for that one tutorial that links your existing skills to the new framework you’re tackling.

“For engineering managers and side‑projecters, AI is the difference between a product that never existed and one that ships in days.”

Why the Ceiling Isn’t Coming Down

Despite these boosts, mastering a large legacy codebase or producing a blockbuster-quality creative work still demands:

  1. Deep Context: AI doesn’t grasp your business’s ten-year-old quirks or proprietary requirements.
  2. Novelty & Creativity: Audiences sniff out derivative content; true originality still springs from human intuition.
  3. Ethical and Critical Judgment: Complex or controversial subjects require source vetting and nuanced reasoning—areas where AI’s training data can mislead.

Balancing the Ecosystem

The ripple effects are already visible:

  • Teams lean on AI to prototype faster, shifting headcount from boilerplate work to high‑value innovation.
  • Training programs must evolve: pairing AI‑powered tutoring with hands‑on mentorship to prevent skill atrophy.
  • Organizations that overinvest in AI floor-raising without nurturing their human “ceiling climbers” risk plateauing at mediocrity.

AI may give you the ladder, but only your creativity, judgment, and perseverance will carry you to the summit. Use these tools to clear the base camp—then keep climbing toward true mastery, where human insight still reigns supreme.

Systems Thinking & the Bitter Lesson: Building Adaptable AI Workflows

In “Learning the Bitter Lesson,” Lance Martin reminds us that in AI—and really in any complex system—the simplest, most flexible designs often win out over time. As a systems thinker, I can’t help but see this as more than just an AI engineering memo; it’s a blueprint for how we build resilient, adaptable organizations and workflows.


Why Less Structure Feels Paradoxically More Robust
I remember the first time we tried to optimize our team’s editorial pipeline. We had checklists, rigid approval stages, and dozens of micro-processes—each put in place with good intentions. Yet every time our underlying software or staffing shifted, the whole thing groaned under its own weight. It felt eerily similar to Martin’s early “orchestrator-worker” setup: clever on paper, but brittle when real-world conditions changed.

Martin’s shift—from hardcoded workflows to multi-agent systems, and finally to a “gather context, then write in one shot” approach—mirrors exactly what many of us have lived through. You add structure because you need it: constrained compute, unreliable tools, or just the desire for predictability. Then, slowly, that structure calcifies into a bottleneck. As tool-calling got more reliable and context windows expanded, his pipeline’s parallelism became a liability. The cure? Remove the scaffolding.
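The two shapes Martin contrasts can be sketched side by side. This is only an illustration of the structure, with a stub standing in for the model call — `call_model` and the section names are invented for the example, not anything from his post.

```python
# Illustrative contrast: orchestrated pipeline vs. "gather context,
# then write in one shot." `call_model` is a stub for any LLM call.

def call_model(prompt: str) -> str:
    return f"[draft based on: {prompt}]"

def orchestrated(sections: list[str]) -> str:
    # Old shape: generate each section separately, then stitch them
    # together. Parallel parts can drift out of sync with each other.
    return "\n".join(call_model(s) for s in sections)

def one_shot(sections: list[str]) -> str:
    # New shape: gather all context first, then make a single
    # generation pass that sees everything at once.
    context = "\n".join(sections)
    return call_model(context)

notes = ["market overview", "competitor analysis"]
print(one_shot(notes))
```

Notice that nothing about the one-shot version is cleverer — it’s the scaffolding that disappeared once larger context windows made the partitioning unnecessary.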


Seeing the Forest Through the Trees
Here’s the systems-thinking nugget: every piece of scaffolding you bolt onto a process is a bet on the current state of your environment. When you assume tool-calling will be flaky, you build manual checks; when you assume parallelism is the fastest path, you partition tasks. But every bet has an expiration date. The real power comes from designing systems whose assumptions you can peel away like old wallpaper, rather than being forced to rip out the entire house.

In practical terms, that means:

  1. Mapping Your Assumptions: List out “why does this exist?” for every major component. Is it there because we needed it six months ago, or because we still need it today?
  2. Modular “Kill Switches”: Build in feature flags or toggles that let you disable old components without massive rewrites. If your confidence in a new tool goes up, you should be able to flip a switch and remove the old guardrails.
  3. Feedback Loops Over Checklists: Instead of imagining every exception, focus on rapid feedback. Let the system fail fast, learn, and self-correct, rather than trying to anticipate every edge case.
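Point 2 — the kill switch — is easy to picture in miniature. The sketch below is a hypothetical flag guarding a legacy human-review step; the flag name and `publish` function are invented for illustration, and a real system would likely use a proper feature-flag service rather than a module-level dict.

```python
# Minimal kill-switch sketch: a feature flag that lets you disable an
# old guardrail without a rewrite. The flag name is illustrative.

FLAGS = {"manual_review_guardrail": True}  # flip to False once trust is earned

def flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def publish(draft: str) -> str:
    if flag_enabled("manual_review_guardrail"):
        # Legacy path: hold output for human sign-off.
        return f"QUEUED FOR REVIEW: {draft}"
    # New path: the guardrail has been switched off, not ripped out,
    # so it can be switched back on if confidence drops.
    return f"PUBLISHED: {draft}"

print(publish("weekly report"))  # queued while the flag is on
```

The design choice is the asymmetry: removing the guardrail is one line of config, while putting it back after a rip-out would be a rewrite.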

From Code to Culture
At some point, this philosophy goes beyond architecture diagrams and hits your team culture. When we start asking, “What can we remove today?” we encourage experimentation. We signal that it’s OK to replace yesterday’s best practice with today’s innovation. And maybe most importantly, we break the inertia that says “if it ain’t broke, don’t fix it.” Because in a world where model capabilities double every few months, “not broken” is just the lull before old code bites you in production.


Your Next Steps

  • Inventory Your Bottlenecks: Take ten minutes tomorrow to jot down areas where your team or tech feels sluggish. Are any of those due to legacy workarounds?
  • Prototype the “One-Shot” Mindset: Pick a small project—maybe a weekly report or simple dashboard—and see if you can move from multi-step pipelines to single-pass generation.
  • Celebrate the Removals: Host a mini “structure cleanup” retro. Reward anyone who finds and dismantles an outdated process or piece of code.

When you peel back the layers, “Learning the Bitter Lesson” isn’t just about neural nets and giant GPUs—it’s about embracing change as the only constant. By thinking in systems, you’ll recognize that the paths we carve today must remain flexible for tomorrow’s terrain. And in that flexibility lies true resilience.

If you’d like to dive deeper into the original ideas, I encourage you to check out Learning the Bitter Lesson by Lance Martin—an essential read for anyone building the next generation of AI-driven systems.