AI in the Newsroom: Why It Should Be Your Smartest Intern, Not Your Star Reporter

Practical AI tools and governance tips for small and niche newsrooms that want smarter reporting, not robot reporters.

If you’ve been anywhere near a journalism conference in the past year, you’ve probably heard the AI hype: “It’s going to replace reporters.” “It’s the future of investigative journalism.” “It’s going to write all our stories for us.”

But here’s the reality check, courtesy of journalist-technologist Jaemark Tordecilla — someone who’s actually been in the trenches building AI for newsrooms. In a recent INMA piece, Tordecilla put it plainly: AI is a terrible journalist. It doesn’t chase leads, smell a rat, or spot the story between the lines. What it does do exceptionally well is the grunt work — the sifting, sorting, and summarizing that lets you get to the important stuff faster.

And that’s the mental shift small and niche news organizations need to make: stop asking AI to be the reporter, and start asking it to make your reporters’ jobs easier.


Tools That Complement, Not Replace, Human Skill

If you’re running a small newsroom with limited staff, think of AI as your hyper-efficient intern — one that doesn’t sleep, doesn’t take lunch breaks, and doesn’t mind doing the boring bits.

Here are a few practical tools you could build or adopt:

  • Data Sifters
    AI models that can ingest giant PDF reports, meeting transcripts, or spreadsheets and spit out bullet-point summaries or proposed headlines. Your reporter glances at the output and decides if it’s worth a deeper dive.
  • Budget Chatbots
    Exactly like Tordecilla’s tool for “chatting” with the Philippines’ 700,000-line national budget. For local publishers, this could mean feeding your city or county budget into an AI tool and asking questions like: How much did we spend on police overtime last year? or Which departments’ budgets increased the most?
  • Pattern Spotters
    Tools that flag anomalies or trends in datasets — e.g., tracking how often a government department awards contracts to the same vendor, or how property sales spike in certain neighborhoods. (A minimal sketch follows this list.)
  • Fast-Format Converters
    AI-assisted workflows that can take a long-form investigative article and quickly produce a podcast script, social video captions, or illustrated explainers. The key: these formats should be reviewed and fine-tuned by humans before publishing.
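To make the "pattern spotter" idea concrete, here is a minimal sketch in Python using pandas. The CSV file and the column names (department, vendor, amount) are hypothetical placeholders; treat this as a starting point for a reporter's first pass, not a finished tool.

```python
# Minimal pattern-spotter sketch: flag vendors that win an outsized share
# of a department's contracts. The file and column names ("department",
# "vendor", "amount") are illustrative placeholders.
import pandas as pd

contracts = pd.read_csv("contracts.csv")  # one row per awarded contract

# Count awards and total dollars per department/vendor pair
awards = (
    contracts.groupby(["department", "vendor"])
    .agg(num_awards=("amount", "size"), total_amount=("amount", "sum"))
    .reset_index()
)

# Share of each department's awards going to each vendor
awards["dept_awards"] = awards.groupby("department")["num_awards"].transform("sum")
awards["share"] = awards["num_awards"] / awards["dept_awards"]

# Flag pairs where one vendor takes more than half of a department's awards
flags = awards[(awards["share"] > 0.5) & (awards["num_awards"] >= 5)]
print(flags.sort_values("share", ascending=False).to_string(index=False))
```

Anything the script flags is a lead, not a story: the human reporter still has to make the calls and find out why the pattern exists.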

The Governance Question: Who’s Driving This Thing?

If AI is going to become part of your newsroom’s workflow, you need rules of the road. For small and niche publishers, governance doesn’t have to be a 40-page corporate policy, but it does need to answer some core questions:

  • Transparency: Will you disclose when AI is used in research, production, or content creation? How?
  • Attribution: Who “owns” AI-generated outputs in your newsroom — and how do you credit sources if AI pulls from third-party data?
  • Bias Checks: How will you review AI-generated summaries or insights for skew, especially when dealing with politically sensitive topics?
  • Ethical Boundaries: Where will you not use AI? (For example, generating deepfake-like images of people, or creating composite quotes.)
  • Review Protocol: Who signs off on AI-assisted work before it goes public? Even small teams should have a second set of eyes on anything AI touches.

A lightweight governance structure might be as simple as a one-page “AI Use Policy” taped to the newsroom wall. The important part is that everyone knows the rules — and follows them.


Why This Matters for Small Newsrooms

Big national outlets can afford to burn cycles experimenting with AI. You probably can’t. That’s why your AI playbook should focus on high-leverage tasks: the work that’s essential but time-consuming, where AI can give you a multiplier effect without compromising your credibility.

The payoff? More time for your reporters to be out in the community, making calls, filing FOIA requests, and doing the human work AI can’t touch.


Memorable Takeaway:
“AI is good at finding patterns in data; humans are good at finding meaning in those patterns. Keep it that way.”

Daily Links: Wednesday, Aug 13th, 2025

In this post, I explore some intriguing links I’ve been reading. You’ll find insights on Apple’s upcoming Siri updates and supply-chain strategies courtesy of Bloomberg. There’s also a fantastic free guide on improving your social skills. If you’re a tech enthusiast, you’ll love how Claude Code enhances my work and fun. Plus, discover how my quest for the perfect to-do app led me back to the simplicity of a .txt file!

Stop Arguing, Start Asking: Why Prompt Literacy is the Next Universal Skill

AI isn’t magic. It’s not malicious. It’s not even confused.
It’s a tool — and like any tool, what you get out of it depends on how you use it.

Two recent takes — Linda Ruth’s Stop Arguing with AI: Prompting for Power in the Publishing World and Kelvin Chan’s AP piece One Tech Tip: Get the most out of ChatGPT and other AI chatbots with better prompts — arrive at the same destination from different roads. One speaks to editors and publishers; the other to everyday AI users. But the core message is identical: the quality of your AI output starts and ends with the quality of your input. (Hat tip to the always-essential BoSacks newsletter where I first spotted both articles.)

From the Newsroom to Your Laptop: Same Rule, Different Context

Ruth frames AI as part of a publishing professional’s toolkit — right up there with headline writing and layout design. If you ask a model for feedback on a manuscript without providing the manuscript, expect nonsense in return. It’s like asking a book reviewer to critique a novel they haven’t read.

Chan’s advice mirrors this in broader strokes: skip vague prompts, give clear goals, and feed the model context and constraints. Add personas to shape tone, specify your audience, and don’t be afraid to iterate. The first prompt is rarely the last.

The Practitioner’s Mindset

Whether you’re an editor, marketer, small business owner, or teacher, three habits will instantly improve your AI game:

  1. Provide context — the more background you give, the better the results.
  2. Set constraints — word count, format, style — so you get something usable.
  3. Iterate — treat AI as a collaborator, not a vending machine.

Think of AI as a “brilliant but distractible employee”: give it structure, keep it focused, and check its work.
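Here is one way those three habits might look in practice, sketched with the OpenAI Python SDK. The model name, file, and prompt wording are placeholders of my own, not a setup either article prescribes.

```python
# A minimal sketch of a context-rich, constrained prompt using the OpenAI
# Python SDK (pip install openai). Model name and prompt details are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

manuscript_excerpt = open("chapter_one.txt").read()  # context: the actual text

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Persona: shape the tone and point of view
        {"role": "system", "content": "You are a developmental editor for literary fiction."},
        # Context + constraints: audience, task, format, length
        {"role": "user", "content": (
            "Audience: adult literary-fiction readers.\n"
            "Task: give feedback on pacing and voice in the excerpt below.\n"
            "Format: 5 bullet points, each under 25 words.\n\n"
            f"EXCERPT:\n{manuscript_excerpt}"
        )},
    ],
)
print(response.choices[0].message.content)
# Iterate: follow up in the same conversation rather than settling for draft one.
```

The point isn't the specific API; it's that the persona, the context, and the constraints are all stated up front instead of left for the model to guess.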

The Bigger Picture

The skeptic will say this is common sense — ask better questions, get better answers — and they’re right. But prompt literacy is becoming a baseline skill, much like search literacy was twenty years ago. The contrarian might argue AI should adapt to us, not the other way around. The systems thinker sees a familiar pattern: early adopters learn the machine’s language, then the tools evolve until the complexity disappears behind the scenes.

Until that happens, prompt engineering is the bridge between what AI can do and what it will actually do for you.


Turn questions into results. Don’t just wonder what AI can do — start guiding it. Download my free, printable AI Prompt Quick Guide for proven prompt formulas you can use today.


Action Steps You Can Use Today

  • Create a personal or team prompt library for recurring tasks.
  • Refine in conversation — don’t settle for the first draft.
  • Experiment with personas and audiences to see how the output shifts.
  • Always verify — a polished answer can still be wrong.

In short: Master the prompt, master the tool — and in mastering the tool, you expand your reach.

Crisis Mode: Why Small Publishers Should See Opportunity in the Chaos

Every now and then, I read something from a media veteran that feels like it’s aimed right at the big players — but still lands squarely in the lap of small and niche publishers. That’s exactly what happened with Chris Duncan’s upcoming keynote at the FIPP World Media Congress.

Duncan has spent his career steering through storms — launching The Times on the iPad (when that was brand new territory), leading through COVID, and now advising on the AI tidal wave that’s hitting every corner of publishing. His core message? Publishing thrives in crisis.

Now, “thrives” might feel like a stretch if you’re running a three-person operation and trying to keep the lights on. But here’s where the small guys might actually have an edge: when the ground shifts under everyone, agility beats scale.


What small publishers should take away

1. AI isn’t just a newsroom curiosity — it’s a traffic problem.
Yes, AI tools can help you cut costs and automate grunt work. But Duncan’s warning is clear: generative AI could cut off more referral traffic than Google already has. For small publishers, that means you can’t afford to be a “search-dependent” business. Your audience has to remember you and seek you out.

2. Innovation isn’t optional.
He’s blunt: mobile journalism hasn’t seen much truly new since about 2012. That’s both sobering and exciting. If you’re a niche publisher, you don’t need to outspend The New York Times — you need to outthink them in your lane. That might mean interactive features, audio companions to your stories, or even an “insider’s app” for your core audience.

3. The platform era is shifting — be ready.
Duncan thinks we’re past the peak of Google and Meta’s dominance. That’s a rare window to build distribution without depending entirely on them. When big platforms are distracted by regulators and market shifts, you can make a move to deepen your direct audience connections.


Where to put your focus next

Here are three action items I think every small or niche publisher should put on their whiteboard after reading Duncan’s comments:

  1. Build direct audience pipelines.
    Start or double down on newsletters, podcasts, private communities, or events. Make sure your readers’ path to your content doesn’t depend on an algorithm.
  2. Test one “genuinely new” product feature in the next year.
    Could be a micro-app, an interactive archive, or a new storytelling format. The goal is to prove you can innovate without waiting for the industry to hand you a playbook.
  3. Scenario-plan for a search traffic cliff.
    If your Google referrals dropped 50% tomorrow, how would you adapt? Do that planning now while you have the luxury of time.

Duncan’s not saying this will be easy — far from it. But he is saying that urgency forces experimentation, and experimentation is where breakthroughs happen. For small publishers, the trick is to use your speed, focus, and audience intimacy as weapons in this fight.

You may not have a “war room” of strategists, but you do have something the giants often lack: a direct line to a loyal audience that cares deeply about your coverage. That’s your moat. Guard it, grow it, and use this crisis moment to get a little scrappy.

If you want the full keynote preview, it’s worth a read: Publishers work best in some form of crisis.


Takeaway for the fridge:
Crisis is coming. The question is — will you let it happen to you, or will you make it work for you?

Daily Links: Tuesday, Aug 12th, 2025

In my latest blog post, I dive into the architectural advancements from GPT-2 to gpt-oss and compare them with Qwen3. I also spotlight “bolt,” a high-performance, real-time optimized, statically typed embedded language written in C. It’s an exciting exploration into cutting-edge AI and coding technologies that I think you’ll enjoy!

MCP’s Simplicity is a Feature… Until It’s a Disaster

Back in college, I worked on a project where our front-end app talked to the back end through XML/RPC.
It wasn’t glamorous. XML was verbose, the tooling was clunky by today’s standards, and debugging often meant wading through nested tags that looked like an ancient library card catalog.
But here’s the thing: we had well-defined calls, structured data, and clear documentation.
Every function was described. Every parameter had a type. If you tried to pass an integer where a string belonged, the compiler—or the generated stubs—caught it. That wasn’t just nice for development; it meant our API could be reused, secured, and maintained without someone having to reverse-engineer it later.

Fast-forward to today, and we have the Model Context Protocol (MCP), being pitched as the “USB-C for AI tools.” In theory, it’s a universal connector between AI agents and the APIs or services they need. In practice? Julien Simon’s recent piece, Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises, makes a pretty convincing case that MCP is ignoring everything XML/RPC, CORBA, SOAP, gRPC, and others already taught us.

Julien lays it out bluntly:

  • Type safety? MCP uses schemaless JSON, with optional hints nobody enforces.
  • Cross-language consistency? Each implementation is on its own—Python’s JSON isn’t JavaScript’s JSON, and good luck with float precision.
  • Security? OAuth arrived years too late, and even now only for HTTP.
  • Observability? Forget distributed tracing—you’re back to grepping logs like it’s 1999.
  • Cost tracking? None. You’ll just get a big bill and a mystery as to why.

This isn’t just an “engineers grumbling about elegance” problem. It’s a real-world operational risk. Enterprises adopting MCP today are baking in fragility: AI services making millions of calls without retries, without version control, without guarantees about what data comes back. Julien calls it the “patchwork protocol” problem—critical features aren’t in MCP itself, but scattered across third-party extensions. That’s how you end up with multiple teams using slightly different auth libraries that don’t interoperate, each needing its own audit.

If anything, the simplicity of MCP right now is exactly what makes it dangerous. It’s fast to integrate—just JSON over a transport—but that same minimalism hides the fact that the tooling isn’t ready for high-stakes production. In the AI gold rush, “move fast and break things” isn’t just a motto; it’s a business plan. But when what breaks is a healthcare AI’s medication dosing recommendation or a bank’s trading logic, the stakes are far higher than a crashed demo.

From my own XML/RPC days, I can say this: structure, enforced contracts, and predictable behavior might feel like overhead when you’re building a prototype. But in production? They’re the guardrails that keep you from careening off a cliff at 70 miles an hour.
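To show what "enforced contracts" buy you in practice, here's a small sketch using pydantic. This is not MCP's actual API, and the DosageRequest model and its fields are invented for illustration; it simply demonstrates validating a JSON payload against an explicit schema before acting on it.

```python
# Sketch of an enforced contract at a tool boundary (not MCP's real API):
# validate an incoming JSON payload against an explicit schema before use.
# The DosageRequest model and its fields are invented for illustration.
from pydantic import BaseModel, Field, ValidationError

class DosageRequest(BaseModel):
    patient_id: str
    drug: str
    weight_kg: float = Field(gt=0)
    dose_mg: float = Field(gt=0)

raw_payload = {"patient_id": "p-102", "drug": "amoxicillin",
               "weight_kg": "seventy", "dose_mg": 500}  # bad type slips in

try:
    request = DosageRequest.model_validate(raw_payload)
except ValidationError as err:
    # With schemaless JSON this string would flow straight into the tool;
    # with an enforced contract it is rejected at the boundary, with a reason.
    print(err)
```

That rejection message is the whole argument in miniature: the error surfaces at integration time, not in production when an agent has already acted on bad data.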

MCP doesn’t need to turn into CORBA’s kitchen-sink complexity, but it does need to grow up—fast. Schema versioning, built-in tracing, strong type validation, standardized error handling, and cost attribution should be table stakes, not wishlist items. Otherwise, we’re just re-learning the same painful lessons our predecessors solved decades ago.


Read Julien Simon’s full article here: Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises

Coding with AI on a Budget — My Structured, Multi-Model Workflow

I’m a heavy AI user, but I don’t just open ten tabs and throw the same question at every model I can find. My workflow is deliberate, and every model has a defined role in the process.

It’s not that the “AI buffet” style is wrong — in fact, this excellent guide is a perfect example of that approach. But my own style is more like a relay race: each model runs its leg, then hands off to the next, so no one’s doing a job they’re bad at.


Phase 1 — Discovery & Requirements (ChatGPT as the Architect)

When I’m starting something new, I begin with long back-and-forth Q&A sessions in ChatGPT.

  • The goal is to turn fuzzy ideas into clear, testable requirements.
  • I’ll ask “what if?” questions, explore trade-offs, and refine scope until I have a solid first draft of requirements and functional specs.

Why ChatGPT? Because it’s great at idea shaping and structured writing — and I can quickly iterate without burning expensive tokens.


Phase 2 — Critique & Refinement (Gemini as the Critic)

Once I have a draft, I hand it over to Gemini 2.5 Pro.

  • Gemini acts like a tough peer reviewer — it questions assumptions, spots gaps, and points out edge cases.
  • I take Gemini’s feedback back to ChatGPT for edits.
  • We repeat this loop until the document is solid enough to hand off to implementation.

This step makes the coding phase dramatically smoother — Claude Code gets a blueprint, not a napkin sketch.


Phase 3 — Implementation (Claude Code as the Builder)

With specs locked in, I move to Claude Code for the actual build.

  • I prep the context using a tool like AI Code Prep GUI to include only the relevant files (a rough stand-in sketch follows this list).
  • Claude Code follows instructions well when the brief is crisp and the noise is low.
  • This is where the investment in phases 1 and 2 pays off — Claude isn’t guessing, it’s executing.
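For readers who want to see what that context-prep step amounts to, here's a rough stand-in in Python. It is not the AI Code Prep GUI itself; the directory, file extensions, and keyword filter are placeholders for whatever is relevant to your task.

```python
# A rough stand-in for the context-prep step, not the AI Code Prep GUI itself:
# gather only the files relevant to the task into one paste-ready block.
# The directory, extensions, and keyword filter are illustrative placeholders.
from pathlib import Path

PROJECT = Path("my_project/src")
RELEVANT_SUFFIXES = {".py", ".toml"}
KEYWORD = "invoice"  # only include files that mention the feature being changed

chunks = []
for path in sorted(PROJECT.rglob("*")):
    if path.is_file() and path.suffix in RELEVANT_SUFFIXES:
        text = path.read_text(encoding="utf-8", errors="ignore")
        if KEYWORD in text:
            chunks.append(f"### {path}\n{text}")

context = "\n\n".join(chunks)
Path("context_for_claude.md").write_text(context, encoding="utf-8")
print(f"{len(chunks)} files, {len(context):,} characters of context prepared")
```

Whatever tool you use, the goal is the same: a crisp brief with low noise, so the builder model executes instead of guessing.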

Phase 4 — Specialist Consultations (Free & Budget Models)

If something tricky comes up — a gnarly bug, architectural uncertainty — I call in a specialist.

  • For deep problem-solving: o3, GLM 4.5, Qwen3 Coder, Kimi K2, or DeepSeek R1.
  • For alternate perspectives: Poe.com’s Claude 4, OpenRouter’s o4-mini, or Perplexity for research.
  • The point is diagnosis, not doing the build work.

These models are often free (with daily credits or token grants) and help me avoid overusing paid API calls.


Why This Works for Me

  • Role clarity: Each model does the job it’s best at.
  • Lower costs: Expensive models are reserved for hard, high-value problems.
  • Better output: The spec→critique→build pipeline reduces rework.
  • Adaptability: I can swap out models as free-tier offers change.

Closing Thoughts

AI isn’t magic — you are.
The tools only work as well as the process you put them in. For me, that process is structured and deliberate. If you prefer a more exploratory, multi-tab style, check out this free AI coding guide for an alternate perspective. Both approaches can work — the important thing is to know why you’re using each model, and when to pass the baton.

Three Tiny Thoughts Worth Carrying Into the Week

Inspired by the Farnam Street newsletter, Brain Food – August 10, 2025

Every week, the Brain Food newsletter from Farnam Street drops a “Tiny Thoughts” section — short, sharp ideas that stick with you. This week’s three were especially good:

  1. Telling yourself you’ll do it tomorrow is how dreams die.
  2. The problem with success is that it teaches you the wrong lessons. What worked yesterday becomes religion, and religions don’t adapt.
  3. Good decision-making isn’t about being right all the time. It’s about lowering the cost of being wrong and changing your mind. When mistakes are cheap, you can move fast and adapt. Make mistakes cheap, not rare.

Procrastination kills momentum, success can calcify into dogma, and fear of mistakes can freeze progress. The common cure is movement — doing the thing today, questioning yesterday’s formulas, and lowering the stakes so you can act and adapt quickly. Make action a habit, flexibility a strength, and mistakes a tool for learning.


If you want more like this, you can subscribe to Farnam Street’s excellent Brain Food newsletter.

2025: The Year the Modern World Lost Its Mind

What our current tech-saturated moment has in common with the age of bicycles, Model Ts, and nervous breakdowns

In 1910, the world was gripped by the whir of engines, the shimmer of skyscrapers, and the idea that maybe—just maybe—modern life was moving too fast for the human mind to handle. In 2025, the scenery has changed—electric cars instead of Model Ts, AI chatbots instead of Kodak cameras—but the sensation? Uncannily familiar.

This piece was inspired by Derek Thompson’s excellent essay, “1910: The Year the Modern World Lost Its Mind”, which explores how the early 20th century’s technological vertigo mirrors our own moment. Reading it felt less like a history lesson and more like holding up a mirror to 2025.

This is the year the modern world, in its infinite wisdom, decided to sprint into the future with no map, no seatbelt, and a half-charged phone battery. We’re living at warp speed, and everyone’s nervously checking to see if anyone else feels queasy.


1. The Speed Problem

If the early 1900s had the bicycle craze, we’ve got the everything craze. The 2020s are a blur of quarterly product launches, algorithm updates, and overnight viral trends. Your phone isn’t just in your pocket—it’s in your bloodstream. News cycles collapse into hours; global events live and die in the span of a long lunch break.

Like in 1910, speed isn’t just mechanical—it’s existential. We aren’t moving faster to get somewhere. We’re moving faster so we don’t feel like we’re falling behind.


2. The Nervous Breakdown Economy

A century ago, doctors diagnosed “American Nervousness” among white-collar workers who couldn’t keep pace with the new tempo of life. Today, we’ve just swapped the sanatorium for Slack. “Burnout” is our word for it, but the symptoms are eerily similar: fatigue, anxiety, a sense of being perpetually “on call.”

Our workplaces run on real-time messages, constant notifications, and the lurking fear that AI might be both your assistant and your replacement. If in 1910 the railway clerk feared the telegraph machine, in 2025 the copywriter fears the autocomplete suggestion.


3. The Artistic Backlash

In 1910, Stravinsky, Kandinsky, and Picasso reached into the deep past to make sense of the machine age. In 2025, artists are doing the same—except the “past” might be analog film cameras, vinyl records, or hand-drawn zines. The hottest design aesthetic right now is “slightly broken,” as if imperfection itself is a protest against AI’s cold precision.

The more our tools can flawlessly mimic reality, the more we crave something they can’t—flaws, accidents, and human fingerprints.


4. Competing Theories of Human Nature

In the early 20th century, Max Weber thought modern work ethic was an extension of religious discipline. Freud thought it was a repression of primal urges. In 2025, we’re still arguing the same point—just swap “religion” for “productivity culture” and “primal urges” for “doomscrolling.”

Is the AI revolution the ultimate expression of human ingenuity or the ultimate suppression of it? Are we using these tools to expand our potential—or outsourcing so much of ourselves that we forget what we’re capable of?


Conclusion: The Loop We Can’t Escape

History isn’t a straight line—it’s a loop. In 1910, the world gasped at the pace of change, feared the toll it would take on the mind, and questioned whether our shiny new machines were serving us or hollowing us out. In 2025, we’re running the same circuit, just on faster, smarter, and more invisible tracks.

We tell ourselves that our anxieties are unique, that no one before has felt the strange cocktail of awe and dread that comes with watching the future arrive early. But the truth is, every generation has looked into the whirring heart of its own inventions and wondered if it built a better world—or just built a bigger cage.

The choice before us now isn’t whether technology will change us—it already has. The choice is whether we can meet that change with the same mix of creativity, resistance, and humanity that our predecessors brought to their own dizzying moment. If 1910 proved anything, it’s that even in the age of vertigo, we can still plant our feet—if we remember to look up from the blur and decide where we actually want to go.

Because if we don’t, “the year we lost our minds” won’t be a moment in history. It’ll be a permanent address.


Then & Now: Technology, Anxiety, and Culture

| Category | 1910 | 2025 |
| --- | --- | --- |
| Breakthrough Technologies | Automobiles, airplanes, bicycles, skyscrapers, phonograph, Kodak camera | AI chatbots, EVs & self-driving cars, drones, mixed reality headsets, quantum computing |
| Pace of Change | Decades of industrial innovations compressed into a few years | Continuous, globalized tech updates delivered instantly |
| Cultural Anxiety | “American Nervousness” (neurasthenia), fear of machines dehumanizing society | Burnout, “always-on” culture, fear of AI replacing human work |
| Moral Panic | Women on bicycles seen as socially and sexually disruptive | AI-generated art/writing seen as undermining human creativity |
| Artistic Reaction | Stravinsky’s The Rite of Spring, Kandinsky’s abstraction, Picasso’s primitivism | Analog revival (vinyl, film photography), glitch aesthetics, AI art critique |
| Intellectual Debate | Weber: work ethic aligns with modern capitalism; Freud: modernity represses human instincts | Productivity culture vs. digital well-being; tech optimism vs. tech doom |
| Public Sentiment | Awe at progress, fear of losing humanity | Excitement about AI’s potential, anxiety about its societal cost |

What Small Publishers Can Learn from the Big Four’s AI-Defying Quarter

If you’ve been following the headlines, you might think AI is poised to hollow out the news business — stealing traffic, scraping archives, and churning out synthetic stories that compete with the real thing. And yet, four of America’s largest news organizations — Thomson Reuters, News Corp, People Inc (formerly Dotdash Meredith), and The New York Times — just turned in a combined $5 billion in quarterly revenue and nearly $1.2 billion in profit.

I first came across this coverage in the BoSacks newsletter, which linked to Press Gazette’s original report. The piece details how these companies aren’t just surviving in the AI era; they’re quietly reshaping their models to make it work for them. From AI-powered professional tools to content licensing deals with OpenAI, Amazon, and Meta, they’re finding ways to monetize their content and expand audience engagement — even as Google’s AI-driven search starts serving answers instead of links.

For smaller, niche publishers, the temptation is to shrug this off. “Sure, it’s easy when you have a billion-dollar brand and a legal department the size of my entire staff.” But there’s a lot here that is portable — if you focus on the right pieces.


Lesson 1: Own Your Audience Before AI Owns Your Traffic

One of the clearest takeaways from the big four is how much they’re investing in direct audience relationships. The New York Times hit 11.88 million subscribers, People Inc launched a dedicated app, and even News Corp’s Dow Jones division keeps climbing on digital subscriptions.

For small publishers, this means stop over-relying on algorithmic referrals. If you’re still counting on Facebook, Google, or Apple News as your main discovery channels, you’re building on borrowed land.

Action:

  • Launch a low-friction email newsletter that delivers high-value, niche-specific updates.
  • Incentivize sign-ups with premium extras — e-books, data sheets, or early access content.
  • Build community spaces (Discord, Slack, or forums) where your most engaged readers gather off-platform.

Lesson 2: Package Your Expertise as a Product, Not Just a Publication

Thomson Reuters isn’t just “doing news.” They’re integrating AI into products like CoCounsel, which bakes their proprietary legal and tax content into Microsoft 365 workflows. It’s sticky, high-margin, and hard for competitors to replicate.

Smaller publishers may not have the dev team to roll out enterprise-level AI tools, but the underlying idea applies: turn your content into something your audience uses, not just reads.

Action:

  • Convert your most-requested guides or reports into downloadable templates, toolkits, or training modules.
  • Create a searchable knowledge base for subscribers, updated with new insights monthly.
  • Partner with a lightweight AI platform to offer custom alerts or summaries in your niche.

Turn insights into income.

Don’t just read about what’s possible — start building it now. I’ve put together a free, printable 90-Day Growth Plan for Small Publishers with simple, actionable steps you can follow today to grow your audience and revenue.


Lesson 3: Monetize Your Archives and Protect Your IP

Both the NYT and News Corp are in legal battles over AI scraping, but they’re also cutting deals to license their content. The message is clear: your back catalog is an asset — treat it like one.

For small publishers, this could mean licensing niche datasets, syndicating evergreen content to allied outlets, or even creating curated “best of” packages for corporate training or education markets.

Action:

  • Audit your archive for evergreen, high-demand topics.
  • Explore licensing or syndication deals with industry associations, trade schools, or niche platforms.
  • Add clear terms of use and copyright notices to protect your content from unauthorized scraping.

Lesson 4: Diversify Revenue Beyond Ads

People Inc is replacing declining print dollars with more profitable digital and e-commerce revenue. The Times is making real money from games, cooking, and even video spin-offs of podcasts.

Smaller publishers don’t need a NYT-sized portfolio to diversify. You just need a second or third income stream that aligns with your audience’s interests.

Action:

  • Launch a paid resource library with niche-specific data, tools, or premium reports.
  • Run virtual events, webinars, or training sessions for a fee.
  • Sell targeted sponsorships or native content in newsletters instead of relying solely on display ads.

The Bottom Line

AI disruption is real — and it’s already changing how readers find and consume news. But the big players are showing that with strong brands, direct audience relationships, and smart product diversification, you can turn the threat into an advantage.

For smaller publishers, the scale is different but the playbook is the same:

  • Control your audience pipeline.
  • Turn your expertise into products.
  • Protect and monetize your archives.
  • Don’t bet your survival on one revenue stream.

It’s not about matching the NYT’s resources; it’s about matching their mindset. In the AI era, the publishers who think like product companies — and treat their audience like customers instead of traffic — will be the ones still standing when the algorithms shift again.

Memorable takeaway: In the AI age, resilience isn’t about the size of your newsroom — it’s about the strength of your audience ties and the creativity of your monetization.

Ready to grow? Grab the free, printable 90-Day Growth Plan for Small Publishers and start building your audience and revenue today.