What Hearst’s AI Playbook Can Teach Smaller Newsrooms

I first came across this in the BoSacks newsletter. The original article — Hearst Newspapers leverages AI for a human-centred strategy by Paula Felps at INMA — lays out how Hearst is rolling out AI across its network.

Now, you might be thinking: “That’s great for a chain with San Francisco-based innovation teams and a dozen staffers dedicated to new tools… but what about those of us at smaller or niche outlets that don’t have a DevHub?”

That’s exactly why this is worth paying attention to. Hearst’s approach isn’t just about expensive tech — it’s about structure, guardrails, and culture. Those translate no matter the newsroom size.

Hearst’s AI Guiding Principles

✅ What We Do

  • Embrace generative AI responsibly.
  • Stay aligned with Legal and leadership.
  • Involve newsrooms and journalists across the organization.
  • Create scalable tools that help journalists.
  • Keep humans deeply involved.

🚫 What We Don’t Do

  • Tarnish our brands for quick wins.
  • Mass-publish AI-generated slop.
  • Mislead our audience or avoid transparency.
  • Let bots run without oversight.
  • Do nothing out of fear of change.

Here’s the big picture:

  • Clear principles: They’ve drawn a hard line on what AI will and won’t do. It’s in writing. It’s shared. And everyone’s on the same page.
  • Human-first workflows: Every AI-assisted output gets human review. No shortcuts.
  • Small tools, big wins: Their AI isn’t all moonshots. Some of the biggest gains come from automating grunt work — things every newsroom wrestles with.

Why smaller newsrooms should take notes

  • You might not have a Slack-integrated bot like Hearst’s Producer-P, but you could set up a lightweight GPT workflow for headlines, SEO checks, or quick summaries.
  • You probably can’t scrape and transcribe every public meeting in the state, but you could start with one high-value local board or commission using free/cheap transcription paired with keyword alerts (see the sketch after this list).
  • You might not launch a public-facing Chow Bot, but you could make a reader tool that solves one local pain point — from school board jargon busters to a property tax appeal explainer.
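
For the meeting-coverage idea above, here is a minimal sketch of what “transcription paired with keyword alerts” can look like, assuming the free open-source openai-whisper package; the file name and keyword list are placeholders for your own beat.

```python
# Minimal sketch: transcribe one meeting recording, flag newsworthy keywords.
# Assumes `pip install openai-whisper`; the file name is a placeholder.
import whisper

KEYWORDS = ["budget", "rezoning", "lawsuit", "contract", "resignation"]

model = whisper.load_model("base")  # small model, runs on a laptop
result = model.transcribe("school_board_2025-01-14.mp3")

for segment in result["segments"]:
    if any(word in segment["text"].lower() for word in KEYWORDS):
        # Print a timestamped lead for a reporter to chase down.
        print(f"[{segment['start']:.0f}s] {segment['text'].strip()}")
```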

The secret here isn’t deep pockets — it’s intentional design. Hearst put thought into categories (digital production, news gathering, audience tools), built policies to match, and then trained their people. That part costs time, not millions.

As Tim O’Rourke of Hearst put it:

“We try to build around the expertise in our local newsrooms. That’s our value — not the tech.”

For smaller outlets, that’s the blueprint. Start with what you do best. Add AI where it can actually save time or uncover new reporting angles. Keep your humans in control. And make sure your audience always knows you value accuracy over speed.


Quick wins for small newsrooms

  • Write your own “What We Do / What We Don’t Do” AI policy in plain language.
  • Pick one workflow bottleneck and pilot an AI tool to tackle it.
  • Build an internal “AI tips” Slack channel or email chain to share wins and lessons.

You don’t need a DevHub to start. You just need a plan — and maybe the courage to experiment without losing sight of your values.

The Generative AI Paradox: How Small Businesses Can Win Without Tech Giant Budgets

There’s a fascinating New York Times article making the rounds about the “generative AI paradox” — the fact that corporate spending on AI is exploding, but the bottom-line payoff just isn’t there yet.

Big players like Microsoft, Amazon, and Google are raking in AI profits. Nvidia is selling chips like hotcakes. But for most companies, especially those outside the tech sector, AI is still in the “lots of money in, not much money out” stage. McKinsey says 80% of companies are using generative AI, but nearly the same number report no significant financial impact. That’s… sobering.

So if you’re a smaller firm, without billion-dollar budgets or a 60,000-person tech staff, how do you even begin to make AI work for you?

1. Think in Terms of Targeted Wins, Not Total Transformation

One big takeaway from the NYT piece is that the small wins are the ones that stick.

  • USAA uses AI to assist (not replace) call center staff.
  • Johnson Controls trims 10–15 minutes from repair jobs.
  • JPMorgan automates report drafting and data retrieval.

Notice what’s missing? Nobody’s saying, “AI replaced half our workforce and tripled our profits overnight.” These are micro-efficiencies that add up — and they’re exactly where smaller firms should focus.

For you, that might mean:

  • An AI tool that drafts first-pass proposals or reports.
  • Customer service chatbots that handle basic queries before a human steps in.
  • AI-powered search across your internal documents so your team stops reinventing the wheel (a bare-bones sketch follows this list).
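
On that last point, internal search doesn’t have to mean an enterprise platform. Here is a bare-bones sketch using scikit-learn’s TF-IDF vectorizer; the documents and query are invented, and a real version would index your actual files.

```python
# Bare-bones internal document search: TF-IDF plus cosine similarity.
# The document strings are invented; in practice you'd load real files.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "q3_proposal.txt": "Draft proposal for the Q3 marketing campaign...",
    "onboarding.txt": "Steps for onboarding a new client account...",
    "pricing_2024.txt": "Internal pricing guidelines for 2024 renewals...",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs.values())

def search(query, top_n=3):
    scores = cosine_similarity(vectorizer.transform([query]), matrix).ravel()
    return sorted(zip(docs, scores), key=lambda p: p[1], reverse=True)[:top_n]

print(search("how do we price renewals?"))
```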

2. Use the 80/20 Rule for Pilots

Here’s the trap smaller firms can fall into: thinking AI requires a big, complex rollout. It doesn’t. Start with one process where 80% of the work is repetitive and rules-based. Then, find an AI tool to chip away at that 80% — leaving your team to focus on the 20% that requires human judgment.

The NYT piece points out that 42% of AI pilot projects were abandoned last year. That’s not failure — it’s course correction. Shut down what’s not working quickly, but keep the lessons learned. Small firms can be nimbler than giants here, turning failed pilots into better second attempts.

3. Borrow the Big Guys’ Guardrails Without Their Bureaucracy

JPMorgan locked down security and data governance before expanding AI to 200,000 employees. Smaller firms can’t afford that scale, but you can still:

  • Keep sensitive data off public AI tools.
  • Choose platforms with strong privacy controls.
  • Train your team on what AI shouldn’t touch just as much as what it should.

The less you have to unwind later, the faster you can scale what works.

4. Invest in People Before Platforms

One overlooked point in the NYT reporting: many AI failures come from “human factors” — employee pushback, lack of skills, customer distrust. For a small business, a $50,000 AI investment can become a sunk cost within six months if the team doesn’t adopt it.

Sometimes the best first “AI spend” isn’t the tool — it’s the training. Even a few hours of workshops on prompt writing, tool selection, and workflow integration can double your ROI.


The Bottom Line
Small firms have a hidden advantage: agility. You’re not burdened by legacy systems, sprawling compliance departments, or shareholder expectations of quarterly AI miracles. You can try small, learn fast, and scale the wins that fit your business.

The generative AI paradox isn’t a reason to avoid AI — it’s a reason to approach it with focus and discipline. And in five years, when the tech giants are still “optimizing their AI stack,” you might already have a handful of well-oiled AI processes quietly making you more efficient every day.

Memorable Takeaway:
You don’t need a billion-dollar AI budget — you need a billion-dollar habit of finding and scaling the little wins.

What Small Publishers Can Learn from the Big Four’s AI-Defying Quarter

If you’ve been following the headlines, you might think AI is poised to hollow out the news business — stealing traffic, scraping archives, and churning out synthetic stories that compete with the real thing. And yet, four of America’s largest news organizations — Thomson Reuters, News Corp, People Inc (formerly Dotdash Meredith), and The New York Times — just turned in a combined $5 billion in quarterly revenue and nearly $1.2 billion in profit.

I first came across this coverage in the BoSacks newsletter, which linked to Press Gazette’s original report. The piece details how these companies aren’t just surviving in the AI era; they’re quietly reshaping their models to make it work for them. From AI-powered professional tools to content licensing deals with OpenAI, Amazon, and Meta, they’re finding ways to monetize their content and expand audience engagement — even as Google’s AI-driven search starts serving answers instead of links.

For smaller, niche publishers, the temptation is to shrug this off. “Sure, it’s easy when you have a billion-dollar brand and a legal department the size of my entire staff.” But there’s a lot here that is portable — if you focus on the right pieces.


Lesson 1: Own Your Audience Before AI Owns Your Traffic

One of the clearest takeaways from the big four is how much they’re investing in direct audience relationships. The New York Times hit 11.88 million subscribers, People Inc launched a dedicated app, and even News Corp’s Dow Jones division keeps climbing on digital subscriptions.

For small publishers, this means it’s time to stop over-relying on algorithmic referrals. If you’re still counting on Facebook, Google, or Apple News as your main discovery channels, you’re building on borrowed land.

Action:

  • Launch a low-friction email newsletter that delivers high-value, niche-specific updates.
  • Incentivize sign-ups with premium extras — e-books, data sheets, or early access content.
  • Build community spaces (Discord, Slack, or forums) where your most engaged readers gather off-platform.

Lesson 2: Package Your Expertise as a Product, Not Just a Publication

Thomson Reuters isn’t just “doing news.” They’re integrating AI into products like CoCounsel, which bakes their proprietary legal and tax content into Microsoft 365 workflows. It’s sticky, high-margin, and hard for competitors to replicate.

Smaller publishers may not have the dev team to roll out enterprise-level AI tools, but the underlying idea applies: turn your content into something your audience uses, not just reads.

Action:

  • Convert your most-requested guides or reports into downloadable templates, toolkits, or training modules.
  • Create a searchable knowledge base for subscribers, updated with new insights monthly.
  • Partner with a lightweight AI platform to offer custom alerts or summaries in your niche.

Turn insights into income.

Don’t just read about what’s possible — start building it now. I’ve put together a free, printable 90-Day Growth Plan for Small Publishers with simple, actionable steps you can follow today to grow your audience and revenue.


Lesson 3: Monetize Your Archives and Protect Your IP

Both the NYT and News Corp are in legal battles over AI scraping, but they’re also cutting deals to license their content. The message is clear: your back catalog is an asset — treat it like one.

For small publishers, this could mean licensing niche datasets, syndicating evergreen content to allied outlets, or even creating curated “best of” packages for corporate training or education markets.

Action:

  • Audit your archive for evergreen, high-demand topics.
  • Explore licensing or syndication deals with industry associations, trade schools, or niche platforms.
  • Add clear terms of use and copyright notices to protect your content from unauthorized scraping.

Lesson 4: Diversify Revenue Beyond Ads

People Inc is replacing declining print dollars with more profitable digital and e-commerce revenue. The Times is making real money from games, cooking, and even video spin-offs of podcasts.

Smaller publishers don’t need a NYT-sized portfolio to diversify. You just need a second or third income stream that aligns with your audience’s interests.

Action:

  • Launch a paid resource library with niche-specific data, tools, or premium reports.
  • Run virtual events, webinars, or training sessions for a fee.
  • Sell targeted sponsorships or native content in newsletters instead of relying solely on display ads.

The Bottom Line

AI disruption is real — and it’s already changing how readers find and consume news. But the big players are showing that with strong brands, direct audience relationships, and smart product diversification, you can turn the threat into an advantage.

For smaller publishers, the scale is different but the playbook is the same:

  • Control your audience pipeline.
  • Turn your expertise into products.
  • Protect and monetize your archives.
  • Don’t bet your survival on one revenue stream.

It’s not about matching the NYT’s resources; it’s about matching their mindset. In the AI era, the publishers who think like product companies — and treat their audience like customers instead of traffic — will be the ones still standing when the algorithms shift again.

Memorable takeaway: In the AI age, resilience isn’t about the size of your newsroom — it’s about the strength of your audience ties and the creativity of your monetization.

Ready to grow? Grab the free, printable 90-Day Growth Plan for Small Publishers and start building your audience and revenue today.

How Smaller Companies Can Start Using AI—Real Lessons Inspired by Shopify

When Shopify’s CEO Tobi Lütke encouraged everyone at his company to make AI tools a natural part of their work, it wasn’t a sudden shift. It came after years of careful preparation—building the right culture, legal framework, and infrastructure. While smaller companies don’t have Shopify’s scale or budget, they can still learn a lot from how Shopify approached AI adoption and adapt it to their own realities.

Start by assembling a cross-functional pilot team of five to seven people—a sales rep, someone from customer support, perhaps an engineer or operations lead. Give this group a modest budget, around $5,000, and 30 days to demonstrate whether AI can help solve real problems. Set clear goals upfront: maybe cut the time it takes to respond to customer emails by 20%, automate parts of sales prospect research to save two hours a week, or reduce repetitive manual data entry in operations by 30%. This focus helps avoid chasing shiny tools without a real payoff.

You don’t need to build your own AI platform or hire data scientists to get started. Many cloud AI services today offer pay-as-you-go pricing, so you can experiment without huge upfront investments. For example, a small customer support team might subscribe to ChatGPT for a few hundred dollars a month and connect it to their helpdesk software to draft faster, more personalized email replies. A sales team could create simple automations with no-code tools like Zapier that pull prospect data from LinkedIn, run it through an AI to generate email drafts, and send them automatically. These kinds of workflows often take less than a week to set up and can improve efficiency by 30% or more.
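
To make the helpdesk example concrete, here is one shape that draft-reply step might take with the official openai Python client. The model name, system prompt, and ticket text are illustrative assumptions, not a recommendation; any hosted LLM with an API would work the same way.

```python
# Sketch of a draft-first-reply step for one support ticket.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name and ticket text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ticket = "Customer says their invoice shows a duplicate charge for March."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Draft a polite, concise support reply. Never promise "
                    "refunds; flag billing disputes for human review."},
        {"role": "user", "content": ticket},
    ],
)

print(response.choices[0].message.content)  # a human edits before sending
```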

As you experiment, keep a close eye on costs. API calls add up quickly, and a small team making thousands of requests each month might see unexpected bills over $1,000 if you’re not careful. Make sure to monitor usage and set sensible limits during your pilot.
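
One lightweight way to enforce that is a hard budget guard around every call. A toy sketch follows; the per-token prices are placeholders you would replace with your provider’s current rates.

```python
# Toy budget guard for a pilot: estimate spend per call, stop at a cap.
# Prices are placeholders; check your provider's current per-token rates.
PRICE_PER_1K_INPUT = 0.00015   # illustrative USD rate
PRICE_PER_1K_OUTPUT = 0.0006   # illustrative USD rate
MONTHLY_CAP_USD = 200.00

spent_usd = 0.0

def record_usage(input_tokens: int, output_tokens: int) -> None:
    global spent_usd
    spent_usd += (input_tokens / 1000) * PRICE_PER_1K_INPUT
    spent_usd += (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    if spent_usd > MONTHLY_CAP_USD:
        raise RuntimeError(f"Pilot budget exceeded: ${spent_usd:.2f}")

record_usage(input_tokens=1200, output_tokens=450)
print(f"Estimated spend so far: ${spent_usd:.4f}")
```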

Using AI responsibly means setting some basic ground rules early. Include someone from legal or compliance in your pilot to help create simple guidelines. For instance, never feed sensitive or personally identifiable customer information into AI tools unless it’s properly masked or anonymized. Also, require human review of AI-generated responses before sending them out, at least during your early adoption phase. This “human-in-the-loop” approach catches errors and builds trust.
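
As a crude illustration of the masking rule, a regex pass like the one below can strip the most obvious identifiers before text leaves your systems. Patterns like these are a floor, not a guarantee, and the example contact details are invented.

```python
# Crude PII masking before text is sent to a third-party AI tool.
# Regexes catch only obvious patterns; treat this as a floor, not a guarantee.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(mask_pii("Reach Jane at jane.doe@example.com or 555-867-5309."))
```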

Training people to use AI effectively is just as important as the tools themselves. Instead of long, formal classes, offer hands-on workshops where your teams can try AI tools on their real daily tasks. Encourage everyone to share what worked and what didn’t, and identify “AI champions” who can help their teammates navigate challenges. When managers and leaders openly use AI themselves and discuss its benefits and limitations, it sets a powerful example that using AI is part of how work happens now.

Tracking results doesn’t require fancy analytics. A simple Google Sheet updated weekly can track how many AI requests team members make, estimate time saved on tasks, and note changes in customer satisfaction. If the pilot isn’t delivering on its goals after 30 days, pause and rethink before expanding.

Keep in mind common pitfalls along the way. Don’t rush to automate complex workflows without testing—early AI outputs can be inaccurate or biased. Don’t assume AI will replace deep expertise; it’s a tool to augment human judgment, not a substitute. And don’t overlook data privacy—sending customer information to third-party AI providers without proper agreements can lead to compliance headaches.

Shopify’s success came from building trust with their legal teams, investing in infrastructure that made AI accessible, and carefully measuring how AI use related to better work outcomes. While smaller companies might not create internal AI proxies or sophisticated dashboards, they can still embrace that spirit: enable access, encourage experimentation, and measure what matters.

By starting with a focused pilot, using affordable tools, setting simple but clear usage rules, training through hands-on practice, and watching the results carefully, your company can unlock AI’s potential without unnecessary risk or wasted effort.

Systems Thinking & the Bitter Lesson: Building Adaptable AI Workflows

In “Learning the Bitter Lesson,” Lance Martin reminds us that in AI—and really in any complex system—the simplest, most flexible designs often win out over time. As a systems thinker, I can’t help but see this as more than just an AI engineering memo; it’s a blueprint for how we build resilient, adaptable organizations and workflows.


Why Less Structure Feels Paradoxically More Robust
I remember the first time we tried to optimize our team’s editorial pipeline. We had checklists, rigid approval stages, and dozens of micro-processes—each put in place with good intentions. Yet every time our underlying software or staffing shifted, the whole thing groaned under its own weight. It felt eerily similar to Martin’s early “orchestrator-worker” setup: clever on paper, but brittle when real-world conditions changed.

Martin’s shift—from hardcoded workflows to multi-agent systems, and finally to a “gather context, then write in one shot” approach—mirrors exactly what many of us have lived through. You add structure because you need it: constrained compute, unreliable tools, or just the desire for predictability. Then, slowly, that structure calcifies into a bottleneck. As tool-calling got more reliable and context windows expanded, his pipeline’s parallelism became a liability. The cure? Remove the scaffolding.


Seeing the Forest Through the Trees
Here’s the systems-thinking nugget: every piece of scaffolding you bolt onto a process is a bet on the current state of your environment. When you assume tool-calling will be flaky, you build manual checks; when you assume parallelism is the fastest path, you partition tasks. But every bet has an expiration date. The real power comes from designing systems whose assumptions you can peel away like old wallpaper, rather than being forced to rip out the entire house.

In practical terms, that means:

  1. Mapping Your Assumptions: List out “why does this exist?” for every major component. Is it there because we needed it six months ago, or because we still need it today?
  2. Modular “Kill Switches”: Build in feature flags or toggles that let you disable old components without massive rewrites. If your confidence in a new tool goes up, you should be able to flip a switch and remove the old guardrails (see the sketch after this list).
  3. Feedback Loops Over Checklists: Instead of imagining every exception, focus on rapid feedback. Let the system fail fast, learn, and self-correct, rather than trying to anticipate every edge case.
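
To ground the “kill switch” idea, here is a toy feature-flag sketch. The flag name and stub functions are invented; the point is that a legacy guardrail comes out via configuration, not a rewrite.

```python
# Toy "kill switch": a legacy guardrail that can be disabled via config.
# The flag name and stub functions are invented for illustration.
import os

FLAGS = {
    # Flip to "false" once the one-shot generator has earned your trust.
    "legacy_review_step": os.getenv("LEGACY_REVIEW_STEP", "true") == "true",
}

def one_shot_generate(context: str) -> str:
    return f"Report drafted from: {context}"  # stand-in for an LLM call

def legacy_review(draft: str) -> str:
    return draft + " [checked by legacy rules]"  # stand-in for manual checks

def build_report(context: str) -> str:
    draft = one_shot_generate(context)
    if FLAGS["legacy_review_step"]:
        draft = legacy_review(draft)
    return draft

print(build_report("weekly metrics"))
```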

From Code to Culture
At some point, this philosophy goes beyond architecture diagrams and hits your team culture. When we start asking, “What can we remove today?” we encourage experimentation. We signal that it’s OK to replace yesterday’s best practice with today’s innovation. And maybe most importantly, we break the inertia that says “if it ain’t broke, don’t fix it.” Because in a world where model capabilities double every few months, “not broken” is just the lull before old code bites you in production.


Your Next Steps

  • Inventory Your Bottlenecks: Take ten minutes tomorrow to jot down areas where your team or tech feels sluggish. Are any of those due to legacy workarounds?
  • Prototype the “One-Shot” Mindset: Pick a small project—maybe a weekly report or simple dashboard—and see if you can move from multi-step pipelines to single-pass generation.
  • Celebrate the Removals: Host a mini “structure cleanup” retro. Reward anyone who finds and dismantles an outdated process or piece of code.

When you peel back the layers, “Learning the Bitter Lesson” isn’t just about neural nets and giant GPUs—it’s about embracing change as the only constant. By thinking in systems, you’ll recognize that the paths we carve today must remain flexible for tomorrow’s terrain. And in that flexibility lies true resilience.

If you’d like to dive deeper into the original ideas, I encourage you to check out Learning the Bitter Lesson by Lance Martin—an essential read for anyone building the next generation of AI-driven systems.

Ride the AI Wave: Strategic Integration Over Litigation

A forward-looking strategic view, rooted in Bo Sacks’ facts

In his newsletter BoSacks Speaks Out: Notes from the Algorithmic Frontline, veteran editor Bo Sacks lays out a stark reality: AI has already ingested decades of Pulitzer-winning journalism without compensation; Judge Alsup’s ruling against Anthropic offers only a narrow copyright reprieve; Getty Images is pioneering revenue-sharing for AI-trained image datasets; and niche print titles like Monocle, Air Mail, and Delayed Gratification thrive even as legacy printers and binderies collapse. These are the hard facts on the ground.

These facts point to a stark choice: fight the tide or ride it. Relentlessly suing OpenAI or Anthropic over scraped archives may score headlines, but it won’t keep pace with machine learning’s breakneck advance—and it diverts precious resources from innovation. Instead, forward-thinking publishers should turn Bo Sacks’ own evidence into a blueprint for growth:


1. Automate & Accelerate

  • Archive Mining: Apply AI to sift your own backfiles—precisely the content under dispute—to surface timeless stories worth republishing or expanding.
  • Bite-Sized Briefs: Convert long features into “5-minute reads” or multimedia snippets for mobile audiences, mirroring slow-print curation but optimized for screens.

2. Elevate Craft with AI

  • Instant Fact-Checks: Use AI assistants that cross-verify claims on the fly, speeding up verification without sacrificing accuracy.
  • Rapid Design Mockups: Integrate AI-powered layout previews to iterate cover and spread designs in minutes, recapturing the precision Bo Sacks mourns in lost binderies.

3. Data-Informed Revenue

  • Smart Pricing: Leverage real-time engagement signals to adjust sponsorship and ad rates dynamically—echoing Getty’s revenue-share ethos but tailored to your audience.
  • Segmented Offers: Use simple clustering techniques to distinguish your premium-print devotees from casual readers, then craft subscription tiers and perks that drive loyalty and lifetime value (a toy sketch follows this list).
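
For a sense of how simple “simple clustering” can be, here is a toy k-means pass over invented engagement features; in practice you would export real numbers from your subscription and analytics systems.

```python
# Toy reader segmentation with k-means. The feature rows are invented;
# in practice you'd export real engagement data per subscriber.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: issues read per month, minutes on site, print renewals.
readers = np.array([
    [12, 340, 3],   # heavy print devotee
    [10, 290, 2],
    [2,  25,  0],   # casual drop-in
    [1,  10,  0],
    [6,  120, 1],   # middle tier
    [5,  90,  1],
])

features = StandardScaler().fit_transform(readers)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)  # segment ids used to drive tiered offers
```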

Why this matters: The tools Bo Sacks warns are “already at home” in our archives have upended every stage of publishing—from discovery and design to distribution and monetization. Legal victories may buy time, but strategic integration of AI buys relevance. By running small pilots, measuring impact on both costs and engagement, and retiring manual processes that no longer move the needle, publishers can turn today’s adversary into tomorrow’s catalyst—and deliver the richer, more personalized journalism readers are hungry for.

LLMs Today: What’s Really New, and What’s Just Polished?

If you follow AI, you know the story: every few months, a new language model drops with more parameters and splashier headlines. But as Sebastian Raschka highlights in “The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design,” the biggest lesson from this new wave of open-source LLMs is how much has not fundamentally changed. Underneath it all, the progress is less about radical reinvention and more about clever architectural tweaks—optimizing memory, attention, and stability to make bigger, faster, and more efficient models.

At the core, the 2017 transformer blueprint is still powering everything. What’s new? A handful of impactful upgrades:

  • Smarter attention (like Multi-Head Latent Attention and Grouped-Query Attention) slashes memory requirements.
  • Mixture-of-Experts (MoE) lets trillion-parameter giants run without melting your GPU by activating only a fraction of the network at a time.
  • Sliding window attention makes long contexts feasible without hogging resources.
  • Normalization tricks (RMSNorm, Post-Norm, etc.) are now essential for training stability at scale (RMSNorm is sketched just below).
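
To show how modest these “essential” tweaks are in code, here is RMSNorm in a few lines of PyTorch, following the standard published formulation rather than any particular model’s implementation.

```python
# RMSNorm: scale activations by their root-mean-square with a learned gain.
# No mean subtraction, unlike classic LayerNorm; cheaper and just as stable.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned per-dim gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

x = torch.randn(2, 8, 16)    # (batch, sequence, hidden)
print(RMSNorm(16)(x).shape)  # torch.Size([2, 8, 16])
```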

Today’s best open models—DeepSeek, Kimi, Llama 4, Gemma, OLMo 2, Qwen3—are all remixing these tools. The differences are in the fine print, not the fundamentals.

But what about OpenAI’s GPT-4/4o or Anthropic’s Claude 3.5? While the specifics are secret, it’s a safe bet their architectures look similar: transformer backbone, MoE scaling, memory-efficient attention, plus their own proprietary speed and safety hacks. Their big edge is polish, robust APIs, multimodal support, and extra safety layers—perfect if you need instant results and strong guardrails.

So, which should you pick?

  • Want transparency, customization, or on-prem deployment? Open models like OLMo 2, Qwen3, or Gemma 3 have you covered.
  • Building for research or scale (and have massive compute)? Try DeepSeek or Kimi K2.
  • Need to serve millions, fast? Lighter models like Mistral Small or Gemma 3n are your friend.
  • If you want the “it just works” experience with best-in-class safety and features, OpenAI and Anthropic are still top choices—just expect less control and no deep customization.

In the end, all the excitement is really about optimization, not paradigm shifts. Progress now means making LLMs faster, more stable, and easier to use. Or as Raschka puts it: “Despite all the tweaks and headline-grabbing parameters, we’re still standing on the same transformer foundation — progress comes from tuning the architecture, not tearing it down.”

If you want the deep technical dive, read Raschka’s full “The Big LLM Architecture Comparison.” Otherwise, just remember: the transformer era isn’t over—it’s just getting a whole lot more interesting.

In Defense of Sharing AI Output: Why “AI Slop” Isn’t the End of Meaningful Communication

Rethinking proof-of-thought, noise, and the upside of a more open AI culture.


Is sharing ChatGPT output really so rude?
A recent essay compares AI-generated text to a kind of digital pollution—a “virus” that wastes human attention and diminishes the value of communication. The author proposes strict AI etiquette: never share machine output unless you fully adopt it as your own or have explicit consent from the recipient.

It’s a provocative take, inspired by Peter Watts’ Blindsight, and it raises important questions about authenticity, value, and digital trust. But does it go too far? Is all AI-generated text “slop”? Is every forward or paste a violation of etiquette?

Let’s consider another perspective—one that recognizes the risks but also sees the immense value and potential of a world where AI-generated output is more freely shared.

“Proof-of-Thought” Was Always a Mirage

The essay’s nostalgia for a lost era of “proof-of-thought” is understandable. But let’s be honest: not every piece of human writing was ever insightful, intentional, or even useful. Spam, boilerplate, PR releases, and perfunctory office emails have existed for decades—long before AI.
Authenticity and attention have always required discernment, not just faith in the medium.

AI may have made text cheap, but it has also made ideas more accessible and the barriers to entry lower. That’s not a bug—it’s a feature.

Sharing AI Output: Consent, Context, and Creativity

Of course, etiquette matters. But to frame sharing AI text as inherently rude or even hostile misses some crucial points:

  • AI output can be informative, creative, and valuable in its raw form. Sometimes a bot’s phrasing or approach offers a new angle, and sharing that output can accelerate understanding, brainstorming, or problem-solving.
  • Explicit adoption isn’t always practical. If I ask ChatGPT to summarize a dense technical paper or translate a snippet of code, sometimes the fastest, most honest way to help a friend or colleague is to share that result directly—with attribution.
  • Consent can be implicit in many contexts. In tech, research, and online forums, sharing logs, code snippets, or even entire AI chats is often expected and welcomed—especially when transparency and reproducibility are important.

The Upside of “AI Slop”: Accessibility, Efficiency, and Learning

What the “anti-slop” argument underplays is just how much AI has democratized expertise and lowered the cost of curiosity:

  • Non-native speakers can get better drafts or translations instantly.
  • Students and self-learners can access tailored explanations without waiting for a human expert.
  • Developers and researchers can rapidly prototype, debug, and collaborate with a global community, often using AI-generated code or documentation as a starting point.

Yes, there’s more noise. But there’s also far more signal for many people who were previously shut out of certain conversations.

Trust and Transparency, Not Gatekeeping

Rather than discouraging the sharing of AI output, we should focus on transparency. Label AI-generated text clearly. Foster norms where context—why, how, and for whom AI was used—is always provided. Give people the choice and the tools to ignore or engage as they see fit.

Blanket prohibitions or shame about sharing AI content risk re-erecting barriers we’ve only just started to dismantle.

Questions for the Future

  • How do we build systems that help us filter valuable AI output from true “slop”?
  • What new forms of collaborative authorship—human + AI—will emerge, and how do we credit them?
  • How can we leverage AI to reduce noise, not just add to it?

A Call for a More Open, Nuanced AI Etiquette

AI is here to stay, and its output will only become more sophisticated and pervasive. The solution isn’t to retreat or treat all shared AI text as digital poison. It’s to develop a culture of honesty, clarity, and context—so that AI can amplify, rather than degrade, our collective intelligence.

So yes: share your ChatGPT output—just tell me where it came from. Let’s make etiquette about agency, not anxiety.

Balancing Technology and Humanity: A Guide to Purposeful Growth in the AI Era

Some content on this blog is developed with AI assistance tools like Claude.

In an age where technological advancement accelerates by the day, many of us find ourselves at a crossroads: How do we embrace innovation while staying true to our core values? How can we leverage new tools without losing sight of what makes our work—and our lives—meaningful? This question has become particularly pressing as artificial intelligence transforms industries, reshapes professional landscapes, and challenges our understanding of creativity and productivity.

Drawing from insights across multiple disciplines, this guide offers a framework for navigating this complex terrain. Whether you’re a professional looking to stay relevant, a content creator seeking to stand out, or simply someone trying to make sense of rapid change, these principles can help you chart a course toward purposeful growth that balances technological adoption with human connection and impact.

Understanding Technology as an Enhancement Tool

The narrative around technology—particularly AI—often centers on replacement and obsolescence. Headlines warn of jobs being automated away, creative work being devalued, and human skills becoming redundant. But this perspective misses a crucial insight: technology’s greatest value comes not from replacing human effort but from enhancing it.

From Replacement to Augmentation

Consider the case of Rust Communications, a media company that found success not by replacing journalists with AI but by using AI to enhance their work. By training their systems on historical archives, they gave reporters access to deeper context and institutional knowledge, allowing them to focus on what humans do best: asking insightful questions, building relationships with sources, and applying critical judgment to complex situations.

This illustrates a fundamental shift in thinking: rather than asking, “What can technology do instead of me?” the more productive question becomes, “How can technology help me do what I do, better?”

Practical Implementation

This mindset shift opens up possibilities across virtually any field:

  • In creative work: AI tools can handle routine formatting tasks, suggest alternative approaches, or help identify blind spots in your thinking—freeing you to focus on the aspects that require human judgment and creative vision.
  • In knowledge work: Automated systems can gather and organize information, track patterns in data, or draft preliminary analyses—allowing you to devote more time to synthesis, strategy, and communication.
  • In service roles: Digital tools can manage scheduling, documentation, and routine follow-ups—creating more space for meaningful human interaction and personalized attention.

The key is to approach technology adoption strategically, identifying specific pain points or constraints in your work and targeting those areas for enhancement. This requires an honest assessment of where you currently spend time on low-value tasks that could be automated or augmented, and where your unique human capabilities add the most distinctive value.

Developing a Strategic Approach to Content and Information

As content proliferates and attention becomes increasingly scarce, thoughtful approaches to creating, organizing, and consuming information become critical professional and personal skills.

The Power of Diversification

Data from publishing industries shows a clear trend: organizations that diversify their content strategies across formats and channels consistently outperform those that remain narrowly focused. This doesn’t mean pursuing every platform or trend, but rather thoughtfully expanding your approach based on audience needs and behavior.

Consider developing a content ecosystem that might include:

  • Long-form written content for depth and authority
  • Audio formats like podcasts for accessibility and multitasking audiences
  • Visual elements that explain complex concepts efficiently
  • Interactive components that increase engagement and retention

The goal isn’t to create more content but to create more effective content by matching format to function and audience preference.

The Value of Structure and Categorization

As AI systems play an increasingly important role in content discovery and recommendation, structure becomes as important as substance. Content that is meticulously categorized, clearly structured, and designed with specific audience segments in mind will receive preferential treatment in algorithmic ecosystems.

Practical steps include:

  • Developing consistent taxonomies for your content or knowledge base
  • Creating clear information hierarchies that signal importance and relationships
  • Tagging content with relevant metadata that helps systems understand context and relevance (an example record follows this list)
  • Structuring information to appeal to specific audience segments with definable characteristics
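
To make that concrete, here is what a minimal metadata record might look like; the fields and values are invented placeholders, and the real win is applying whatever scheme you choose consistently.

```python
# An invented, minimal metadata record for one article. The exact fields
# matter less than applying them consistently across the whole archive.
article_meta = {
    "title": "Property Tax Appeals, Explained",
    "section": "local-government",          # from a fixed taxonomy
    "topics": ["property-tax", "appeals"],  # controlled tag vocabulary
    "audience": "homeowners",               # defined audience segment
    "evergreen": True,                      # safe to resurface later
    "updated": "2025-01-14",
}

# Guard against taxonomy drift as the archive grows.
assert article_meta["section"] in {"local-government", "schools", "business"}
```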

This approach benefits not only discovery but also your own content development process, as it forces clarity about purpose, audience, and context.

Cultivating Media Literacy

The flip side of strategic content creation is purposeful consumption. As information sources multiply and traditional gatekeepers lose influence, the ability to evaluate sources critically becomes an essential skill.

Developing robust media literacy involves:

  • Diversifying news sources across political perspectives and business models
  • Understanding the business models that fund different information sources
  • Recognizing common patterns of misinformation and distortion
  • Supporting quality journalism through subscriptions and engagement

This isn’t merely about avoiding misinformation; it’s about developing a rich, nuanced understanding of complex issues by integrating multiple perspectives and evaluating claims against evidence.

Leveraging Data and Knowledge Assets

Every individual and organization possesses unique information assets—whether formal archives, institutional knowledge, or accumulated experience. In a knowledge economy, the ability to identify, organize, and leverage these assets becomes a significant competitive advantage.

Mining Archives for Value

Hidden within many organizations are treasure troves of historical data, past projects, customer interactions, and institutional knowledge. These archives often contain valuable insights that can inform current work, reveal patterns, and provide context that newer team members lack.

The process of activating these assets typically involves:

  1. Systematic assessment of what historical information exists
  2. Strategic digitization of the most valuable materials
  3. Thoughtful organization using consistent metadata and taxonomies
  4. Integration with current workflows through searchable databases or knowledge management systems

Individual professionals can apply similar principles to personal knowledge management, creating systems that help you retain and leverage your accumulated learning and experience.

Building First-Party Data Systems

Organizations that collect, analyze, and apply proprietary data consistently outperform those that rely primarily on third-party information. This “first-party data”—information gathered directly from your audience, customers, or operations—provides unique insights that can drive decisions, improve offerings, and create additional value through partnerships.

Effective first-party data strategies include:

  • Identifying the most valuable data points for your specific context
  • Creating ethical collection mechanisms that provide clear value exchanges
  • Developing analysis capabilities that transform raw data into actionable insights
  • Establishing governance frameworks that protect privacy while enabling innovation

Even individuals and small teams can benefit from systematic approaches to gathering feedback, tracking results, and analyzing patterns in their work and audience responses.

Applying Sophisticated Decision Models

As data accumulates and contexts become more complex, simple intuitive decision-making often proves inadequate. More sophisticated approaches—like stochastic models that account for uncertainty and variation—can significantly improve outcomes in situations ranging from financial planning to product development.

While the mathematical details of these models can be complex, the underlying principles are accessible:

  • Embrace probability rather than certainty in your planning
  • Consider multiple potential scenarios rather than single forecasts
  • Account for both expected outcomes and variations
  • Test decisions against potential extremes, not just average cases (a toy simulation follows this list)
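
As a toy illustration of that mindset, here is a tiny Monte Carlo simulation in pure Python. Every number is invented; the point is the shape of the exercise, a distribution of outcomes rather than a single forecast.

```python
# Toy Monte Carlo revenue planning. All numbers are invented; the point is
# reasoning over a distribution of outcomes instead of one forecast.
import random
import statistics

def simulate_year() -> float:
    subscribers = max(random.gauss(5_000, 1_500), 0)  # uncertain demand
    churn = random.uniform(0.05, 0.30)                # plausible churn range
    price = 59.0                                      # annual price, USD
    return subscribers * (1 - churn) * price

outcomes = sorted(simulate_year() for _ in range(10_000))

print(f"median revenue:  ${statistics.median(outcomes):,.0f}")
print(f"5th percentile:  ${outcomes[len(outcomes) // 20]:,.0f}")   # a bad year
print(f"95th percentile: ${outcomes[-len(outcomes) // 20]:,.0f}")  # a good year
```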

These approaches help create more robust strategies that can weather unpredictable events and capture upside potential while mitigating downside risks.

Aligning Work with Purpose and Community

Perhaps the most important theme emerging from diverse sources is the centrality of purpose and community connection to sustainable success and satisfaction. Technical skills and technological adoption matter, but their ultimate value depends on how they connect to human needs and values.

Balancing Innovation and Values

Organizations and individuals that successfully navigate technological change typically maintain a clear sense of core values and purpose while evolving their methods and tools. This doesn’t mean resisting change, but rather ensuring that technological adoption serves rather than subverts fundamental principles.

The process involves regular reflection on questions like:

  • How does this new approach or tool align with our core values?
  • Does this innovation strengthen or weaken our connections to the communities we serve?
  • Are we adopting technology to enhance our core purpose or simply because it’s available?
  • What guardrails need to be in place to ensure technology serves our values?

This reflective process helps prevent the common pattern where means (technology, metrics, processes) gradually displace ends (purpose, impact, values) as the focus of attention and decision-making.

Assessing Growth for Impact

A similar rebalancing applies to personal and professional development. The self-improvement industry often promotes growth for its own sake—more skills, more productivity, more achievement. But research consistently shows that lasting satisfaction comes from growth that connects to something beyond self-interest.

Consider periodically auditing your development activities with questions like:

  • Is this growth path helping me contribute more effectively to others?
  • Am I developing capabilities that address meaningful needs in my community or field?
  • Does my definition of success include positive impact beyond personal achievement?
  • Are my learning priorities aligned with the problems that most need solving?

This doesn’t mean abandoning personal ambition or achievement, but rather connecting it to broader purpose and impact.

Starting Local

While global problems often command the most attention, the most sustainable impact typically starts close to home. The philosopher Mòzǐ captured this principle simply: “Does it benefit people? Then do it. Does it not benefit people? Then stop.”

This local focus might involve:

  • Supporting immediate family and community needs through childcare, elder support, or neighborhood organization
  • Applying professional skills to local challenges through pro bono work or community involvement
  • Building relationships that strengthen social fabric in your immediate environment
  • Creating local systems that reduce dependency on distant supply chains

These local connections not only create immediate benefit but also build the relationships and resilience that sustain longer-term efforts and larger-scale impact.

Integrating Technology and Humanity: A Balanced Path Forward

The themes explored above converge on a central insight: the most successful approaches to contemporary challenges integrate technological capability with human purpose. Neither luddite resistance nor uncritical techno-optimism serves us well; instead, we need thoughtful integration that leverages technology’s power while preserving human agency and values.

This balanced approach involves:

  1. Selective adoption of technologies that enhance your distinctive capabilities
  2. Strategic organization of content and knowledge to leverage both human and machine intelligence
  3. Purposeful collection and analysis of data that informs meaningful decisions
  4. Regular reflection on how technological tools align with core values and purpose
  5. Consistent connection of personal and professional growth to community benefit

No single formula applies to every situation, but these principles provide a framework for navigating the complex relationship between technological advancement and human flourishing.

The organizations and individuals who thrive in coming years will likely be those who master this integration—leveraging AI and other advanced technologies not as replacements for human judgment and creativity, but as amplifiers of human capacity to solve problems, create value, and build meaningful connections.

In a world increasingly shaped by algorithms and automation, the distinctive value of human judgment, creativity, and purpose only grows more essential. By approaching technological change with this balanced perspective, we can build futures that harness innovation’s power while remaining true to the values and connections that give our work—and our lives—meaning.

What steps will you take to implement this balanced approach in your own work and life?