Daily Links: Wednesday, Jul 30th, 2025

In my latest post, I dive into a range of fascinating topics, from creating my own ultra-fast game streaming video codec called PyroWave, to navigating the evolving tech landscape with AI tools! I also explore the changing role of design in the AI era, why some projects demand lots of energy, and principles for production AI agents. Join me on this tech-fueled journey!

A Personal Take on Driving AI Adoption (and Why Mindset Matters More Than Tech)

I recently discovered Yue Zhao’s insightful article, “What Most Leaders of AI Get Wrong About Driving Adoption” and was reminded how often the human side of change gets overlooked. As an AI advocate, I’ve seen even the most promising initiatives stall—not because the technology failed, but because people weren’t ready.

Why Technical Focus Alone Isn’t Enough
It’s tempting to believe that once teams learn the latest AI tools, adoption will naturally follow. Yet time and again, projects falter not for lack of skill but because fear and uncertainty go unaddressed. When people feel anxious about what AI means for their roles, they hesitate to experiment or speak up—even when the technology could help them thrive.

Three Simple Shifts with Big Impact
Yue outlines a change-management approach that puts people first. Here’s how I’m applying it:

  1. Acknowledge and Address Fear. Instead of glossing over concerns, create dedicated forums—like quick “AI myth-busting” discussions—where everyone can voice questions and get clear answers. It demystifies the technology and validates genuine worries.
  2. Share Your Thinking. Transparency builds trust. I maintain a lightweight “AI decision diary” that outlines which tools we’re evaluating, why, and what trade-offs matter. This openness invites feedback and keeps everyone aligned.
  3. Build Together. Co-creation beats top-down edicts every time. Host hands-on sprints with diverse team members to prototype AI-enabled workflows. Even a short, focused session can spark ideas that stick—and foster ownership.

Real-World Reflections
After running these inclusive sessions with various teams, I’ve seen a noticeable shift: participants move from skepticism to genuine curiosity. The simple act of co-designing experiences turns apprehension into enthusiasm.

Why This Matters for You
True AI adoption isn’t about deploying the flashiest model; it’s about empathy and collaboration. When you weave in conversations about fear, share your rationale openly, and invite people into the process early, you transform AI from a mandate into a shared opportunity.

Your Turn
What’s the biggest roadblock you’ve faced when introducing AI? Reply with your experiences, and let’s explore solutions together.

Partnering with AI: How I Learned to Let Claude Code Handle the Busywork

In Sajal Sharma’s insightful guide Working Effectively with AI Coding Tools like Claude Code, she distills how AI assistants shift our role from writing every line to orchestrating systems. Inspired by her lessons, I dove in by asking Claude Code to scaffold a simple customer CRUD API—endpoints, models, tests, even CI workflows. Seconds later, I had files everywhere. Excited? Absolutely. Prepared for quality checks? Not yet.

That moment mirrored one of Sajal’s key points: AI clears the runway, but humans still pilot the plane. It can generate code at lightning speed, but someone has to ensure alignment with security policies, performance budgets, and team conventions. In short, AI is the horsepower; we’re the pilots.

The Real Magic Trick: Specs Before Code

Jumping into a prompt like “Build customer API” is like ordering a pizza without toppings—you might get bread and sauce, but no pepperoni. Taking a cue from Sajal’s spec-first approach, I always start by drafting a clear spec in /docs/specs/.

Here’s a slice of my customer-api-spec.md:

# Customer API Spec

- CRUD operations for Customer entity
- Fields: id, name, email, createdAt, updatedAt
- Input validation: email regex, name length
- 200 vs 404 vs 400 status codes
- Logging: structured JSON with requestId
- Rate limiting: 100 reqs/min per IP

Then I prompt: “Claude, scaffold the customer API based on customer-api-spec.md.” The result closely matches my intentions—no unwanted extra toppings.
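For illustration, the validation lines in that spec translate almost mechanically into code. Here’s a minimal sketch (Python; the function name, the 100-character name bound, and the loose email pattern are my assumptions, not Claude’s actual output):

```python
import re
from dataclasses import dataclass

# Simple email shape check; the spec's "email regex" rule, kept deliberately loose.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

@dataclass
class ValidationError:
    field: str
    message: str

def validate_customer(payload: dict) -> list[ValidationError]:
    """Return validation errors for a Customer payload; an empty list means OK."""
    errors = []
    name = payload.get("name", "")
    if not (1 <= len(name) <= 100):  # "name length" rule; 100 is an assumed bound
        errors.append(ValidationError("name", "must be 1-100 characters"))
    email = payload.get("email", "")
    if not EMAIL_RE.match(email):  # "email regex" rule from the spec
        errors.append(ValidationError("email", "invalid format"))
    return errors
```

A valid payload returns an empty list; the HTTP layer would then map a non-empty list to 400, a missing record to 404, and success to 200, matching the spec’s status-code line.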

Why You Must Play Code Quality Cop

Sajal warns that vague prompts often lead to shortcuts: any types, skipped tests, or generic error responses. I block 30 minutes every Friday for my “AI Code Audit” sprint. I scan new files for weak typings, missing edge-case tests, and logging inconsistencies. Then I ask Claude Code: “Please refactor duplicate helpers into a shared module and enforce our error-handling middleware.” It’s like giving your codebase a weekly checkup.
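Much of that Friday audit is grep-level pattern matching, so it’s easy to script. A minimal sketch, assuming TypeScript sources and with the `audit` helper name invented for illustration:

```python
import re
from pathlib import Path

# TypeScript 'any' annotations: the loose typings the audit looks for.
ANY_RE = re.compile(r":\s*any\b")

def audit(root: str) -> dict[str, int]:
    """Count 'any' annotations per .ts file under root; a rough first-pass heuristic."""
    findings = {}
    for path in Path(root).rglob("*.ts"):
        hits = len(ANY_RE.findall(path.read_text()))
        if hits:
            findings[str(path)] = hits
    return findings
```

Run it over the week’s new files and you have a ready-made target list to hand back to Claude Code for refactoring.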

Double-Checking with a Second Brain

As Sajal recommends, no single LLM should have the final word. For thorny questions—say, whether to shard the Customer table—I generate a plan with Claude Code, then run it by GPT-4o. It’s like having two senior engineers politely debate over which approach scales best.

When both agree, I move forward. That extra validation step takes minutes, but Sajal shares how it’s saved her from invisible tech-debt traps more times than she can count.

From Boilerplate to Brainwork

With the busywork automated, I follow Sajal’s advice: I spend my time on strategy—mentoring teammates, aligning with product goals, and making key architectural decisions. For instance, when our data team needed a real-time import pipeline, Claude drafted ETL scripts in seconds, but only I knew our SLA: analytics data must surface within two minutes. So I guided the solution toward streaming events instead of batch jobs.

Your Turn: One Tiny Experiment

Inspired by Sajal’s guide, pick one experiment for your next sprint:

  • Draft a spec first. Create a one-page markdown with clear requirements.
  • Audit weekly. Reserve 30 minutes to review AI-generated code.
  • Seek a second opinion. Validate your plan with another LLM.

Share your spec or prompt in the comments—let’s build better workflows together!


AI coding tools aren’t a gimmick—they’re a paradigm shift. As Sajal concludes, our true value lies in asking the right questions, crafting clear specs, and safeguarding quality. Keep the horsepower running; stay firmly in the pilot’s seat.

What was your first “Whoa” moment with AI? Drop a comment—I’d love to hear!

Daily Links: Tuesday, Jul 29th, 2025

Hey there! In my latest post, I dive into some nifty resources. First, I share a useful link for comparing the pricing of various LLM APIs like OpenAI GPT-4 and more. Plus, you’ll get a kick out of my adventure hacking my washing machine—it’s where the big words find their home. Lastly, check out how to upscale images using generative adversarial neural networks, CSI-style!

AI Is Fueling a Fake Content Flood — Even People You Know Can Be Caught

In the past week, at least two people close to me unknowingly reshared fake content on Facebook. These aren’t people who fall for chain emails or post conspiracy theories—they’re thoughtful, curious, and fairly tech-savvy. But that’s the reality now: it’s getting harder to tell what’s real online, even for people who usually know better.

The reason? AI is making it fast, cheap, and easy to generate fake stories, headlines, graphics, and even entire videos. And bots are spreading it all before we even realize it.

Take a moment to watch this clip from Rachel Maddow on MSNBC:
https://www.msnbc.com/rachel-maddow/watch/maddow-debunks-weird-fake-news-a-i-slop-stories-about-her-and-msnbc-infect-social-media-243601477992

Whether or not you’re a Maddow fan is beside the point. This segment shows how AI-generated nonsense—fake news stories, bot-written posts, and junk links—is showing up in our feeds, using her name and likeness to push made-up narratives. These aren’t even deepfakes. They’re low-effort, high-impact content designed to manipulate, outrage, and spread like wildfire.

Why People Fall for It

Here’s the tricky part: fake content doesn’t look fake anymore. Logos are copied, images are AI-generated, and the writing sounds just believable enough. AI tools are trained to mimic real news formats, which means many of the visual cues we used to rely on—like headlines, layout, or even tone—can’t be trusted the same way.

Add to that how fast we all scroll, how emotionally charged most social feeds are, and how much trust we put in content shared by people we know… and you’ve got a recipe for misinformation.

What You Can Do

I’m still a believer in AI’s potential, but I’m also realistic about how it’s being used right now. If you’re on social media, you need to assume you’ll be exposed to fake content—because you already have been.

Here are a few habits that help:

  • Pause before you share. If something triggers a strong reaction, that’s a good time to stop and investigate.
  • Check the source. Is it a reputable outlet? Does the link go where it says it does?
  • Reverse image search. Tools like Google Lens can help identify whether a photo has been altered or recycled.
  • Cross-check. If no one else is reporting it, there’s probably a reason.

Fake content is cheap. Your attention—and trust—are not. Stay sharp out there.

If this post helps even one person slow down before clicking “share,” it was worth writing. Let’s keep each other honest.

Rewiring AI: Putting Humans Back in the Loop

I’ll admit it—I used to love the promise of “one-click magic” in my observability dashboard. Who doesn’t want the AI to just fix that pager alert for you at 2 AM? But after reading Stop Building AI Tools Backwards by Hazel Weakly, I’ve come around to a stark realization: those “auto” buttons are exactly what’s hollowing out our edge as practitioners.

Here’s the thing—I’m a firm believer that we learn by doing, not by watching. Cognitive science calls it retrieval practice: you solidify knowledge only when you actively pull it from your own brain. Yet most AI assistants swoop in, do the work, and leave you wondering what just happened. It’s like teaching someone to bake by baking the cake for them. Fun for a minute, but no one actually masters the recipe.

Instead, imagine an AI that behaves like an “absent-minded instructor”—one who nudges you through each step of your incident playbook without ever taking the wheel. Using the author’s EDGE framework, it would:

  1. Explain by surfacing missing steps (“Have you considered rolling back that deploy?”), not just offering “click to fix” tooltips.
  2. Demonstrate with a 15-second animation of how to compare time ranges in your monitoring UI—turning your rough query into the exact syntax you need.
  3. Guide by asking Socratic questions (“What trace IDs have you checked so far?”), ensuring you articulate your plan instead of mindlessly pressing “Continue.”
  4. Enhance by watching your actions and suggesting incremental shortcuts (“I noticed you always filter to the five minutes before an alert—shall I pin that view next time?”).

Every interaction becomes a micro-lesson, reinforcing your mental models rather than eroding them.

I’ve started riffing on this idea in my own workflow. When I review pull requests, I ask our AI bot not to rewrite the code for me, but to quiz me: “What edge cases might this new function miss?” If I can’t answer, it highlights relevant docs or tests. Suddenly, I’m more prepared for production bugs—and I actually remember my review process.

What really blew me away in Stop Building AI Tools Backwards was the emphasis on cumulative culture—the fact that real innovation happens when teams iterate together, standing on each other’s shoulders. By capturing each developer’s on-the-job recalls and refinements, AI tools can become living archives of tribal knowledge, not just glorified search bars.

Of course, building these “human-first” experiences takes more thought than slapping an “Auto Investigate” button on your UI. But the payoff is huge: your team retains critical reasoning skills, shares best practices organically, and feeds high-quality data back into the system for ever-smarter suggestions.

So next time you’re tempted to automate away a few clicks, ask yourself: am I strengthening my team’s muscle memory—or erasing it? If you want to see how to do AI tooling the right way, check out Stop Building AI Tools Backwards and let’s start rewiring our interfaces for collaboration and growth.

Read the full article here: Stop Building AI Tools Backwards.

Riding the AI Wave: Why Marketing Pros Must Pivot or Perish

I came across Maarten Albarda’s electrifying piece in the latest BoSacks newsletter, originally published on MediaPost: “AI Is Not The Future — It Is Here To Take Your Job” (https://www.mediapost.com/publications/article/407506/ai-is-not-the-future-it-is-here-to-take-your-jo.html?edition=139243). Eric Schmidt’s warning that AI could elbow aside programmers, mathematicians, and entire marketing teams in mere months isn’t sci-fi—it’s next quarter’s boardroom debate. Here’s why embracing AI now feels more like grabbing a lifeboat than steering into a storm.

From where I sit, the real magic (and madness) lies in AI’s leap from “helpful chatbot” to “autonomous strategist.” Imagine a system that doesn’t just draft your ad copy but plans the campaign, allocates budget, and optimizes in real time. That’s not some distant beta test—it’s happening. We’re talking productivity boosts economists haven’t even charted yet. And if you’re thinking, “Nah, that’s years away,” Schmidt’s blistering timeline—full automation of coding tasks within months, general intelligence in 3–5 years—is a gut-check you can’t ignore.

So, what do you do? First, audit your playbook. Map every repetitive task and ask: “Could an algorithm do this faster (and cheaper) than my intern?” Spoiler: the answer’s often “yes.” Next, retool your team for human-only superpowers—ethical oversight, pattern-breaking creativity, and relationship-building that no AI can fake. Finally, make AI fluency part of your culture. A five-minute daily demo, a lunchtime “what’s new” session, even AI peer groups—whatever it takes to demystify the tech and keep curiosity front and center.

Every revolution creates winners and losers. If you lean into AI as a teammate—albeit a supercharged one—you’ll surf this wave instead of wiping out. And trust me, that’s way more fun than reinventing the agency model on the fly while your competitors pull ahead.

Architecting Belief Change: 5 Structural Strategies to Influence Your Network

I recently read the article Why Facts Don’t Change Minds in the Culture Wars—Structure Does, and it reshaped how I think about the way we—and our organizations—can actually shift the perspectives of friends, followers, or customers. Here’s what I’m taking away, and how you can turn these insights into action:


1. Stop Tossing Facts Into the Wind

I used to think that piling up research studies and statistics on my blog would win people over. But truth is, facts are like bullets bouncing off a bunker if you haven’t mapped its blueprints. Instead, start by sketching your audience’s belief “cathedral.” What are their core assumptions—those big, load-bearing ideas they simply won’t question? What stories and symbols hold up those walls? Once you know the beams, you can reinforce or gently rewire them.

Practical step: Run a quick survey or talk directly with five key supporters. Ask: “What do you think is non-negotiable about X?” Their answers reveal your structural targets.


2. Reinforce Edges, Don’t Just Drill Nodes

Let’s say you want customers to embrace a more sustainable product line. Don’t just preach “environmental doom and gloom” (attacking a node) or even “buy this eco-friendly widget” (weak edges). Instead, weave your message into the narratives they already live by—maybe it’s “smart saving,” “community pride,” or “healthy family.” Show how your product sits at the intersection of these values, tying together multiple threads in their mental graph.

Practical step: Create a mini-campaign that combines user stories, local events, and social proof—each element reinforcing several values at once (cost-saving + community + health).


3. Use Storytelling as Structural Glue

Stories are the mortar between belief bricks. A single well-chosen anecdote can bind facts into an emotionally resonant whole. When a follower sees themselves in your story, their brain builds new connections that facts alone can’t. So craft narratives around real people: a customer who saved money and felt proud of helping the planet, or a community that rallied around a shared vision of a healthier tomorrow.

Practical step: Interview a satisfied customer on video. Don’t lead with features—lead with their challenge, the small doubts they had, and the moment everything clicked. Then share it everywhere.


4. Lean Into Micro-Moments & Rituals

Beliefs stick when they become part of daily habits. That’s why every cathedral had its morning prayers and rituals. For your brand or cause, design simple rituals—like a weekly “green tip” email, a monthly community cleanup, or a daily social-media prompt—that gently reinforce your core connections. Over time, these tiny bursts of engagement become internalized pathways in people’s minds.

Practical step: Launch a “Tip Tuesday” series: each week, share one easy eco-hack that ties back to your product. Encourage followers to reply with their results—social proof becomes peer reinforcement.


5. Watch for Structural Attacks—and Be Ready to Repair

Just as adversaries can sever edges (e.g., “This product is a scam”) or undermine nodes (e.g., “Sustainability is just a marketing gimmick”), you need a rapid-response toolkit. Monitor chatter, correct misinformation before it festers, and when you spot a gap, plug it with fresh stories or data that shore up the weakened link.

Practical step: Set up a simple alert (Google Alerts, social-listening tool) for your key themes. When negative chatter spikes, respond with a customer story, an expert quote, or a quick Q&A video.


Changing minds isn’t about volume—it’s about architecture. By mapping your audience’s mental blueprints, reinforcing multiple connections at once, and embedding your message in stories and rituals, you’ll build a belief structure your friends, followers, or customers can actually inhabit. Give it a try, and watch your ideas take root.

Seeing Is Believing: Visual-First Retrieval for Next-Gen RAG

I’ve been neck-deep in the world of Retrieval-Augmented Generation (RAG) lately, wrestling with brittle OCR chains and garbled tables, when along comes Morphik’s “Stop Parsing Docs” post to set me straight: what if we treated PDFs like images instead of mangling them to death?

Here’s the gist—no more seven-stage pipelines that bleed errors at every handoff. Instead, Morphik leans on the ColPali Vision-LLM approach:

  1. Snap a high-res screenshot of each page
  2. Slice it into patches and feed them through a Vision Transformer + PaliGemma LLM that “sees” charts, tables, and text in one go
  3. Run a late-interaction search across those patch embeddings to find exactly which cells, legend entries, or color bars answer your query

The magic shows up in the benchmarks: traditional OCR-first systems plateau around 67 nDCG@5, but ColPali rockets to 81—and Morphik’s end-to-end integration even nails 95.6% accuracy on tough financial Q&As. That means instead of hunting through mangled JSON or worrying about chunk boundaries, your query “show me Q3 revenue trends” pinpoints both the table figures and the matching uptick in the adjacent bar chart—no parsing required.

Why It Matters (and How They Made It Fast)

You might be thinking, “Cool, but Vision models are slow, right?” Morphik thought so too—and fixed it. By layering in MUVERA’s single-vector fingerprinting and a custom vector database tuned for multi-vector similarity, they shrank query times from 3–4 seconds to a blistering ~30 ms. Now you get visual-first retrieval that’s both precise and production-ready.

A Techie Takeaway

  • Patch-level Embeddings: Preserve spatial relations by keeping each grid cell intact.
  • Late Interaction: Match query tokens against each patch embedding, then aggregate—no early pooling means no lost context.
  • Fingerprinting via MUVERA: Collapse multi-vector scores into a single vector for blazing fast lookups.
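The late-interaction bullet above is the ColBERT/ColPali-style MaxSim scoring; a minimal numpy sketch, assuming embedding rows are L2-normalized so dot products act as cosine similarities:

```python
import numpy as np

def late_interaction_score(query_emb: np.ndarray, patch_emb: np.ndarray) -> float:
    """MaxSim: for each query token, keep only its best-matching patch,
    then sum those maxima across tokens.
    query_emb: (num_query_tokens, dim); patch_emb: (num_patches, dim)."""
    sims = query_emb @ patch_emb.T        # (tokens, patches) similarity matrix
    return float(sims.max(axis=1).sum())  # best patch per token, then aggregate
```

Because aggregation happens after the per-token max, no single token’s signal gets averaged away: that is precisely the “no early pooling means no lost context” point.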

Where You Could Start

  1. Prototype a visual RAG flow on your docs—grab a handful of invoices or spec sheets and spin up a ColPali demo.
  2. Run nDCG benchmarks against your current pipeline. Measure those gains, because numbers don’t lie.
  3. Triage edge cases—test handwriting, non-English text, or wildly different layouts to see where parsing still has a leg up.
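For step 2, nDCG@5 (the metric behind the 67-vs-81 numbers above) is easy to compute yourself from graded relevance judgments; a minimal sketch:

```python
import math

def dcg_at_k(relevances: list[float], k: int = 5) -> float:
    """Discounted cumulative gain over the top-k results (log2 position discount)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int = 5) -> float:
    """nDCG@k: DCG of your ranking divided by DCG of the ideal (sorted) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

Grade each of the top five retrieved pages (say 0–3), run the same queries through both pipelines, and compare the averaged scores.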

This shift isn’t just a neat trick; it’s a philosophical turn. Documents are inherently visual artifacts—charts and diagrams aren’t decorations, they’re the data. By preserving every pixel, you sidestep the endless game of parsing whack-a-mole.

If you’ve ever lost hours debugging a missing cell or crushed a pie chart into random percentages, give “Stop Parsing Docs” a read and rethink your RAG strategy. Your sanity (and your users) will thank you.