Seeing Is Believing: Visual-First Retrieval for Next-Gen RAG

I’ve been neck-deep in the world of Retrieval-Augmented Generation (RAG) lately, wrestling with brittle OCR chains and garbled tables, when along came Morphik’s “Stop Parsing Docs” post to slap me straight: what if we treated PDFs as images instead of mangling them to death?

Here’s the gist—no more seven-stage pipelines that bleed errors at every handoff. Instead, Morphik leans on the ColPali Vision-LLM approach:

  1. Snap a high-res screenshot of each page
  2. Slice it into patches and feed them through a Vision Transformer plus the PaliGemma language model, which “sees” charts, tables, and text in one go
  3. Late-interaction search across those patch embeddings to find exactly which cells, legend entries, or color bars answer your query (a rough sketch of this scoring step follows below)
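
To make that last step concrete, here is a minimal sketch of late-interaction (MaxSim) scoring over patch embeddings. It assumes you already have a ColPali-style encoder producing one vector per query token and one per image patch; the shapes and helper names are mine, not Morphik’s API:

```python
import numpy as np

def maxsim_score(query_embs: np.ndarray, patch_embs: np.ndarray) -> float:
    """Late-interaction (ColBERT/ColPali-style) relevance score.

    query_embs: (num_query_tokens, dim) -- one embedding per query token
    patch_embs: (num_patches, dim)      -- one embedding per image patch
    For each query token, take its best-matching patch, then sum.
    """
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = patch_embs / np.linalg.norm(patch_embs, axis=1, keepdims=True)
    sims = q @ p.T                         # (num_query_tokens, num_patches)
    return float(sims.max(axis=1).sum())   # MaxSim: best patch per token, summed

def rank_pages(query_embs, pages):
    """pages: iterable of (page_id, patch_embs). Returns pages sorted by score."""
    scored = [(maxsim_score(query_embs, page_embs), page_id)
              for page_id, page_embs in pages]
    return sorted(scored, reverse=True)
```

The point is that no pooling happens before matching: every query token gets to pick its own best patch, which is what lets a single query latch onto both a table cell and a bar in the adjacent chart.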

The magic shows up in the benchmarks: traditional OCR-first systems plateau around 67 nDCG@5, but ColPali rockets to 81—and Morphik’s end-to-end integration even nails 95.6% accuracy on tough financial Q&As. That means instead of hunting through mangled JSON or worrying about chunk boundaries, your query “show me Q3 revenue trends” pinpoints both the table figures and the matching uptick in the adjacent bar chart—no parsing required.

Why It Matters (and How They Made It Fast)

You might be thinking, “Cool, but Vision models are slow, right?” Morphik thought so too—and fixed it. By layering in MUVERA’s single-vector fingerprinting and a custom vector database tuned for multi-vector similarity, they shrank query times from 3–4 seconds to a blistering ~30 ms. Now you get visual-first retrieval that’s both precise and production-ready.

A Techie Takeaway

  • Patch-level Embeddings: Preserve spatial relations by keeping each grid cell intact.
  • Late Interaction: Match query tokens against each patch embedding, then aggregate—no early pooling means no lost context.
  • Fingerprinting via MUVERA: Collapse each page’s multi-vector representation into a single vector so the first-pass lookup is blazing fast (a rough two-stage sketch follows this list).
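
The fingerprinting idea is easiest to see as a two-stage search. The sketch below is not MUVERA’s actual algorithm (the real thing builds fixed-dimensional encodings with approximation guarantees); it only illustrates the shape of the trick: collapse each page’s many vectors into one cheap vector for a first pass, then re-score a shortlist with exact MaxSim (reusing maxsim_score from the sketch above):

```python
import numpy as np

def fingerprint(embs: np.ndarray) -> np.ndarray:
    """Collapse a multi-vector representation into a single unit vector.
    Mean pooling is only a stand-in here; MUVERA uses a more careful encoding."""
    v = embs.mean(axis=0)
    return v / np.linalg.norm(v)

def two_stage_search(query_embs, pages, shortlist_size=100):
    """Stage 1: cheap single-vector scan. Stage 2: exact MaxSim on the shortlist."""
    q_fp = fingerprint(query_embs)
    coarse = sorted(pages, key=lambda pg: -float(q_fp @ fingerprint(pg[1])))
    shortlist = coarse[:shortlist_size]
    return sorted(((maxsim_score(query_embs, embs), pid) for pid, embs in shortlist),
                  reverse=True)
```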

Where You Could Start

  1. Prototype a visual RAG flow on your docs—grab a handful of invoices or spec sheets and spin up a ColPali demo.
  2. Run nDCG benchmarks against your current pipeline (a bare-bones nDCG@k helper is sketched after this list). Measure those gains, because numbers don’t lie.
  3. Triage edge cases—test handwriting, non-English text, or wildly different layouts to see where parsing still has a leg up.
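
For step 2, you don’t need a full evaluation framework to get started; a bare-bones nDCG@k looks like this (the graded relevance labels come from your own judgments, and this uses the simple linear-gain variant):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over a ranked list of graded relevance labels."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: relevance of the top 5 results your pipeline returned for one query
print(ndcg_at_k([3, 0, 2, 1, 0], k=5))
```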

This shift isn’t just a neat trick; it’s a philosophical turn. Documents are inherently visual artifacts—charts and diagrams aren’t decorations, they’re the data. By preserving every pixel, you sidestep the endless game of parsing whack-a-mole.

If you’ve ever lost hours debugging a missing cell or crushed a pie chart into random percentages, give “Stop Parsing Docs” a read and rethink your RAG strategy. Your sanity (and your users) will thank you.

When Bots Become Besties: Rewriting AI Narratives for a Collaborative Future

A Love Letter to Our AI Storytelling Future

When I first clicked through to “My Favorite Things: Stories in the Age of AI” by Tom Guarriello on Print, I wasn’t expecting a quiet revelation. But as I sipped my morning coffee, I found myself grinning at the idea of anthropomorphizing code—giving my digital companions names, personalities, even moods. It felt a bit like meeting new friends at a party…except these friends live in the cloud, never tire, and—if you believe the Big Five personality chart Tom shares—are as emotionally stable as monks.


Chatting with “Sam” (and Why It Feels So Human)

Let me confess: I’ve been naming my chatbots lately. There’s “Sam,” the ever-patient, endlessly curious assistant who greets my 7 a.m. ideation sessions with zero judgment. There’s “Echo,” who occasionally throws in a dash of sass when I try to oversimplify a problem. I’m not alone. Tom’s piece nails this impulse: once ChatGPT launched in November 2022, we collectively realized we weren’t just clicking “search”—we were conversing with a new kind of being.

Here’s the magic trick: by assigning a few human traits—openness, conscientiousness, extraversion, agreeableness, neuroticism—we slot AI models into a familiar framework. Suddenly, you can compare GPT-4’s “creative, diplomatic” bent to Grok’s “bold but brittle” vibe, or Claude’s “never flustered” cool. It’s like browsing personalities on a dating app for machines. And yes, it works. We engage more, trust more, and—let’s be honest—enjoy the heck out of it.


From Frankenstein to Friendly Bots

But Tom doesn’t let us float on fluffy clouds of goodwill. He roots us in the long, tangled history of cautionary AI tales—Mary Shelley’s tragic scientist, HAL’s icy rebellion in 2001, the Terminator’s firepower. These stories aren’t just entertainment; they shape our collective imagination. We slip into a doomsday mindset so easily that we might be primed to see every algorithm as a potential overlord.

Here’s what gives me pause: if we keep retelling the “machines will rise up” saga, we might miss out on co-creative possibilities. Ursula Le Guin’s alternative mythologies beckon—a vision of reciprocal, empathetic relationships rather than zero-sum showdowns. Tom teases that next time, we’ll dive into her frameworks. I, for one, can’t wait.


Why This Matters for You (and Me)

Whether you’re an AI designer tweaking personality prompts or a storyteller dreaming up your next sci-fi novella, this article is a spark. It reminds us that narratives aren’t innocent backgrounds—they’re architects of our future interactions. The next time you launch a chatbot, ask yourself:

  • Which story am I choosing? The dystopian one? Or something more collaborative?
  • What traits matter most? Do you need your AI to be laser-logical or heart-on-sleeve empathetic?
  • Who’s excluded from this tale? Maybe there’s a non-Western fable that offers a fresh lens.

Let’s Tell Better Stories

I’m bookmarking Tom’s essay as a springboard for my own creative experiments. Tomorrow, I might try a chatbot persona inspired by trickster deities rather than corporate mascots. Or maybe I’ll draft a short story where AI and human learn from each other, rather than fight it out in a crumbling cityscape.

Because at the end of the day, the stories we spin about intelligence—alien or otherwise—don’t just entertain us. They guide our hands as we build, code, and connect. And if we choose those stories mindfully, we might just script a future richer than any dystopian warning ever could.


Read the full piece and join me in imagining new myths for our machine friends: “My Favorite Things: Stories in the Age of AI.”

Autonomy vs. Reliability: Why AI Agents Still Need a Human Touch

A lot of folks are betting big on AI agents transforming the way we work in 2025. I get the excitement—I’ve spent the last year elbow-deep in building these things myself. But if you’ve ever tried to get an agent past the demo stage and into real production, you know the story is a lot more complicated. My friend Utkarsh Kanwat recently shared his perspective in Why I’m Betting Against AI Agents in 2025 (Despite Building Them), and honestly, it feels like he’s writing from inside my own Slack DMs.

The first thing nobody warns you about? The reliability wall. It’s brutal. I can’t tell you how many times I’ve watched a promising multi-step agent fall apart simply because little errors stack up. Even if your system nails 95% reliability per step—a tall order!—your 20-step workflow is only going to succeed about a third of the time. That’s not a bug in your code, or a limitation of your LLM. That’s just how probability works. The systems that actually make it to production? They keep things short, simple, and put a human in the loop for anything critical.
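
The arithmetic is worth seeing once. Assuming steps fail independently, per-step reliability compounds multiplicatively, which is exactly why long chains fall off a cliff:

```python
def workflow_success_rate(per_step: float, steps: int) -> float:
    """Probability an n-step agent workflow completes with no step failing,
    assuming each step fails independently."""
    return per_step ** steps

for steps in (5, 10, 20, 50):
    print(f"{steps:>2} steps at 95% each -> {workflow_success_rate(0.95, steps):.1%}")
# 20 steps at 95% each -> roughly 36%, i.e. "about a third of the time"
```

Which is one more argument for keeping workflows short and putting a human checkpoint where it counts.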

And here’s another thing most people overlook: the economics of context. People love the idea of a super-smart, chatty agent that remembers everything. In practice, that kind of long, back-and-forth conversation chews through tokens—and your budget. Utkarsh breaks down the math: get to 100 conversational turns, and you’re suddenly spending $50–$100 per session. Nobody’s business model survives that kind of burn at scale. The tools that actually last are the ones that do a focused job, stay stateless, and move on.
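
The context math compounds in a similar way, because each turn typically re-sends the growing history. The per-turn token count and price below are invented placeholders, not Utkarsh’s figures; they’re only there to show that cost grows roughly with the square of the number of turns:

```python
def conversation_cost(turns: int, tokens_per_turn: int = 2_000,
                      price_per_million: float = 5.0) -> float:
    """Rough cost of a chat where every turn re-processes the full history.

    Turn t re-reads roughly t * tokens_per_turn tokens, so total tokens grow
    ~quadratically with the number of turns. All numbers are placeholders.
    """
    total_tokens = sum(t * tokens_per_turn for t in range(1, turns + 1))
    return total_tokens / 1_000_000 * price_per_million

for turns in (10, 50, 100):
    print(f"{turns:>3} turns -> ~${conversation_cost(turns):.2f}")
```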

But the biggest gap between the hype and reality is what goes into actually shipping these systems. Here’s the truth: the AI does maybe 30% of the work. The rest is classic engineering—designing error handling, building feedback loops an AI can actually act on, integrating with a mess of legacy systems and APIs that never behave quite like the docs say they should. Most of my effort isn’t even “AI work”—it’s just what it takes to make any production system robust.

So if you’re wondering where AI agents really fit in right now, here’s my take: The best ones are like hyper-competent assistants. They handle the heavy lifting on the complicated stuff, but leave final calls and messy decisions to humans or to really solid, deterministic code. The folks chasing end-to-end autonomy are, in my experience, setting themselves up for a lot of headaches—mostly because reality refuses to be as neat as the demo.

If you’re thinking about building or adopting AI agents, seriously, check out Utkarsh’s article. It’s a straight-shooting look at what actually works (and what just looks shiny on stage). There’s a lot of potential here, but it only pays off when we design for the world as it is—not the world we wish we had.

Amplified, Not Replaced: A Veteran Engineer’s Take on Coding’s Uncertain Future

As someone who’s weathered tech cycles, scaled legacy systems, and mentored more than a few generations of engineers, I find myself returning to a recent essay by Jonathan Hoyt: “The Uncertain Future of Coding Careers and Why I’m Still Hopeful”. Hoyt’s piece feels timely—addressing, with candor and humility, the growing sense of anxiety many in our profession feel as AI rapidly transforms the software landscape.

Hoyt’s narrative opens with a conversation familiar to any experienced lead or architect: a junior developer questioning whether they’ve chosen a doomed career. It’s a concern that echoes through countless engineering Slack channels in the wake of high-profile tech layoffs and the visible rise of AI tools like GitHub Copilot. Even for those of us long in the tooth, Hoyt admits, it’s tempting to wonder if we’re on the verge of obsolescence.

But what makes Hoyt’s perspective refreshing—especially for those further along in their careers—is the pivot from fear to agency. He reframes AI, not as an existential threat, but as an amplifier of human ingenuity. For senior engineers and system architects, this means our most valuable skills are not rote implementation or brute-force debugging, but context-building, system design, and the ability to ask the right questions. As Hoyt puts it, the real work becomes guiding the machines, curating and contextualizing knowledge, and ultimately shepherding both code and colleagues into new creative territory.

The essay’s most resonant point for experienced professionals is the call to continuous reinvention. Hoyt writes about treating obsolescence as a kind of internal challenge—constantly working to automate yourself out of your current role, so you’re always prepared to step into the next. For architects, this means doubling down on mentorship, sharing knowledge freely, and contributing to the collective “shared brain” of the industry—be it through open source, internal documentation, or just helping the next engineer up the ladder.

Hoyt’s post doesn’t sugarcoat the uncertainty ahead. The routine entry points into the field are shifting, and not everyone will find the transition easy. Yet, he argues, the need for creative, context-aware technologists will only grow. If AI takes on the repetitive work, our opportunity is to spend more time on invention, strategy, and the high-leverage decisions that shape not just projects, but organizations.

If you’ve spent your career worrying that you might be automated out of relevance, Hoyt’s essay offers both a challenge and a comfort. It’s a reminder that the future of programming isn’t about competing with machines, but learning to be amplified by them—and ensuring we’re always building, learning, and sharing in ways that move the whole field forward.

For anyone in a senior engineering or system architecture role, Jonathan Hoyt’s original piece is essential reading. It doesn’t just address the fears of those just starting out; it offers a vision of hope and practical action for those of us guiding teams—and the next generation—through the shifting sands of technology.

LLMs Today: What’s Really New, and What’s Just Polished?

If you follow AI, you know the story: every few months, a new language model drops with more parameters and splashier headlines. But as Sebastian Raschka highlights in “The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design,” the biggest lesson from this new wave of open-source LLMs is how much has not fundamentally changed. Underneath it all, the progress is less about radical reinvention and more about clever architectural tweaks—optimizing memory, attention, and stability to make bigger, faster, and more efficient models.

At the core, the 2017 transformer blueprint is still powering everything. What’s new? A handful of impactful upgrades:

  • Smarter attention (like Multi-Head Latent Attention and Grouped-Query Attention) slashes memory requirements.
  • Mixture-of-Experts (MoE) lets trillion-parameter giants run without melting your GPU by activating only a fraction of the network at a time (a toy routing sketch follows this list).
  • Sliding window attention makes long contexts feasible without hogging resources.
  • Normalization tricks (RMSNorm, Post-Norm, etc.) are now essential for training stability at scale.
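
To give a feel for the MoE piece in particular, here is a toy top-k router in plain NumPy. Real systems (DeepSeek-V3, Kimi K2, and friends) add shared experts, load-balancing losses, and fused kernels, so treat this as a cartoon of the mechanism rather than anything production-shaped:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, num_experts, top_k = 64, 8, 2

# Each "expert" here is just a tiny two-layer feed-forward network
experts = [(rng.normal(size=(d_model, 4 * d_model)) * 0.02,
            rng.normal(size=(4 * d_model, d_model)) * 0.02)
           for _ in range(num_experts)]
router_w = rng.normal(size=(d_model, num_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model). Each token is routed to its top-k experts only."""
    logits = x @ router_w                         # (tokens, num_experts)
    top = np.argsort(-logits, axis=1)[:, :top_k]  # chosen expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        gate = np.exp(logits[t, chosen])
        gate /= gate.sum()                        # softmax over the chosen experts only
        for g, e in zip(gate, chosen):
            w1, w2 = experts[e]
            h = np.maximum(x[t] @ w1, 0.0)        # ReLU FFN stand-in
            out[t] += g * (h @ w2)
    return out

print(moe_layer(rng.normal(size=(3, d_model))).shape)  # -> (3, 64)
```

The trick is visible in the inner loop: every token touches only two of the eight expert networks, so parameter count and per-token compute are decoupled.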

Today’s best open models—DeepSeek, Kimi, Llama 4, Gemma, OLMo 2, Qwen3—are all remixing these tools. The differences are in the fine print, not the fundamentals.

But what about OpenAI’s GPT-4/4o or Anthropic’s Claude 3.5? While the specifics are secret, it’s a safe bet their architectures look similar: transformer backbone, MoE scaling, memory-efficient attention, plus their own proprietary speed and safety hacks. Their big edge is polish, robust APIs, multimodal support, and extra safety layers—perfect if you need instant results and strong guardrails.

So, which should you pick?

  • Want transparency, customization, or on-prem deployment? Open models like OLMo 2, Qwen3, or Gemma 3 have you covered.
  • Building for research or scale (and have massive compute)? Try DeepSeek or Kimi K2.
  • Need to serve millions, fast? Lighter models like Mistral Small or Gemma 3n are your friend.
  • If you want the “it just works” experience with best-in-class safety and features, OpenAI and Anthropic are still top choices—just expect less control and no deep customization.

In the end, all the excitement is really about optimization, not paradigm shifts. Progress now means making LLMs faster, stabler, and easier to use. Or as Raschka puts it: “Despite all the tweaks and headline-grabbing parameters, we’re still standing on the same transformer foundation—progress comes from tuning the architecture, not tearing it down.”

If you want the deep technical dive, read Raschka’s full “The Big LLM Architecture Comparison.” Otherwise, just remember: the transformer era isn’t over—it’s just getting a whole lot more interesting.

Sandcastles, Spaghetti, and Software: Surviving the Chaos of AI-First Coding

What happens when software development gets upended by AI? According to Scott Werner, we’re all improvising—and that might be the only honest answer.


The Wild West of AI-Driven Development

Scott Werner’s essay, Nobody Knows How To Build With AI Yet, is a must-read snapshot of the modern AI development experience: exhilarating, uncertain, and utterly unpredictable. Werner invites us behind the scenes as he builds “Protocollie” in four days—not by mastering every detail, but by collaborating with an AI and improvising each step.

The most striking realization? There is no longer a set path or best practice for building with AI. What worked yesterday may already be obsolete. Documentation, if it exists, is more like a time capsule of one developer’s process than a blueprint for others.


Expertise: Out the Window

Werner challenges the myth of expertise in this era. In a world where AI tools, workflows, and even “the rules” change every few weeks, everyone—veterans and newcomers alike—is a beginner. “We’re all making this up as we go,” he writes, with a mix of humility and thrill.

His “four-document system,” cobbled together out of necessity rather than design, illustrates this point. The documents aren’t definitive; they’re just artifacts of one experiment, already nostalgic by the time they’re published. What matters isn’t following a set of instructions, but being willing to experiment, iterate, and leave markers for yourself (and maybe others) along the way.


Sandcastles at Low Tide

The essay’s strength lies in its metaphors: AI development is like jazz improvisation, sandcastle building, or throwing spaghetti at the wall. The sticking isn’t what matters—the act of throwing is. In this landscape, the most valuable skill isn’t syntax or architecture, but the ability to express clear intent, communicate with your AI “pair programmer,” and let go of any illusion of permanence.

Documentation? Less a how-to manual, more a set of “messages to future confused versions of ourselves.”


Why You Should Read This

If you’re a developer, technical lead, or just someone curious about how AI is changing work, Werner’s reflections will resonate. They offer a mix of hard-won wisdom, permission to experiment, and an honest look at the anxiety and exhilaration of the new normal.

Memorable Line:
“Maybe all methodology is just mutually agreed-upon fiction that happens to produce results.”


Want to Go Deeper?

  • What does it mean to be an “expert” when tools and workflows change weekly?
  • How can we create documentation that helps others without pretending to have all the answers?
  • What risks and rewards come with building “sandcastles at low tide”?

Read the original essay if you want to see what it really feels like on the front lines of AI-powered creation. And then? Make your own trail markers—someone else might find them, or you might just need them yourself next week.


Who should read this?
Developers, tech leaders, innovation managers, and anyone excited (or a bit terrified) by the rapid evolution of AI in software.

In Defense of Sharing AI Output: Why “AI Slop” Isn’t the End of Meaningful Communication

Rethinking proof-of-thought, noise, and the upside of a more open AI culture.


Is sharing ChatGPT output really so rude?
A recent essay compares AI-generated text to a kind of digital pollution—a “virus” that wastes human attention and diminishes the value of communication. The author proposes strict AI etiquette: never share machine output unless you fully adopt it as your own or have explicit consent from the recipient.

It’s a provocative take, inspired by Peter Watts’ Blindsight, and it raises important questions about authenticity, value, and digital trust. But does it go too far? Is all AI-generated text “slop”? Is every forward or paste a violation of etiquette?

Let’s consider another perspective—one that recognizes the risks but also sees the immense value and potential of a world where AI-generated output is more freely shared.

“Proof-of-Thought” Was Always a Mirage

The essay’s nostalgia for a lost era of “proof-of-thought” is understandable. But let’s be honest: not every piece of human writing was ever insightful, intentional, or even useful. Spam, boilerplate, press releases, and perfunctory office emails have existed for decades—long before AI. Authenticity and attention have always required discernment, not just faith in the medium.

AI may have made text cheap, but it has also made ideas more accessible and the barriers to entry lower. That’s not a bug—it’s a feature.

Sharing AI Output: Consent, Context, and Creativity

Of course, etiquette matters. But to frame sharing AI text as inherently rude or even hostile misses some crucial points:

  • AI output can be informative, creative, and valuable in its raw form. Sometimes a bot’s phrasing or approach offers a new angle, and sharing that output can accelerate understanding, brainstorming, or problem-solving.
  • Explicit adoption isn’t always practical. If I ask ChatGPT to summarize a dense technical paper or translate a snippet of code, sometimes the fastest, most honest way to help a friend or colleague is to share that result directly—with attribution.
  • Consent can be implicit in many contexts. In tech, research, and online forums, sharing logs, code snippets, or even entire AI chats is often expected and welcomed—especially when transparency and reproducibility are important.

The Upside of “AI Slop”: Accessibility, Efficiency, and Learning

What the “anti-slop” argument underplays is just how much AI has democratized expertise and lowered the cost of curiosity:

  • Non-native speakers can get better drafts or translations instantly.
  • Students and self-learners can access tailored explanations without waiting for a human expert.
  • Developers and researchers can rapidly prototype, debug, and collaborate with a global community, often using AI-generated code or documentation as a starting point.

Yes, there’s more noise. But there’s also far more signal for many people who were previously shut out of certain conversations.

Trust and Transparency, Not Gatekeeping

Rather than discouraging the sharing of AI output, we should focus on transparency. Label AI-generated text clearly. Foster norms where context—why, how, and for whom AI was used—is always provided. Give people the choice and the tools to ignore or engage as they see fit.

Blanket prohibitions or shame about sharing AI content risk re-erecting barriers we’ve only just started to dismantle.

Questions for the Future

  • How do we build systems that help us filter valuable AI output from true “slop”?
  • What new forms of collaborative authorship—human + AI—will emerge, and how do we credit them?
  • How can we leverage AI to reduce noise, not just add to it?

A Call for a More Open, Nuanced AI Etiquette

AI is here to stay, and its output will only become more sophisticated and pervasive. The solution isn’t to retreat or treat all shared AI text as digital poison. It’s to develop a culture of honesty, clarity, and context—so that AI can amplify, rather than degrade, our collective intelligence.

So yes: share your ChatGPT output—just tell me where it came from. Let’s make etiquette about agency, not anxiety.

2025 Kentucky Derby Handicapping Analysis

(This guide was prepared with the assistance of AI. Please verify all information.)

Key Contender Analysis Based on Historical Patterns

After reviewing Derby results from 2014-2024 and analyzing our contenders in this historical context, several horses stand out as particularly well-positioned:

JOURNALISM (#8) perfectly mirrors the profile of 2023 winner Mage, who also broke from post 8. His stalking style fits the recent trend favoring mid-pack runners (2021-2024), and his consistent improvement pattern resembles many successful Derby winners. His tactical adaptability should allow him to secure ideal positioning regardless of how the pace unfolds. With the highest last-out Beyer figure (108) and consistent triple-digit speed ratings, he stands above the field from a performance standpoint.

RODRIGUEZ (#4) has the tactical versatility to employ a ground-saving trip similar to 2024 winner Mystik Dan, who won from a similar inside post. His Wood Memorial victory (earning a 101 Beyer) suggests he’s reaching peak form at the perfect time. Drawn just outside the rail, he has options to either press forward or tuck in behind the leaders – flexibility that’s proven valuable in recent Derbies.

BURNHAM SQUARE (#9) fits the profile of recent deep closers who have outperformed expectations in the Derby (Rich Strike in 2022, Mage in 2023). His Blue Grass Stakes victory demonstrated an impressive closing kick and his pattern of paired 96 Beyer figures with continued improvement makes him dangerous if the pace develops as projected. His running style positions him to capitalize on the recent trend favoring closers.

CITIZEN BULL (#1) faces the statistical challenge of the rail draw, but the historical analysis shows that inside posts can be advantageous if the horse establishes position early. His consistent Beyer figures in the mid-to-high 90s make him competitive from a speed standpoint. While conventional wisdom might dismiss the rail, the slightly moderated pace projection gives him a better chance to hang on for a piece than in a complete pace collapse scenario.

TIZTASTIC (#14) captured the Louisiana Derby with an impressive stretch run, fitting the closing profile that has succeeded in recent years. His wide post shouldn’t be overly problematic for his running style, as we’ve seen outside closers overcome similar draws (Country House from post 20 in 2019). His improving pattern suggests he could be peaking at the right time.

AMERICAN PROMISE (#5) has the advantage of a historically successful post position that produced 2017 winner Always Dreaming. His tactical speed from a favorable draw gives him options to secure good early position. D. Wayne Lukas’ experience in the Derby adds another positive factor to his chances.

The historical perspective reinforces our focus on JOURNALISM and RODRIGUEZ as the primary win contenders, with increased respect for closing specialists like BURNHAM SQUARE based on recent Derby trends.

Horse-by-Horse Analysis with Post Positions

CITIZEN BULL (Post 1) – Bob Baffert trainee has shown impressive early speed and class with victories in the American Pharoah and Breeders’ Cup Juvenile as a 2-year-old. The rail draw is problematic, as this position hasn’t produced a Derby winner since Ferdinand in 1986. With his early speed, he’ll need to break sharply to avoid being squeezed back or trapped. The son of Into Mischief appears to need the lead to be most effective, and this post position adds a significant challenge, potentially forcing him into a damaging speed duel.

NEOEQUOS (Post 2) – Lightly regarded Florida-based runner who has shown steady improvement. Ran a creditable third in the Florida Derby after setting the pace. Like Citizen Bull, he’s drawn a historically difficult post position that hasn’t produced a Derby winner since Affirmed in 1978. This modestly bred son of Neolithic remains a long shot but has shown tactical speed. The inside draw may force his hand early.

FINAL GAMBIT (Post 3) – A late-blooming Juddmonte homebred coming off a victory in the Jeff Ruby Steaks at Turfway Park despite a troubled start. Interestingly, Post 3 produced last year’s Derby winner Mystik Dan. The Brad Cox trainee does his best running from off the pace and will need a strong pace to set up his closing kick. His inside draw should allow him to save ground throughout.

RODRIGUEZ (Post 4) – The other Baffert trainee comes in off an impressive front-running victory in the Wood Memorial. Son of Authentic has shown steady improvement and tactical versatility. Has drawn a favorable post that allows options – he can either press forward early or tuck in behind the leaders. Post 4 produced 2010 winner Super Saver and puts him in position for a ground-saving trip similar to 2024 winner Mystik Dan. His tactical speed could prove crucial if the pace scenario develops as expected.

AMERICAN PROMISE (Post 5) – D. Wayne Lukas trainee upset the Virginia Derby last out after some inconsistent efforts earlier this year. Appears to be a pace factor with tactical speed. Has drawn a historically successful post position that produced 2017 winner Always Dreaming. This favorable draw could enhance his chances.

ADMIRE DAYTONA (JPN) (Post 6) – Japanese invader surprised in the UAE Derby with a convincing win. International shippers have generally struggled in the Derby, but Japanese horses have been increasingly competitive in top American races. Has drawn a middle post that should allow for a clean trip. His running style suggests he could contribute to the early pace pressure.

LUXOR CAFÉ (Post 7) – Another Japanese runner who appears to be a pace factor based on his running lines. Has shown steady improvement and has drawn a historically successful post that produced 2021 winner Mandaloun (via disqualification). This draw should allow for a clean trip and could add to the contested early fractions.

JOURNALISM (Post 8) – Improving Michael McCarthy trainee has won 4 of 5 starts including an authoritative victory in the Santa Anita Derby. Has drawn the same post position that produced 2023 winner Mage. This favorable middle draw should allow him to establish good position while avoiding traffic problems. His stalking style fits perfectly with this post and he should be ideally positioned to capitalize on the projected pace meltdown.

BURNHAM SQUARE (Post 9) – Ian Wilkes trainee upset the Blue Grass Stakes with a determined rally, showing strong closing kick. This improving son of Liam’s Map has drawn a historically challenging post that hasn’t produced a winner since Riva Ridge in 1972. However, his closing style may mitigate the post position concerns, and he could be positioned for a sweeping outside move similar to Mage’s winning run in 2023.

GRANDE (Post 10) – Todd Pletcher’s son of Curlin comes in off a solid second in the Wood Memorial. He’s lightly raced but appears to be improving with each start. Has drawn a historically successful post that produced 2005 longshot winner Giacomo. This outside but not extreme post should suit his style. Could be part of the early pace pressure on the inside speed horses.

FLYING MOHAWK (Post 11) – Turf/synthetic specialist who may be entered as a rabbit for another contender. Has drawn a historically challenging post that produced 1988 winner Winning Colors. Given his frontrunning style, this wider post may force him to use more energy early, potentially contributing to a fast early pace.

EAST AVENUE (Post 12) – Godolphin homebred nearly pulled off the Blue Grass. Has drawn a historically challenging post that hasn’t produced a winner since Canonero II in 1971. This outside post may force him to lose ground around turns or be caught wide, but his early speed could make him a significant pace factor applying pressure three-wide.

PUBLISHER (Post 13) – Asmussen trainee closed well for second in the Arkansas Derby behind Sandman. Has drawn a favorable post that produced 2016 winner Nyquist. This position should suit his closing style, allowing him to avoid early traffic while still saving some ground.

TIZTASTIC (Post 14) – Another Asmussen trainee who captured the Louisiana Derby in impressive fashion. Has drawn a historically challenging post that hasn’t produced a winner since Carry Back in 1961. For his closing style, however, the wide draw may be less problematic than for early speed horses. Could benefit greatly from the projected hot pace.

RENDER JUDGMENT (Post 15) – Listed as a Kenny McPeek trainee, this closer ran well for second in the Virginia Derby. Has drawn the post that produced 2020 pandemic-delayed winner Authentic. His deep closing style should work well from this position and could place him in the cavalry charge down the stretch.

COAL BATTLE (Post 16) – Improving Lonnie Briley trainee who captured the Rebel Stakes. Has drawn the post that produced 2011 winner Animal Kingdom. For his tactical style, this outside post may require skillful handling to avoid being caught wide.

SANDMAN (Post 17) – Mark Casse trainee captured the Arkansas Derby with an impressive late run. Has drawn a historically challenging post that has never produced a Derby winner. For his closing style, however, the wide draw allows him to gradually work his way in. If able to secure a clean trip into the far turn, his closing kick could be devastating in a race with projected fast early fractions.

SOVEREIGNTY (Post 18) – Bill Mott trainee captured the Fountain of Youth and ran well for second in the Florida Derby. Has drawn a challenging post that produced 2019 winner Country House (via disqualification). This extremely wide post will require him to be either hustled early or drop back and save ground. If he can establish position and save ground into the far turn, he could be positioned for a stretch rally.

CHUNK OF GOLD (Post 19) – Improving longshot who ran a strong second in the Louisiana Derby. Has drawn a historically successful post that produced 2012 winner I’ll Have Another. Despite the extreme outside position, his closing style may help mitigate some concerns.

OWEN ALMIGHTY (Post 20) – Speedy Brian Lynch trainee who captured the Tampa Bay Derby impressively. Has drawn the extreme outside post that produced 2022 longshot winner Rich Strike. For his speed-oriented style, this draw poses significant challenges as he’ll need to clear much of the field or be caught extremely wide. Could contribute to the early pace pressure if sent from the gate.

Race Scenario

The 151st Kentucky Derby shapes up as a strategically complex pace scenario, with multiple speed elements creating challenging dynamics for both frontrunners and closers. Examining the past two Kentucky Derbies provides valuable context:

  • 2023 Kentucky Derby (won by Mage):
    • Fractions: 22.35, 45.73, 1:10.11, 1:36.06 (Final time: 2:01.57)
    • Featured a blazing pace that led to a complete collapse, with Mage rallying from 15th
  • 2024 Kentucky Derby (won by Mystik Dan):
    • Fractions: 22.97, 46.63, 1:11.31, 1:37.46 (Final time: 2:03.34)
    • Featured a testing but more moderate pace, with Mystik Dan rallying from mid-pack (8th)
For the 2025 Derby, we project a half-mile in approximately 46.10 seconds – closer to the blazing pace of 2023 than the moderate tempo of 2024. This represents a genuinely fast pace that should test the stamina of frontrunners while creating opportunities for stalkers and closers.

Citizen Bull (#1) has little choice but to fire aggressively from the rail to secure position. He’ll likely find company quickly as Grande (#10) applies pressure from the outside, with East Avenue (#12) potentially pressing three-wide. The international runners Admire Daytona (#6) and Luxor Café (#7) could add to the pace equation, resulting in a first quarter around 22.90 seconds.

The tactical Rodriguez (#4) should benefit from his inside draw to secure a tracking position, perhaps 3-4 lengths behind the leaders. Journalism (#8) appears perfectly positioned from his middle post to settle in a golden stalking position about 4-6 lengths off the pace. Burnham Square (#9) has shown the closing ability to position himself for a stretch run from mid-pack.

As the field reaches the far turn, expect the pace to take its toll on the early leaders. Grande might attempt to shake loose, but with Citizen Bull clinging to the inside and East Avenue maintaining pressure, the leaders should be tiring significantly approaching the quarter-pole. With our projected 46.10 half, the race should set up well for stalkers and closers to make their moves as the frontrunners begin to fade.

Journalism should be able to angle out for clear running at the top of the stretch, Rodriguez could find room along the inside similar to Mystik Dan’s 2024 trip, while Burnham Square will likely launch a sustained mid-track rally. Given the faster pace projection (46.10 vs. the more moderate 46.20 we initially considered), we now expect a more pronounced advantage for mid-pack runners and closers.

The stretch run should feature Journalism striking first from his perfect stalking trip, with Rodriguez finding a seam to challenge along the inside, and Burnham Square mounting a strong run from mid-pack. The faster projected pace increases the likelihood of frontrunners fading significantly in the final furlong, potentially setting up a finish similar to Mage’s 2023 victory where stalkers and closers dominated the top positions.

Historical Derby Perspective

Examining Kentucky Derby results from 2014-2024 reveals several significant patterns that inform our handicapping approach:

Running Style Evolution:

  • 2014-2020: Tactical speed horses dominated (California Chrome, American Pharoah, Nyquist, Always Dreaming, Justify, Authentic)
  • 2021-2024: Shift toward mid-pack runners and closers (Rich Strike, Mage, Mystik Dan)

Pace Dynamics:

  • Recent Derby paces (last three years):
    • 2022: 21.78, 45.36 (blazing) – Won by deep closer Rich Strike (80-1)
    • 2023: 22.35, 45.73 (fast) – Won by mid-pack closer Mage (15-1) from post 8
    • 2024: 22.97, 46.63 (honest) – Won by mid-pack stalker Mystik Dan (18-1) from post 3

Post Position Versatility:

  • Winners have come from all post positions (inside, middle, outside)
  • Inside posts offer ground-saving opportunities (Mystik Dan from post 3 in 2024)
  • Middle posts provide tactical flexibility (Mage from post 8 in 2023)
  • Outside posts can help avoid traffic (American Pharoah from post 18, Authentic from post 18)

Trip Quality:

  • Clean trips are crucial – nearly all winners avoided significant traffic issues
  • Ground-saving journeys have recently proven valuable (Mystik Dan, Rich Strike)
  • Recent winners found specific paths (rail-skimming, inside rally, wide move)

Recent Longshot Success:

  • Since 2019, favorites have struggled while longshots have thrived
  • Recent winners: Country House (65-1), Rich Strike (80-1), Mage (15-1), Mystik Dan (18-1)

Looking at the 2023 and 2024 Kentucky Derbies provides immediate context:

  • Mage (2023) rallied from 15th early to win from post 8, making a sweeping move into the stretch
  • Mystik Dan (2024) used a mid-pack stalking style from post 3, finding a rail-skimming trip to victory

For the 2025 Derby, we initially projected a half-mile in approximately 46.20 seconds – between the blazing pace of 2023 and the honest tempo of 2024 – before sharpening that to the faster 46.10 used in the race scenario above. Either number points to a testing pace that should favor adaptable runners with tactical positioning.

Wagering Portfolio with Two Key Horses

Given the post position draw, analysis of each horse’s running style, and historical perspective from recent Derbies, I’ll focus on JOURNALISM (#8) and RODRIGUEZ (#4) as the two most likely winners, with both benefiting from favorable post positions and running styles that match recent successful Derby winners.

Win/Place Bets:
– $25 Win/Place on #8 JOURNALISM (most consistent, perfect post position matching 2023 winner Mage, ideal stalking style)
– $15 Win/Place on #4 RODRIGUEZ (improving, tactical speed, favorable post similar to 2024 winner Mystik Dan’s inside position)

Exacta:
– $5 Box: 8-4 (JOURNALISM, RODRIGUEZ)
– $3 Key: 8-4 with 5,10,14,19 (Key JOURNALISM and RODRIGUEZ over AMERICAN PROMISE, GRANDE, TIZTASTIC, CHUNK OF GOLD)

Trifecta:
– $1 Key: 8-4 with 5,10,14,19 with 3,5,10,13,14,16,19 (Key JOURNALISM and RODRIGUEZ in first and second, use contenders for third)
– $0.50 Key: 8-4 with 3,5,10,13,14,16,19 with 8-4 (Key JOURNALISM and RODRIGUEZ in first with contenders in second, bringing key horses back for third)

Superfecta:
– $0.50 Key: 8-4 with 5,10,14,19 with 3,5,10,13,14,16,19 with 3,5,9,10,13,14,16,19 (Key JOURNALISM and RODRIGUEZ in first, use key contenders for second, wider spread for third and fourth)

Derby Double:
– $5 JOURNALISM and RODRIGUEZ with the favorite in the following race

This strategy concentrates on Journalism and Rodriguez as the primary win contenders while still maintaining coverage with improving horses in the exotic wagers. Journalism’s ideal post position (matching 2023 winner Mage’s post 8) and consistent form make him a standout, while Rodriguez’s favorable post 4 draw (similar to Mystik Dan’s inside position in 2024) and improving pattern make him a solid second choice.

While maintaining these two as our key horses, the analysis of recent Derby results and the projected pace scenario suggests we should also respect Burnham Square’s closing potential from post 9. His sweeping rally to win the Blue Grass Stakes showed he’s peaking at the right time, and his running style could be perfectly suited to capitalize on the expected pace collapse, much like Mage did from a similar position in 2023.

The post positions appear to enhance the chances of Journalism, Rodriguez, and American Promise, while making the task more difficult for Citizen Bull (rail), Owen Almighty (extreme outside), and other wide-drawn speed horses. Look for Journalism to secure a perfect stalking trip from post 8, Rodriguez to navigate a ground-saving inside trip similar to Mystik Dan’s 2024 winning journey, and closers like Burnham Square and Sandman to unleash powerful stretch runs as the pace collapses.

Balancing Technology and Humanity: A Guide to Purposeful Growth in the AI Era

Some content on this blog is developed with AI assistance tools like Claude.

In an age where technological advancement accelerates by the day, many of us find ourselves at a crossroads: How do we embrace innovation while staying true to our core values? How can we leverage new tools without losing sight of what makes our work—and our lives—meaningful? This question has become particularly pressing as artificial intelligence transforms industries, reshapes professional landscapes, and challenges our understanding of creativity and productivity.

Drawing from insights across multiple disciplines, this guide offers a framework for navigating this complex terrain. Whether you’re a professional looking to stay relevant, a content creator seeking to stand out, or simply someone trying to make sense of rapid change, these principles can help you chart a course toward purposeful growth that balances technological adoption with human connection and impact.

Understanding Technology as an Enhancement Tool

The narrative around technology—particularly AI—often centers on replacement and obsolescence. Headlines warn of jobs being automated away, creative work being devalued, and human skills becoming redundant. But this perspective misses a crucial insight: technology’s greatest value comes not from replacing human effort but from enhancing it.

From Replacement to Augmentation

Consider the case of Rust Communications, a media company that found success not by replacing journalists with AI but by using AI to enhance their work. By training their systems on historical archives, they gave reporters access to deeper context and institutional knowledge, allowing them to focus on what humans do best: asking insightful questions, building relationships with sources, and applying critical judgment to complex situations.

This illustrates a fundamental shift in thinking: rather than asking, “What can technology do instead of me?” the more productive question becomes, “How can technology help me do what I do, better?”

Practical Implementation

This mindset shift opens up possibilities across virtually any field:

  • In creative work: AI tools can handle routine formatting tasks, suggest alternative approaches, or help identify blind spots in your thinking—freeing you to focus on the aspects that require human judgment and creative vision.
  • In knowledge work: Automated systems can gather and organize information, track patterns in data, or draft preliminary analyses—allowing you to devote more time to synthesis, strategy, and communication.
  • In service roles: Digital tools can manage scheduling, documentation, and routine follow-ups—creating more space for meaningful human interaction and personalized attention.

The key is to approach technology adoption strategically, identifying specific pain points or constraints in your work and targeting those areas for enhancement. This requires an honest assessment of where you currently spend time on low-value tasks that could be automated or augmented, and where your unique human capabilities add the most distinctive value.

Developing a Strategic Approach to Content and Information

As content proliferates and attention becomes increasingly scarce, thoughtful approaches to creating, organizing, and consuming information become critical professional and personal skills.

The Power of Diversification

Data from publishing industries shows a clear trend: organizations that diversify their content strategies across formats and channels consistently outperform those that remain narrowly focused. This doesn’t mean pursuing every platform or trend, but rather thoughtfully expanding your approach based on audience needs and behavior.

Consider developing a content ecosystem that might include:

  • Long-form written content for depth and authority
  • Audio formats like podcasts for accessibility and multitasking audiences
  • Visual elements that explain complex concepts efficiently
  • Interactive components that increase engagement and retention

The goal isn’t to create more content but to create more effective content by matching format to function and audience preference.

The Value of Structure and Categorization

As AI systems play an increasingly important role in content discovery and recommendation, structure becomes as important as substance. Content that is meticulously categorized, clearly structured, and designed with specific audience segments in mind will receive preferential treatment in algorithmic ecosystems.

Practical steps include:

  • Developing consistent taxonomies for your content or knowledge base
  • Creating clear information hierarchies that signal importance and relationships
  • Tagging content with relevant metadata that helps systems understand context and relevance
  • Structuring information to appeal to specific audience segments with definable characteristics

This approach benefits not only discovery but also your own content development process, as it forces clarity about purpose, audience, and context.
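
As a concrete (and entirely invented) illustration, a single piece of content might carry a small structured record like the one below; the field names are placeholders, and what matters is that the same fields are filled in consistently across your whole library:

```python
article_record = {
    "title": "Balancing Technology and Humanity",
    "format": "long-form article",            # matches a format in your taxonomy
    "topics": ["AI adoption", "purposeful growth"],
    "audience_segment": "mid-career professionals",
    "funnel_stage": "awareness",
    "related_assets": ["podcast-episode-12", "explainer-diagram-04"],
    "last_reviewed": "2025-01-15",
}
```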

Cultivating Media Literacy

The flip side of strategic content creation is purposeful consumption. As information sources multiply and traditional gatekeepers lose influence, the ability to evaluate sources critically becomes an essential skill.

Developing robust media literacy involves:

  • Diversifying news sources across political perspectives and business models
  • Understanding the business models that fund different information sources
  • Recognizing common patterns of misinformation and distortion
  • Supporting quality journalism through subscriptions and engagement

This isn’t merely about avoiding misinformation; it’s about developing a rich, nuanced understanding of complex issues by integrating multiple perspectives and evaluating claims against evidence.

Leveraging Data and Knowledge Assets

Every individual and organization possesses unique information assets—whether formal archives, institutional knowledge, or accumulated experience. In a knowledge economy, the ability to identify, organize, and leverage these assets becomes a significant competitive advantage.

Mining Archives for Value

Hidden within many organizations are treasure troves of historical data, past projects, customer interactions, and institutional knowledge. These archives often contain valuable insights that can inform current work, reveal patterns, and provide context that newer team members lack.

The process of activating these assets typically involves:

  1. Systematic assessment of what historical information exists
  2. Strategic digitization of the most valuable materials
  3. Thoughtful organization using consistent metadata and taxonomies
  4. Integration with current workflows through searchable databases or knowledge management systems

Individual professionals can apply similar principles to personal knowledge management, creating systems that help you retain and leverage your accumulated learning and experience.

Building First-Party Data Systems

Organizations that collect, analyze, and apply proprietary data consistently outperform those that rely primarily on third-party information. This “first-party data”—information gathered directly from your audience, customers, or operations—provides unique insights that can drive decisions, improve offerings, and create additional value through partnerships.

Effective first-party data strategies include:

  • Identifying the most valuable data points for your specific context
  • Creating ethical collection mechanisms that provide clear value exchanges
  • Developing analysis capabilities that transform raw data into actionable insights
  • Establishing governance frameworks that protect privacy while enabling innovation

Even individuals and small teams can benefit from systematic approaches to gathering feedback, tracking results, and analyzing patterns in their work and audience responses.

Applying Sophisticated Decision Models

As data accumulates and contexts become more complex, simple intuitive decision-making often proves inadequate. More sophisticated approaches—like stochastic models that account for uncertainty and variation—can significantly improve outcomes in situations ranging from financial planning to product development.

While the mathematical details of these models can be complex, the underlying principles are accessible:

  • Embrace probability rather than certainty in your planning
  • Consider multiple potential scenarios rather than single forecasts
  • Account for both expected outcomes and variations
  • Test decisions against potential extremes, not just average cases

These approaches help create more robust strategies that can weather unpredictable events and capture upside potential while mitigating downside risks.
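
If you want to see what “embrace probability” looks like in practice, here is a tiny Monte Carlo sketch of a revenue plan. Every number in it is an invented assumption; the point is the shape of the exercise (distributions in, a distribution out) rather than the particular model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 100_000

# Invented planning assumptions, each expressed as a distribution, not a point estimate
customers   = rng.normal(loc=1_000, scale=200, size=n_scenarios).clip(min=0)
conversion  = rng.beta(a=2, b=18, size=n_scenarios)            # mostly ~10%, with spread
revenue_per = rng.lognormal(mean=np.log(250), sigma=0.4, size=n_scenarios)
fixed_costs = 20_000

profit = customers * conversion * revenue_per - fixed_costs

print(f"median outcome:   {np.median(profit):>10,.0f}")
print(f"5th percentile:   {np.percentile(profit, 5):>10,.0f}   (downside to plan against)")
print(f"95th percentile:  {np.percentile(profit, 95):>10,.0f}   (upside to position for)")
print(f"chance of a loss: {(profit < 0).mean():.1%}")
```

Reading off the 5th and 95th percentiles instead of a single forecast is exactly the “test against extremes, not just averages” habit described above.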

Aligning Work with Purpose and Community

Perhaps the most important theme emerging from diverse sources is the centrality of purpose and community connection to sustainable success and satisfaction. Technical skills and technological adoption matter, but their ultimate value depends on how they connect to human needs and values.

Balancing Innovation and Values

Organizations and individuals that successfully navigate technological change typically maintain a clear sense of core values and purpose while evolving their methods and tools. This doesn’t mean resisting change, but rather ensuring that technological adoption serves rather than subverts fundamental principles.

The process involves regular reflection on questions like:

  • How does this new approach or tool align with our core values?
  • Does this innovation strengthen or weaken our connections to the communities we serve?
  • Are we adopting technology to enhance our core purpose or simply because it’s available?
  • What guardrails need to be in place to ensure technology serves our values?

This reflective process helps prevent the common pattern where means (technology, metrics, processes) gradually displace ends (purpose, impact, values) as the focus of attention and decision-making.

Assessing Growth for Impact

A similar rebalancing applies to personal and professional development. The self-improvement industry often promotes growth for its own sake—more skills, more productivity, more achievement. But research consistently shows that lasting satisfaction comes from growth that connects to something beyond self-interest.

Consider periodically auditing your development activities with questions like:

  • Is this growth path helping me contribute more effectively to others?
  • Am I developing capabilities that address meaningful needs in my community or field?
  • Does my definition of success include positive impact beyond personal achievement?
  • Are my learning priorities aligned with the problems that most need solving?

This doesn’t mean abandoning personal ambition or achievement, but rather connecting it to broader purpose and impact.

Starting Local

While global problems often command the most attention, the most sustainable impact typically starts close to home. The philosopher Mòzǐ captured this principle simply: “Does it benefit people? Then do it. Does it not benefit people? Then stop.”

This local focus might involve:

  • Supporting immediate family and community needs through childcare, elder support, or neighborhood organization
  • Applying professional skills to local challenges through pro bono work or community involvement
  • Building relationships that strengthen social fabric in your immediate environment
  • Creating local systems that reduce dependency on distant supply chains

These local connections not only create immediate benefit but also build the relationships and resilience that sustain longer-term efforts and larger-scale impact.

Integrating Technology and Humanity: A Balanced Path Forward

The themes explored above converge on a central insight: the most successful approaches to contemporary challenges integrate technological capability with human purpose. Neither luddite resistance nor uncritical techno-optimism serves us well; instead, we need thoughtful integration that leverages technology’s power while preserving human agency and values.

This balanced approach involves:

  1. Selective adoption of technologies that enhance your distinctive capabilities
  2. Strategic organization of content and knowledge to leverage both human and machine intelligence
  3. Purposeful collection and analysis of data that informs meaningful decisions
  4. Regular reflection on how technological tools align with core values and purpose
  5. Consistent connection of personal and professional growth to community benefit

No single formula applies to every situation, but these principles provide a framework for navigating the complex relationship between technological advancement and human flourishing.

The organizations and individuals who thrive in coming years will likely be those who master this integration—leveraging AI and other advanced technologies not as replacements for human judgment and creativity, but as amplifiers of human capacity to solve problems, create value, and build meaningful connections.

In a world increasingly shaped by algorithms and automation, the distinctive value of human judgment, creativity, and purpose only grows more essential. By approaching technological change with this balanced perspective, we can build futures that harness innovation’s power while remaining true to the values and connections that give our work—and our lives—meaning.

What steps will you take to implement this balanced approach in your own work and life?