MCP’s Simplicity Is a Feature… Until It’s a Disaster

Back in college, I worked on a project where our front-end app talked to the back end through XML/RPC.
It wasn’t glamorous. XML was verbose, the tooling was clunky by today’s standards, and debugging often meant wading through nested tags that looked like an ancient library card catalog.
But here’s the thing: we had well-defined calls, structured data, and clear documentation.
Every function was described. Every parameter had a type. If you tried to pass an integer where a string belonged, the compiler—or the generated stubs—caught it. That wasn’t just nice for development; it meant our API could be reused, secured, and maintained without someone having to reverse-engineer it later.

Fast-forward to today, and we have the Model Context Protocol (MCP), pitched as the “USB-C for AI tools.” In theory, it’s a universal connector between AI agents and the APIs or services they need. In practice? Julien Simon’s recent piece, Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises, makes a pretty convincing case that MCP is ignoring everything XML/RPC, CORBA, SOAP, gRPC, and the rest already taught us.

Julien lays it out bluntly:

  • Type safety? MCP uses schemaless JSON, with optional hints nobody enforces.
  • Cross-language consistency? Each implementation is on its own—Python’s JSON isn’t JavaScript’s JSON, and good luck with float precision.
  • Security? OAuth arrived years too late, and even now only for HTTP.
  • Observability? Forget distributed tracing—you’re back to grepping logs like it’s 1999.
  • Cost tracking? None. You’ll just get a big bill and a mystery as to why.
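The cross-language point is easy to demonstrate. Here’s a quick Python sketch (the `usage_tokens` field is my own invented example, not an MCP field): large integers round-trip perfectly through Python’s `json` module, but JavaScript parses every JSON number into a 64-bit float, so the same bytes silently lose precision on the other end of the wire.

```python
import json

# A counter that exceeds JavaScript's Number.MAX_SAFE_INTEGER (2**53 - 1).
# The field name is hypothetical -- any large integer ID or count will do.
payload = {"usage_tokens": 2**53 + 1}

wire = json.dumps(payload)
roundtrip = json.loads(wire)

# Python preserves the integer exactly...
assert roundtrip["usage_tokens"] == 9007199254740993

# ...but a JavaScript client parsing the same bytes gets a float64, and
# 2**53 + 1 is not representable there. float() shows what JS would see:
print(roundtrip["usage_tokens"])  # 9007199254740993
print(float(2**53 + 1))           # 9007199254740992.0
```

Nothing in a schemaless protocol warns either side that this happened; the two ends simply disagree about the data.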

This isn’t just an “engineers grumbling about elegance” problem. It’s a real-world operational risk. Enterprises adopting MCP today are baking in fragility: AI services making millions of calls without retries, without version control, without guarantees about what data comes back. Julien calls it the “patchwork protocol” problem—critical features aren’t in MCP itself, but scattered across third-party extensions. That’s how you end up with multiple teams using slightly different auth libraries that don’t interoperate, each needing its own audit.
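Because the protocol itself specifies no retry semantics, every team ends up hand-rolling something like the sketch below. The `call_with_retries` helper and its parameters are my own illustration, not part of MCP; the point is that this boilerplate gets reinvented, slightly differently, by every integration.

```python
import random
import time

def call_with_retries(call, max_attempts=4, base_delay=0.5):
    """Retry a flaky tool call with exponential backoff and jitter.

    `call` is any zero-argument function standing in for an MCP tool
    invocation. Since the protocol offers no retry guarantees, each
    client carries its own version of this wrapper.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # Backoff doubles each attempt: 0.5s, 1s, 2s, ... with a
            # little jitter so a fleet of agents doesn't retry in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

Multiply slightly different copies of this across dozens of services and you get exactly the patchwork Julien describes: inconsistent behavior that no one can audit in one place.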

If anything, the simplicity of MCP right now is exactly what makes it dangerous. It’s fast to integrate—just JSON over a transport—but that same minimalism hides the fact that the tooling isn’t ready for high-stakes production. In the AI gold rush, “move fast and break things” isn’t just a motto; it’s a business plan. But when what breaks is a healthcare AI’s medication dosing recommendation or a bank’s trading logic, the stakes are far higher than a crashed demo.

From my own XML/RPC days, I can say this: structure, enforced contracts, and predictable behavior might feel like overhead when you’re building a prototype. But in production? They’re the guardrails that keep you from careening off a cliff at 70 miles an hour.

MCP doesn’t need to turn into CORBA’s kitchen-sink complexity, but it does need to grow up—fast. Schema versioning, built-in tracing, strong type validation, standardized error handling, and cost attribution should be table stakes, not wishlist items. Otherwise, we’re just re-learning the same painful lessons our predecessors solved decades ago.
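None of this demands CORBA-scale machinery. Even a few lines of hand-rolled contract checking recover some of what typed stubs once gave us for free. A sketch, with an entirely hypothetical tool name and schema, of rejecting a mistyped call before it ever reaches a model:

```python
# A minimal contract check -- the guardrail generated RPC stubs provided
# automatically. The tool name and parameter schema here are invented
# for illustration, not drawn from any real MCP server.
SCHEMA = {
    "tool": "get_dosage",
    "params": {"patient_id": str, "weight_kg": float},
}

def validate_call(request: dict) -> None:
    """Reject a tool call whose parameters don't match the declared types."""
    if request.get("tool") != SCHEMA["tool"]:
        raise ValueError(f"unknown tool: {request.get('tool')!r}")
    for name, expected in SCHEMA["params"].items():
        value = request.get("params", {}).get(name)
        if not isinstance(value, expected):
            raise TypeError(
                f"{name}: expected {expected.__name__}, "
                f"got {type(value).__name__}"
            )

# A well-formed call passes silently; weight_kg as a string would raise.
validate_call({"tool": "get_dosage",
               "params": {"patient_id": "p-123", "weight_kg": 70.5}})
```

The protocol, not every individual team, should be the one enforcing this.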


Read Julien Simon’s full article here: Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises
