LLMs Today: What’s Really New, and What’s Just Polished?

Curious whether the latest breakthroughs in large language models are truly revolutionary or just clever refinements? In “The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design,” Sebastian Raschka unpacks how today’s most powerful open AI models—from DeepSeek and Kimi to Llama 4 and Gemma—are still built on the classic transformer architecture, with progress coming from ingenious optimizations like smarter attention, memory-saving tricks, and new ways to keep training stable. Whether you’re choosing between open models or debating whether to stick with OpenAI or Anthropic, the real story is one of evolution, not revolution. This post breaks down what’s really changed, what hasn’t, and which model might fit your needs best.


In Defense of Sharing AI Output: Why “AI Slop” Isn’t the End of Meaningful Communication

Rethinking proof-of-thought, noise, and the upside of a more open AI culture.

Is sharing ChatGPT output really so rude? A recent essay compares AI-generated text to a kind of digital pollution—a “virus” that wastes human attention and diminishes the value of communication. The author proposes strict AI etiquette: never share machine output unless you fully adopt …