Stop Arguing, Start Asking: Why Prompt Literacy is the Next Universal Skill

AI isn’t magic. It’s not malicious. It’s not even confused.
It’s a tool — and like any tool, what you get out of it depends on how you use it.

Two recent takes — Linda Ruth’s Stop Arguing with AI: Prompting for Power in the Publishing World and Kelvin Chan’s AP piece One Tech Tip: Get the most out of ChatGPT and other AI chatbots with better prompts — arrive at the same destination by different roads. One speaks to editors and publishers; the other to everyday AI users. But the core message is identical: the quality of your AI output starts and ends with the quality of your input. (Hat tip to the always-essential BoSacks newsletter where I first spotted both articles.)

From the Newsroom to Your Laptop: Same Rule, Different Context

Ruth frames AI as part of a publishing professional’s toolkit — right up there with headline writing and layout design. If you ask a model for feedback on a manuscript without providing the manuscript, expect nonsense in return. It’s like asking a book reviewer to critique a novel they haven’t read.

Chan’s advice mirrors this in broader strokes: skip vague prompts, give clear goals, and feed the model context and constraints. Add personas to shape tone, specify your audience, and don’t be afraid to iterate. The first prompt is rarely the last.

The Practitioner’s Mindset

Whether you’re an editor, marketer, small business owner, or teacher, three habits will instantly improve your AI game:

  1. Provide context — the more background you give, the better the results.
  2. Set constraints — word count, format, style — so you get something usable.
  3. Iterate — treat AI as a collaborator, not a vending machine.
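
Here’s what those three habits look like combined in a single prompt. The scenario is invented for illustration; swap in your own details:

```
Context: I run a ten-person landscaping company and I'm writing our
spring newsletter for existing customers.

Task: Draft the lead article announcing our new lawn-care subscription.

Constraints: Under 300 words, friendly but professional tone, no jargon,
end with a single call to action.

If anything is unclear, ask me questions before you write.
```

And if the first draft misses the mark, don’t start over; reply with what to change (“shorter, warmer, drop the pricing details”). That’s the third habit, iteration, in action.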

Think of AI as a “brilliant but distractible employee”: give it structure, keep it focused, and check its work.

The Bigger Picture

The skeptic will say this is common sense — ask better questions, get better answers — and they’re right. But prompt literacy is becoming a baseline skill, much like search literacy was twenty years ago. The contrarian might argue AI should adapt to us, not the other way around. The systems thinker sees a familiar pattern: early adopters learn the machine’s language, then the tools evolve until the complexity disappears behind the scenes.

Until that happens, prompt engineering is the bridge between what AI can do and what it will actually do for you.


Turn questions into results. Don’t just wonder what AI can do — start guiding it. Download my free, printable AI Prompt Quick Guide for proven prompt formulas you can use today.


Action Steps You Can Use Today

  • Create a personal or team prompt library for recurring tasks (a starter sketch follows this list).
  • Refine in conversation — don’t settle for the first draft.
  • Experiment with personas and audiences to see how the output shifts.
  • Always verify — a polished answer can still be wrong.
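
On the first of those steps: a prompt library needs no special tooling. Here’s a minimal sketch, assuming you keep it as a small Python file; the prompt names and placeholders are invented, and a shared doc or spreadsheet works just as well:

```python
# prompt_library.py - a minimal, hypothetical team prompt library.
# Placeholders in {braces} get filled in before the prompt is sent.

PROMPTS = {
    "press_release": (
        "You are a PR editor for {company}. Draft a press release about "
        "{announcement}. Audience: trade journalists. Max 400 words. "
        "Include a placeholder for a CEO quote."
    ),
    "meeting_summary": (
        "Summarize the meeting notes below for people who weren't there. "
        "Format: five bullet points plus action items with owners.\n\n{notes}"
    ),
}

def fill(name: str, **fields: str) -> str:
    """Look up a prompt by name and fill in its placeholders."""
    return PROMPTS[name].format(**fields)

if __name__ == "__main__":
    print(fill("press_release",
               company="Acme Publishing",
               announcement="our new weekly AI newsletter"))
```

The payoff is consistency: everyone starts from the same proven wording instead of improvising it fresh each time.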

In short: Master the prompt, master the tool — and in mastering the tool, you expand your reach.

Coding with AI on a Budget — My Structured, Multi-Model Workflow

I’m a heavy AI user, but I don’t just open ten tabs and throw the same question at every model I can find. My workflow is deliberate, and every model has a defined role in the process.

It’s not that the “AI buffet” style is wrong — in fact, this excellent guide is a perfect example of that approach. But my own style is more like a relay race: each model runs its leg, then hands off to the next, so no one’s doing a job they’re bad at.


Phase 1 — Discovery & Requirements (ChatGPT as the Architect)

When I’m starting something new, I begin with long back-and-forth Q&A sessions in ChatGPT.

  • The goal is to turn fuzzy ideas into clear, testable requirements.
  • I’ll ask “what if?” questions, explore trade-offs, and refine scope until I have a solid first draft of requirements and functional specs.

Why ChatGPT? Because it’s great at idea shaping and structured writing — and I can quickly iterate without burning expensive tokens.
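
A kickoff prompt for that Q&A loop might look something like this; the wording is my own sketch, not a fixed formula:

```
You are a software architect helping me scope a new project.
Here is my rough idea: [one paragraph].

Don't write any code yet. Interview me instead: ask one question at a
time about users, constraints, and edge cases. Once we've covered the
ground, produce a numbered list of testable requirements and a short
functional spec I can hand off to another model.
```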


Phase 2 — Critique & Refinement (Gemini as the Critic)

Once I have a draft, I hand it over to Gemini 2.5 Pro.

  • Gemini acts like a tough peer reviewer — it questions assumptions, spots gaps, and points out edge cases.
  • I take Gemini’s feedback back to ChatGPT for edits.
  • We repeat this loop until the document is solid enough to hand off to implementation.

This step makes the coding phase dramatically smoother — Claude Code gets a blueprint, not a napkin sketch.
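
The handoff prompt for the critic can be as simple as this sketch (illustrative wording; tune it to your document):

```
Act as a skeptical senior engineer reviewing the requirements document
below. Do not rewrite it. Instead, list: (1) assumptions that aren't
justified, (2) missing edge cases, (3) requirements that aren't testable
as written, (4) anything that contradicts something else in the document.
Rank the issues by how expensive they'd be to discover mid-build.

[paste the requirements document here]
```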


Phase 3 — Implementation (Claude Code as the Builder)

With specs locked in, I move to Claude Code for the actual build.

  • I prep the context using a tool like AI Code Prep GUI to include only the relevant files (a do-it-yourself sketch follows this list).
  • Claude Code follows instructions well when the brief is crisp and the noise is low.
  • This is where the investment in phases 1 and 2 pays off — Claude isn’t guessing, it’s executing.
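
If you don’t have a context-prep tool handy, the core move is simple enough to script yourself. Here’s a minimal Python sketch; the file list, header format, and character budget are my assumptions, not how AI Code Prep GUI works internally:

```python
# pack_context.py - a hypothetical stand-in for a context-prep tool:
# bundle only the files relevant to the task into one paste-able blob.

from pathlib import Path

# Assumption: you hand-pick the files that matter for this change.
RELEVANT = ["src/billing/invoice.py", "tests/test_invoice.py"]
MAX_CHARS = 60_000  # rough budget so the blob fits the model's context

def pack(paths: list[str], limit: int = MAX_CHARS) -> str:
    """Concatenate each file under a labeled header, stopping at the budget."""
    chunks, used = [], 0
    for p in paths:
        block = f"### FILE: {p}\n{Path(p).read_text(encoding='utf-8')}\n"
        if used + len(block) > limit:
            break  # better to drop a file than to truncate one mid-function
        chunks.append(block)
        used += len(block)
    return "\n".join(chunks)

if __name__ == "__main__":
    print(pack(RELEVANT))
```

The design point is the budget: a small, hand-picked context usually beats dumping the whole repository on the model.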

Phase 4 — Specialist Consultations (Free & Budget Models)

If something tricky comes up — a gnarly bug, architectural uncertainty — I call in a specialist.

  • For deep problem-solving: o3, GLM 4.5, Qwen3 Coder, Kimi K2, or DeepSeek R1.
  • For alternate perspectives: Poe.com’s Claude 4, OpenRouter’s o4-mini, or Perplexity for research.
  • The point is diagnosis, not implementation; the build work stays with Claude Code.

These models are often free (with daily credits or token grants) and help me avoid overusing paid API calls.
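
Many of these models sit behind OpenRouter’s OpenAI-compatible endpoint, so one small script can cover the whole roster. A sketch using the official openai client; the model slug is just an example, and free-tier terms change, so check the current catalog:

```python
# consult.py - ask a budget "specialist" model for a diagnosis, not a build.
# Talks to OpenRouter's OpenAI-compatible API via the official openai client.

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # set this in your environment
)

def consult(problem: str, model: str = "deepseek/deepseek-r1") -> str:
    """Send a diagnosis-only request to a specialist model."""
    response = client.chat.completions.create(
        model=model,  # swap in whatever free or cheap model is current
        messages=[
            {"role": "system",
             "content": "Diagnose the root cause. Do not write the fix; "
                        "explain what is wrong and why, most likely first."},
            {"role": "user", "content": problem},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(consult("Our async job queue deadlocks under load; trace: ..."))
```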


Why This Works for Me

  • Role clarity: Each model does the job it’s best at.
  • Lower costs: Expensive models are reserved for hard, high-value problems.
  • Better output: The spec→critique→build pipeline reduces rework.
  • Adaptability: I can swap out models as free-tier offers change.

Closing Thoughts

AI isn’t magic — you are.
The tools only work as well as the process you put them in. For me, that process is structured and deliberate. If you prefer a more exploratory, multi-tab style, check out this free AI coding guide for an alternate perspective. Both approaches can work — the important thing is to know why you’re using each model, and when to pass the baton.