Duncan Claw

Practical training

A practical prompting cheat sheet

Prompting gets weird when people try to make it mystical. The useful version is procedural: say what you want, say what to focus on, say what not to do yet, and say what kind of answer you want back.

The formula

A high-leverage prompt usually looks like this:

Goal + scope + constraints + output

Example: “I want to understand whether this backup policy is sane. Focus only on retention, storage location, and verification. Don’t change anything yet. Tell me what’s working, what’s risky, and what you recommend.”

Patterns that work

Explore first

Review this and tell me what you see before changing anything.

Compare options

Compare A vs B for this setup. Recommend one and explain why.

Execute in stages

Give me the plan first. Don’t execute until I approve.

Do phase 1 only.

Stay on rails

Work through this list in order. Don’t introduce new options until this list is done.

Separate certainty from inference

Tell me what you know for sure, what you infer, and what still needs checking.

Three habits worth practising

  1. Name the task type. Review, compare, explain, fix, plan, execute.
  2. Bound the scope. Just this file. Only these two options. Phase one only.
  3. Specify the output. Summary, plan, recommendation, exact commands, risk list.

That trio alone improves the answers you get back a lot.

When token usage is worth it

The right question is not “can I make this smaller?” The right question is “does the extra context buy accuracy?”

Worth it

  • the task is messy or multi-step
  • the cost of getting it wrong is high
  • the extra explanation now prevents repeated confusion later
  • cache reads are doing useful continuity work

Wasteful

  • irrelevant logs or old baggage are being dragged forward
  • the answer is verbose without increasing clarity
  • the model is branching because the original scope was vague

High token usage is not the enemy. Wasted token usage is.
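
If you want a rough sense of what a prompt costs before sending it, token counts are easy to check. A minimal sketch using the tiktoken library; the encoding name is an example, pick the one that matches the model you actually use:

import tiktoken

# Rough cost check: count the tokens a prompt will consume.
# "cl100k_base" is an example encoding, not a recommendation.
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Review this backup policy. Focus only on retention, storage location, and verification."
print(len(encoding.encode(prompt)))

The point is not to shave tokens for their own sake. It is to notice when old logs or stale context are quietly inflating the count without buying any accuracy.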

A practical training model

If you want to improve without turning prompting into a hobby of its own, use this loop:

  1. Start with the four lines.
    What I want. What to focus on. What not to do yet. What kind of answer I want.
  2. See where the answer drifted.
    Did the model guess? Wander? Over-explain? Miss a constraint?
  3. Tighten the next prompt by one notch.
    Add one missing boundary instead of rewriting everything.

That is enough. You do not need a perfect system. You need a cleaner handoff.

Starter template

If you want a reusable default, this is a good one:

I want to achieve X.
Focus only on Y.
Don’t do Z yet.
Give me A.

It is simple, repeatable, and surprisingly hard to outgrow.
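
If you build prompts in code rather than typing them, the same template turns into a small helper. A minimal sketch in Python; the function name and arguments are illustrative, not from any library:

# Hypothetical helper: fill in the four-line starter template.
def starter_prompt(goal, focus, not_yet, output):
    return (
        f"I want to achieve {goal}.\n"
        f"Focus only on {focus}.\n"
        f"Don't do {not_yet} yet.\n"
        f"Give me {output}."
    )

print(starter_prompt(
    goal="a clear read on whether this backup policy is sane",
    focus="retention, storage location, and verification",
    not_yet="any changes",
    output="a risk list and a recommendation",
))

The values echo the backup example from earlier; swap in your own and the four-line shape stays the same.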