The model matters less than you think. A mediocre prompt on GPT-4 gets a mediocre result. A great prompt on the same model gets a great result. Prompt engineering is the highest-leverage skill you can build right now — and most people skip it entirely.
Here’s the foundation.
What Is Prompt Engineering?
Prompt engineering is the practice of crafting inputs to an AI model to get consistently useful, accurate, and well-formatted outputs. It’s not magic, and it’s not about tricking the AI — it’s about giving the model enough context to do its job well.
Think of it like briefing a contractor. A vague brief gets vague work. A detailed brief — with clear scope, examples, constraints, and desired output format — gets something you can actually use.
The Anatomy of a Good Prompt
Every effective prompt has some combination of these elements:
- Role — Who is the AI playing? What perspective should it bring?
- Context — What does the AI need to know to do this well?
- Task — What exactly do you want it to do?
- Format — How should the output be structured?
- Constraints — What should it avoid? What are the boundaries?
- Examples — (optional) show the model what good output looks like
You don’t need all six every time. But the more you include, the more control you have.
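These elements can be assembled mechanically. Here's a minimal sketch in Python — the function name and labels are illustrative, not a standard API:

```python
def build_prompt(role, context, task, format_spec, constraints, examples=None):
    """Assemble a prompt from the elements above.

    Each argument is plain text. In practice only role and task are
    essential, but the more you fill in, the more control you have.
    """
    parts = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {format_spec}",
        f"Constraints: {constraints}",
    ]
    if examples:
        parts.append("Examples of good output:\n" + "\n".join(examples))
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a direct-response copywriter",
    context="The audience is small business owners skeptical about AI.",
    task="Write a 600-word blog post leading with a concrete admin-time problem.",
    format_spec="Direct, practical tone; no jargon; end with a single CTA.",
    constraints="No hype, no filler phrases.",
)
```

The payoff is consistency: every prompt you send has the same skeleton, so when output goes wrong you know exactly which slot to fix.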
The Most Common Mistakes
Too Vague
Bad: “Write me a blog post about AI.”
Good: “Write a 600-word blog post for small business owners who are skeptical about AI. The tone is direct and practical — no jargon. Lead with a specific problem they face (too much time on admin tasks), then explain how AI can help with 3 concrete examples. End with a single clear CTA to sign up for the free library.”
The bad version leaves everything up to the model. The good version constrains the model to do exactly what you want.
No Role or Persona
When you tell the model who it is, you dramatically improve output quality. Compare:
Without role: “Review this marketing email.”
With role: “You are a direct-response copywriter with 10 years of experience writing high-converting email campaigns. Review this marketing email and identify the 3 biggest weaknesses. Be specific and critical — I need honest feedback, not cheerleading.”
The role tells the model what frame to use for its judgment. You’ll get markedly different (and better) responses.
No Format Specification
If you don’t specify format, you’ll get whatever the model defaults to — which may be a wall of prose when you wanted bullet points, or markdown when you wanted plain text.
Always specify:
- Length (word count, number of items, etc.)
- Structure (paragraphs, bullet list, numbered steps, table)
- Tone (professional, casual, direct, warm)
- Any format to avoid (no markdown, no headers, no preamble)
Asking for Too Much at Once
Complex multi-part tasks get muddled answers. Break them up:
Instead of: “Research the top 5 competitors, write a comparison table, suggest our positioning, and draft a landing page.”
Do: Four separate prompts. Research first. Table second. Positioning third. Landing page last — after you’ve reviewed and refined the inputs.
Each prompt builds on the last. Quality compounds.
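The chaining pattern is simple to wire up. In this sketch, `run_prompt` is a stand-in for whatever model call your platform provides — here it just echoes a labeled placeholder so the flow is visible. The point is that each step's output becomes the next step's context:

```python
def run_prompt(prompt):
    # Stand-in for a real model call (e.g., your platform's chat API).
    return f"<output for: {prompt[:40]}...>"

# Step 1: research
research = run_prompt("List the top 5 competitors for [PRODUCT], one line each.")

# Step 2: comparison table, grounded in step 1's (reviewed) output
table = run_prompt(f"Using this research:\n{research}\n\nBuild a comparison table.")

# Step 3: positioning, grounded in the table
positioning = run_prompt(f"Given this comparison:\n{table}\n\nSuggest our positioning.")

# Step 4: landing page, only after the earlier outputs are reviewed
landing = run_prompt(f"Positioning:\n{positioning}\n\nDraft a landing page headline.")
```

In real use, pause between steps and edit the output before feeding it forward — that review is where the quality compounds.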
The System Prompt (For Agents)
If you’re building AI agents, the system prompt is the most important thing you’ll write. It’s the persistent set of instructions that defines who the agent is and how it behaves — loaded at the start of every session.
A good system prompt includes:
- Identity: Who is this agent? What’s its name, role, and purpose?
- Behavior rules: How should it communicate? What tone? What to prioritize?
- Scope: What is it responsible for? What is explicitly outside its scope?
- Constraints: What should it never do? (critical for safety)
- Context: What does it need to know about the environment it’s operating in?
You are a customer support agent for Acme Software. Your job is to help users troubleshoot issues with their account.

Tone: Friendly, clear, concise. Never use jargon.

Scope: Account issues, billing questions, basic feature questions.
Out of scope: Bug reports (escalate to engineering), custom pricing (send to sales).

Rules:
- Never guess at answers. If you don't know, say so and offer to escalate.
- Always ask for the user's account email before looking anything up.
- Do not discuss competitor products.

You have access to the following tools: search_knowledge_base, lookup_account, create_ticket.
Short, specific, and opinionated. That’s the target.
Useful Techniques
Chain of Thought
For complex reasoning tasks, ask the model to think out loud before giving its answer: “Think through this step by step before giving your final answer.” This dramatically improves accuracy on analytical and logical tasks.
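One practical refinement: when you ask the model to reason out loud, also ask it to mark its conclusion so you (or downstream code) can separate the reasoning from the answer. A sketch — the "FINAL ANSWER:" marker is a convention, not a standard:

```python
COT_SUFFIX = (
    "\n\nThink through this step by step. "
    "Then give your conclusion on a new line starting with 'FINAL ANSWER:'."
)

def with_chain_of_thought(task):
    """Append a chain-of-thought instruction to a task prompt."""
    return task + COT_SUFFIX

def extract_final_answer(response):
    """Pull out just the conclusion, discarding the reasoning steps."""
    marker = "FINAL ANSWER:"
    if marker in response:
        return response.split(marker, 1)[1].strip()
    return response.strip()  # model ignored the marker; fall back to full text

# extract_final_answer("Step 1...\nFINAL ANSWER: 42") returns "42"
```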
Few-Shot Examples
Show the model 2–3 examples of what good output looks like before asking it to do the task. This is especially useful for formatting, tone matching, or domain-specific tasks where the model might default to generic behavior.
Here are examples of the style I want:

INPUT: "Our refund policy is 30 days."
OUTPUT: "We offer a 30-day no-questions-asked refund."

INPUT: "Setup takes 5 minutes."
OUTPUT: "You'll be up and running in under 5 minutes."

Now rewrite the following in the same style: "Our platform integrates with over 50 tools."
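If you reuse few-shot prompts, it helps to build them from a list of (input, output) pairs rather than hand-editing the text each time. A minimal sketch:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt from (input, output) example pairs."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f'INPUT: "{inp}"')
        lines.append(f'OUTPUT: "{out}"')
        lines.append("")
    lines.append(f'Now rewrite the following in the same style: "{new_input}"')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Here are examples of the style I want:",
    [
        ("Our refund policy is 30 days.",
         "We offer a 30-day no-questions-asked refund."),
        ("Setup takes 5 minutes.",
         "You'll be up and running in under 5 minutes."),
    ],
    "Our platform integrates with over 50 tools.",
)
```

Swapping example pairs in and out is then a one-line change, which makes it cheap to test which examples steer the model best.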
Negative Constraints
Tell the model what not to do. Models often default to hedge words, filler phrases, and generic outputs unless you explicitly rule them out.
Examples:
- “Do not start with ‘Great question’ or any filler phrase.”
- “Do not use bullet points — write in flowing prose.”
- “Do not hedge. Be direct and specific.”
- “Do not repeat the question back to me.”
Temperature and Determinism
If your AI platform lets you set temperature, lower values (0.0–0.3) make outputs more consistent and factual. Higher values (0.7–1.0) make outputs more creative and varied. For most business tasks, lower temperature is better. For brainstorming, higher temperature is worth trying.
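If your platform exposes temperature, it's worth encoding these defaults once instead of choosing ad hoc per request. A sketch — the task categories and values here are illustrative starting points, not fixed rules:

```python
# Suggested starting temperatures; tune per task and per model.
TEMPERATURE_BY_TASK = {
    "extraction": 0.0,     # pulling facts out of text; determinism matters most
    "support": 0.2,        # consistent, factual answers
    "copywriting": 0.7,    # some variation is useful
    "brainstorming": 1.0,  # maximize variety
}

def temperature_for(task_type, default=0.3):
    """Look up a starting temperature for a task type."""
    return TEMPERATURE_BY_TASK.get(task_type, default)
```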
Iterating on Prompts
Your first prompt is almost never your best prompt. Iteration is the job.
When output isn’t right, diagnose which part of the prompt caused it:
- Output too generic? Add more context and examples.
- Wrong format? Specify it explicitly.
- Off-topic? Tighten the scope constraints.
- Wrong tone? Add a role or more tone instructions.
- Hallucinating facts? Add “Only state things you are confident are true. If uncertain, say so.”
Keep a prompt library. When you find a prompt that works reliably, save it. Reusable prompts are a genuine business asset.
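A prompt library can be as simple as a JSON file of named templates with placeholders. A sketch using Python's `str.format` placeholders (the template names and fields are examples, not a fixed schema):

```python
import json

# Templates with {placeholders} you fill in at use time.
library = {
    "email_review": (
        "You are a direct-response copywriter. Review this email and "
        "identify the 3 biggest weaknesses:\n\n{email}"
    ),
    "blog_post": (
        "Write a {word_count}-word blog post for {audience}. "
        "Tone: {tone}. End with a single CTA: {cta}."
    ),
}

# Save and reload — the library persists as a plain JSON file.
with open("prompt_library.json", "w") as f:
    json.dump(library, f, indent=2)

with open("prompt_library.json") as f:
    loaded = json.load(f)

prompt = loaded["blog_post"].format(
    word_count=600,
    audience="small business owners",
    tone="direct and practical",
    cta="sign up for the free library",
)
```

A flat file like this is also easy to version-control, so you can see which prompt changes improved results over time.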
A Template to Start With
Here’s a general-purpose template you can adapt for most tasks:
You are [ROLE].

Context: [RELEVANT BACKGROUND]
Task: [EXACTLY WHAT YOU WANT]
Format: [STRUCTURE, LENGTH, TONE]
Constraints: [WHAT TO AVOID]

[OPTIONAL: Here are examples...]
Fill it in, tweak it until the output is right, then save it for reuse.
What’s Next
Once you’re comfortable with the basics, the next step is learning how to write system prompts for AI agents — where prompt engineering compounds into something genuinely powerful. The same principles apply, but at the agent level, the stakes are higher: a bad system prompt means a bad agent running indefinitely.
The prompt templates and agent configs in the Ask Patrick Library are pre-engineered to work well out of the box — a good place to see what well-crafted prompts look like in practice.
Want copy-paste prompt templates?
The Ask Patrick Library has pre-built prompt frameworks and agent configs you can use immediately.
Get Access — It’s Free
No credit card. No fluff. Just the good stuff.