Beginner · 101 · 8 min read

How to Write a Good Prompt: A Beginner's Framework

The 3-part framework that fixes most bad AI output: role, task, and context. Written for beginners: plain English, concrete examples, no magic.

The single biggest thing separating "I get mediocre AI output" from "I get consistently good AI output" isn't the model you use, or how much you pay, or some secret prompting hack. It's whether you bothered to structure the request instead of typing your thoughts directly.

Good news: the structure is simple and takes about 5 minutes to learn.


The 3-Part Framework

Write every prompt in three parts, in this order:

  1. Role: who the AI is
  2. Task: what you want done
  3. Context: what the AI needs to know to do it well

That's it. Everything else is a refinement on top.
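If you assemble prompts in code rather than by hand, the three parts map naturally onto a small helper. A minimal sketch; the function and field names here are illustrative, not a standard API:

```python
def build_prompt(role: str, task: str, context: list[str]) -> str:
    """Assemble a prompt from the three parts, in order: role, task, context."""
    context_block = "\n".join(f"- {item}" for item in context)
    return f"{role}\n\n{task}\n\nContext:\n{context_block}"

prompt = build_prompt(
    role="You are a B2B marketing strategist with a background in enterprise SaaS.",
    task="Write a 150-word LinkedIn post announcing our new workflow automation feature.",
    context=[
        "Audience: operations leaders evaluating AI tools",
        "Goal: drive click-throughs to the product page",
    ],
)
```

Keeping the three parts as separate arguments also makes them easy to reuse: the same role and context can back many different tasks.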


Part 1: Role

The role tells the AI which expertise to bring, which tone to use, and how to weigh trade-offs. Without a role, you get generic, wishy-washy output.

Weak: "Help me with my marketing."

Strong: "You are a B2B marketing strategist with a background in enterprise SaaS. You help product-led growth teams think through messaging, channel mix, and positioning."

The weak version produces generic marketing advice. The strong version produces advice that matches the tone and framing of B2B SaaS. Same AI, wildly different output.


Part 2: Task

Specific, actionable, scoped. Vague tasks produce vague output.

Weak: "Write something about our product."

Strong: "Write a 150-word LinkedIn post announcing our new workflow automation feature. Target audience: operations leaders evaluating AI tools. Goal: drive click-throughs to the product page."

Notice what the strong version adds:

  • Format (150 words, LinkedIn post)
  • Topic (new workflow automation feature)
  • Audience (ops leaders evaluating AI tools)
  • Goal (drive click-throughs)

All of this is task scope. Without it, the AI guesses. With it, you get output on target.


Part 3: Context

The single biggest quality lever. The AI doesn't know your company, your product, your audience, your history. If you want on-brand output, you have to tell it the brand.

Weak: "Write a product announcement."

Strong: "Write a product announcement for [PRODUCT NAME]. Our tone is direct, confident, and slightly technical; we don't use marketing-speak like 'unlock' or 'leverage.' Our audience is technical operations leaders who've evaluated and rejected other AI tools for being too generic. Our product's differentiator is that it handles complex, conditional workflows that other tools break on."

Now the AI has something real to work with. Context can be:

  • Background on your company, product, audience
  • Your voice and tone preferences
  • Past content that worked well (see "Examples" below)
  • Specific constraints: things to do or avoid
  • Recent relevant history

Rule of thumb: if a human expert would need to ask a clarifying question before drafting, the answer belongs in your prompt's context.


A Complete Example: Before and After

Let's fix a bad prompt using the framework.

The bad version:

Write a cold email to someone at a company I want to sell to.

What you'll get back: a generic cold email with "I hope this email finds you well," some vague value proposition, and no specific reason to care. Useless.

The same prompt, rebuilt:

You are a senior B2B sales writer at a workflow automation firm. (role)

Write a 120-word cold email to a VP of Operations at a mid-sized SaaS company. The goal is to book a 20-minute discovery call. (task)

Context:

  • Our product: custom AI automation for ops workflows, specifically scoped builds, not a SaaS tool
  • The prospect's recent LinkedIn post mentioned their team struggling with onboarding scalability; reference it specifically
  • Our closest case study: we cut onboarding time 60% for a similar-sized SaaS
  • Constraints: no "I hope this finds you well," no "leverage" or "unlock" as verbs, no mention of our product until sentence 3
  • Sign off as "Ryan" with no title or company line

That prompt produces output that could actually land a meeting. The difference isn't the AI, it's the structure.


Add This Next: Output Format

Once you've nailed the Role + Task + Context basics, add one more piece for nearly every prompt: format specification.

If you don't tell the AI how to structure the response, it picks, and it usually picks something verbose that you have to rewrite. Specify.

  • For emails: '2 paragraphs, specific CTA in the last sentence, sign off as [NAME]'
  • For analyses: 'Numbered list. Each item: claim (1 sentence), evidence (1 sentence), implication (1 sentence)'
  • For summaries: '3 sections: Decisions, Action Items, Open Questions. Bullets under each.'
  • For drafts: 'Markdown. Headings for each section. 2–3 paragraphs per section.'
  • For extractions: 'JSON with fields: name, date, priority, owner. Use null if the field isn't in the source.'

Four seconds of specifying format saves four minutes of reformatting the response.
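The extraction format pays off immediately if a program consumes the output: a response that follows the JSON spec can be parsed directly. A sketch with a hypothetical model response written to match the spec above:

```python
import json

# A hypothetical model response that follows the extraction spec:
# 'JSON with fields: name, date, priority, owner. Use null if the field isn't in the source.'
response = '{"name": "Q3 vendor review", "date": "2025-08-14", "priority": "high", "owner": null}'

# JSON null maps to Python None, so missing fields are explicit, not silently absent
record = json.loads(response)
```

Asking for `null` on missing fields (rather than letting the model omit keys) means downstream code can rely on every field being present.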


One Technique Worth Knowing: Few-Shot Examples

Once you're comfortable with the framework, the single best way to level up your prompts is adding examples: 2 to 5 demonstrations of the exact pattern you want.

  • Without examples: the AI guesses at your intended tone, format, or pattern, and the output is average but rarely exactly what you wanted.
  • With 2–5 examples: the AI matches the demonstrated pattern, and the output aligns with what you're looking for the first time.

Example: you want to generate LinkedIn posts in a specific voice. Instead of describing the voice in three paragraphs:

Write LinkedIn posts matching the voice of these examples:

Example 1: "A CFO pulled up a vendor's ROI calculator mid-meeting last month: 'They're telling us we'll save $2.3M year one.' I asked what the all-in cost was. Two inputs: seat licenses, implementation fee. We rebuilt it. Year-one savings: $340K. Still worth doing. Not $2.3M. Vendor math is a sales tool, not an analysis tool."

Example 2: "Most teams trying AI stall at the same place: they built a demo that impressed leadership, and six months later the demo is still a demo. The gap between 'works in a notebook' and 'runs our business' is where most projects die. It's a scoping problem, not a model problem."

Now write a new post about [TOPIC].

Two examples teach the AI the voice better than 10 paragraphs of describing it. This pattern is called few-shot prompting, and it's one of the most reliable quality lifts available.
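Mechanically, a few-shot prompt is just the examples stitched in ahead of the new request. A minimal sketch of that assembly; the function name and wording are illustrative:

```python
def few_shot_prompt(instruction: str, examples: list[str], topic: str) -> str:
    """Stitch 2-5 example posts ahead of the new request so the model can match their voice."""
    shots = "\n\n".join(
        f'Example {i}: "{ex}"' for i, ex in enumerate(examples, start=1)
    )
    return f"{instruction}\n\n{shots}\n\nNow write a new post about {topic}."

prompt = few_shot_prompt(
    instruction="Write LinkedIn posts matching the voice of these examples:",
    examples=[
        "Vendor math is a sales tool, not an analysis tool.",
        "It's a scoping problem, not a model problem.",
    ],
    topic="AI pilot budgeting",
)
```

Storing the examples as a plain list makes it easy to swap in fresh ones as your best-performing content changes.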


Common Beginner Mistakes

  • Typing your thought process instead of a prompt. 'Hmm I need to write an email to this prospect about our product' is a thought, not a prompt. Rewrite it as structured instructions.
  • Being vague about the task. 'Help me with X' gives vague output. Replace it with 'draft / analyze / summarize / compare' plus specifics.
  • Assuming the AI knows context. Your company, your audience, your voice: the AI knows none of it unless you say it. Paste the context in.
  • Skipping the output format. If you don't specify, you'll spend time reformatting. Specify structure up front.
  • Not iterating. First responses are rarely perfect. Good prompt users refine 2–4 times and save the final version as a template.
  • Trying to do too much at once. One prompt asking for analysis + recommendation + summary + draft produces mediocre versions of all four. Break it into separate prompts.

Want to Go Deeper?

This was the beginner framework. If you want the full version, including the 6-component structure, few-shot patterns, chain-of-thought, and 17 copyable example prompts across different business scenarios, read the pillar post:

The Anatomy of a Great Prompt

For a ready-to-paste setup for your team, see the Claude Project Starter Pack: 6 role-specific configurations you can use immediately.

