In a Thursday scoping call last month, a marketing director asked me to review her team's ChatGPT prompts for a weekly competitor-analysis workflow. I read through five of them. Every single one started with "Analyze the following competitor content and provide insights." No role. No context about her business. No output format. No examples of what "insights" meant to her.
The outputs she was getting back were generic, bland, and barely useful — so her team had stopped using the workflow. She thought the AI was the problem. It wasn't. The prompts were the problem, and the prompts were fixable in about twenty minutes.
This is where most teams are right now. Great AI output isn't about finding the right "magic words." It's about structure — giving the model what it needs to actually reason about your specific situation.
The Six Components of a Great Prompt
Think of a prompt less like a question and more like a mini specification document. You're giving a capable but context-free assistant everything they'd need to produce the right output on the first try.
1. Role — Who the AI is in this interaction
The role tells the model which knowledge base to prioritize, which tone to adopt, and how to weigh trade-offs. Missing role = generic, tentative output.
Weak: "Help me with my email." Strong: "You are an executive communications specialist who writes concise, direct emails for a VP of Operations in a B2B SaaS company."
2. Task — What you want done
Specific, actionable, scoped. Vague tasks produce vague output.
Weak: "Write something about our product." Strong: "Write a 150-word LinkedIn post announcing our new workflow automation feature, targeted at operations leaders evaluating AI tools."
3. Context — Background the model needs to succeed
The single biggest quality lever. The model doesn't know your company, your audience, your history, your product — unless you tell it.
Weak: "Write a cold email to a prospect." Strong: "Write a cold email to [TARGET: VP of Customer Success at a 200-person B2B SaaS company]. They recently posted on LinkedIn about struggles scaling their onboarding process. Our product [NAME] reduces onboarding time by 60% for companies at their scale. We have 3 case studies in SaaS with similar-sized customer bases."
4. Constraints — What NOT to do, and hard boundaries
Constraints prevent the most common failure modes. Word limits, forbidden phrases, topics to avoid, tone boundaries.
Weak: [no constraints, hope for the best] Strong: "Constraints: Under 150 words. No marketing-speak ('leverage,' 'unlock,' 'solutions'). No generic openers ('I hope this finds you well'). Include one specific observation about their situation."
5. Output Format — How the response should be structured
If you don't specify, the model picks — and usually picks something verbose and uncitable. Specify.
Weak: [model defaults to prose, maybe with headers if it feels like it] Strong: "Format: Two paragraphs. First paragraph: reference their LinkedIn post and connect to our offering. Second paragraph: propose a 20-minute call with a specific next step. Sign off with 'Ryan'. No subject line needed."
6. Examples — Show, don't just tell
Few-shot examples (2–5 demonstrations of the desired pattern) reliably lift output quality more than any amount of additional description. Especially for format, tone, or structure.
Weak: "Write in a conversational tone." Strong: [prompt followed by] "Example of the tone I want: 'Honestly, the first version we shipped was embarrassing. Here's what we changed and why it works now.' Example 2: 'The boring answer nobody wants to hear: measure the baseline first. Everything else is downstream.'"
A Complete Prompt, Assembled
Here's what a great prompt looks like when all six components are in place — the cold-email example from above, assembled:
You are an executive communications specialist who writes concise, direct
cold emails for a VP of Operations in a B2B SaaS company.
Task: Write a cold email to [TARGET: VP of Customer Success at a 200-person
B2B SaaS company].
Context: They recently posted on LinkedIn about struggles scaling their
onboarding process. Our product [NAME] reduces onboarding time by 60% for
companies at their scale. We have 3 case studies in SaaS with similar-sized
customer bases.
Constraints: Under 150 words. No marketing-speak ("leverage," "unlock,"
"solutions"). No generic openers ("I hope this finds you well"). Include
one specific observation about their situation.
Format: Two paragraphs. First paragraph: reference their LinkedIn post and
connect to our offering. Second paragraph: propose a 20-minute call with a
specific next step. Sign off with "Ryan". No subject line needed.
Example of the tone I want: "Honestly, the first version we shipped was
embarrassing. Here's what we changed and why it works now."
That's the whole framework. Now the examples.
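For teams that assemble prompts in code rather than pasting by hand, the six components map cleanly onto a small helper. A minimal sketch — the function and field names here are illustrative, not any library's API:

```python
def build_prompt(role, task, context, constraints, output_format, examples=None):
    """Assemble the six components into one prompt string.

    Components are emitted in a fixed order; empty sections are
    skipped so partial prompts still read cleanly.
    """
    sections = [
        ("", role),                      # the role leads, unlabeled
        ("Task", task),
        ("Context", context),
        ("Constraints", constraints),
        ("Output format", output_format),
    ]
    parts = [f"{label}: {text}" if label else text
             for label, text in sections if text]
    if examples:
        parts.append("Examples of what I want:\n" +
                     "\n".join(f"- {e}" for e in examples))
    return "\n\n".join(parts)


prompt = build_prompt(
    role="You are an executive communications specialist.",
    task="Write a 150-word LinkedIn post announcing our new feature.",
    context="Audience: operations leaders evaluating AI tools.",
    constraints="No emojis. No 'In today's landscape.'",
    output_format="Two short paragraphs, then one question.",
    examples=["Honestly, the first version we shipped was embarrassing."],
)
```

Keeping the components as separate arguments makes it obvious at the call site which one you forgot.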
17 Prompts You Can Copy and Modify
Each prompt below follows the six-component structure. Replace the bracketed fields with your specifics. Save the ones that fit your workflow — these are templates, not one-offs.
Content & Marketing
1. LinkedIn thought-leadership post
You are a marketing engineer writing short, opinionated LinkedIn posts for a
founder audience. Write a 120-150 word post on [TOPIC].
Context: I'm positioning as an AI tool builder for growth teams. My audience
is marketing and ops directors at mid-market and enterprise companies.
Structure:
- Open with a specific moment (a real scene, not a thesis)
- State the contrarian or non-obvious point
- Back it with a short proof or a specific detail
- Close with either a question or a single actionable takeaway
Constraints: No emojis. No "In today's landscape." No three-part
parallelism. Sentence length should vary visibly.
Example of the voice I want:
"A CFO pulled up a vendor's ROI calculator mid-meeting last month:
'They're telling us we'll save $2.3M year one.' I asked what the all-in cost
was in their model. Two inputs: seat licenses and implementation fee. We
rebuilt it with actual line items. Year-one savings: $340K. Still worth
doing. Not $2.3M. Vendor math is a sales tool, not an analysis tool."
2. Blog post outline from a rough idea
You are a senior content strategist at a B2B marketing agency. I want a
detailed outline for a long-form blog post on [TOPIC].
Context: Target audience is [AUDIENCE]. The post should rank for the
query "[TARGET QUERY]" and appeal to the reader's [primary pain point].
Output format:
- Working title (3 variants)
- Meta description under 155 chars
- H2 outline with 1-2 sentence descriptions for each section
- Suggested hook (first 2-3 sentences)
- Suggested closing CTA
- 3 related internal links we should include
Constraints: No generic listicle format. Each H2 should be a
non-obvious claim or counter-intuitive insight, not a topic label.
3. Email subject line A/B variants
You are a senior email copywriter. Generate 10 subject line variants for
an email to [AUDIENCE] about [OFFER].
Context: The email body's opening hook is: "[PASTE OPENING HOOK]". Open
rate benchmark for this list is [X]%.
Variants should span 5 angles:
- Curiosity / open loop
- Specific number or benchmark
- Direct / no-bullshit
- Problem-first (name their pain)
- Contrarian framing
Format: Numbered list. Each line: subject | 2-word tag for the angle
used | estimated open-rate lift vs. baseline (low/med/high).
4. Case study narrative from raw notes
You are a technical case study writer for a consulting firm. Turn the
rough project notes below into a structured case study narrative.
Context: The client is [ANONYMIZED ROLE/INDUSTRY]. Our firm builds custom
AI tools. The case study will appear on our website and in sales
conversations.
Notes:
[PASTE RAW NOTES]
Output structure:
- Problem (1 paragraph, specific)
- Approach (2-3 paragraphs, architectural)
- Results (3-5 numbered outcomes with metrics)
- Lesson transfer (1 paragraph: what this teaches us that applies to
future engagements)
Constraints: Do not invent metrics or details not in the notes.
Anonymize any client-identifying information. Voice should be
confident and technical, not hype.
Analysis & Research
5. Competitor content audit
You are a content strategist. Analyze the following competitor content
and provide a structured audit.
Competitor URL / content: [PASTE OR DESCRIBE]
Our positioning: [1-SENTENCE DESCRIPTION OF YOUR COMPANY'S ANGLE]
Our target audience: [DESCRIPTION]
Output format:
- Strengths (3 specific things they do well)
- Weaknesses (3 specific gaps or weak points)
- Differentiation opportunities (3 specific angles we could take
that they're not covering)
- SEO signals (target keywords, word count, structure patterns)
- Recommended response (what content we should produce in the
next 30 days to compete)
Constraints: Be specific. No generic observations like "good SEO"
or "clear voice" — tell me exactly what's working and why.
6. Meeting notes → action items
You are an operations chief of staff. Read the following meeting
transcript and produce a structured summary.
Transcript:
[PASTE TRANSCRIPT]
Output format:
## Decisions Made
- Decision, owner, date
## Action Items
- Action, owner, due date, dependency (if any)
## Open Questions
- Question, who needs to resolve, deadline
## Risks / Flags
- Risk, severity (low/med/high), who should know
Constraints: Do not invent owners or dates not stated in the meeting.
If something was discussed but no clear decision was made, put it under
Open Questions, not Decisions.
7. Strategic SWOT from context docs
You are a senior strategy consultant. Build a SWOT analysis for
[COMPANY/PRODUCT] based on the context provided.
Context:
[PASTE RELEVANT DOCS: product overview, market context, recent
performance data, competitive landscape summary]
Output format:
- Strengths (4-6 items, each with a specific data point or proof)
- Weaknesses (4-6 items, each with the business implication)
- Opportunities (4-6 items, each with an estimated impact tier: S/M/L)
- Threats (4-6 items, each with a time horizon: near-term, mid, long)
- Three strategic priorities that emerge from the intersection
Constraints: Tie every SWOT item to a specific data point from the
context, not generalities. Priorities must be mutually exclusive
(not "do X and its opposite").
8. Market research synthesis
You are a B2B market researcher. Synthesize the following research
sources into a single executive-ready briefing.
Sources:
[PASTE EXCERPTS FROM MULTIPLE ARTICLES / REPORTS]
Output format:
## Top 3 insights
- Each insight: 1-sentence claim + 1-sentence "why this matters to us"
## Supporting evidence
- Bulleted, with source attribution for each claim
## Uncertainties
- Where sources disagree or evidence is thin
## Recommended follow-ups
- 2-3 specific questions worth investigating further
Constraints: Cite which source each claim came from. If an insight is
your synthesis (not directly stated in a source), mark it as [derived]
rather than attributing it to a source.
Coding & Technical
9. Debugging a specific error
You are a senior engineer specializing in [LANGUAGE/FRAMEWORK]. Help me
debug this error.
Context:
- Stack: [Node.js 20, Next.js 15, TypeScript, Postgres, etc.]
- What I was trying to do: [GOAL]
- What I tried: [APPROACH]
- What happened: [ERROR OUTPUT]
Code:
[PASTE RELEVANT CODE — MINIMAL REPRO IF POSSIBLE]
Output format:
1. Most likely root cause (one sentence)
2. Why this happens (2-3 sentences of explanation)
3. Recommended fix (code snippet)
4. 1-2 other possible causes if the first doesn't resolve it
Constraints: Do not suggest rewriting the whole file. Give me the
minimal change. If my approach is fundamentally wrong, say so in the
root-cause section rather than silently fixing it.
10. Code review
You are a senior engineer doing a code review. Review the following diff
and give focused feedback.
Context: This code is for [FEATURE DESCRIPTION]. Our codebase uses
[PATTERNS / CONVENTIONS]. My experience level is [level].
Diff:
[PASTE DIFF]
Output format:
## Must fix (blocking)
- Issue, location, why it's blocking, suggested fix
## Should fix (strong recommendation)
- Issue, location, rationale
## Consider (nit / style / optional)
- Issue, location
## What's good
- 1-2 things the diff does well (don't invent these)
Constraints: Focus on correctness, security, and readability. Don't
bikeshed formatting that a linter would catch. If the diff is small
and clean, say so — don't manufacture issues.
11. Explaining unfamiliar code
You are a senior engineer tutoring a junior developer. Explain what the
following code does, at the level appropriate for someone who knows
[LANGUAGE] basics but hasn't seen [SPECIFIC PATTERN / LIBRARY] before.
Code:
[PASTE CODE]
Output format:
1. One-sentence summary of what the whole thing does
2. Line-by-line (or block-by-block) explanation
3. What would break if you removed each key piece
4. Where this pattern is typically used (1-2 real-world examples)
Constraints: Don't skip the "why." Explain the reasoning behind the
patterns, not just the mechanics. If there's a common pitfall, call it
out.
Strategy & Decision-Making
12. Build-vs-buy framework for a specific capability
You are a strategy consultant. Analyze whether we should build or buy
[SPECIFIC CAPABILITY].
Context:
- Our company: [DESCRIPTION]
- The capability: [DETAILED DESCRIPTION]
- Current off-the-shelf options: [LIST IF KNOWN]
- Our technical capacity: [TEAM SIZE / STACK]
- Budget envelope: [RANGE]
- Timeline pressure: [URGENT / MODERATE / FLEXIBLE]
Output format:
## Recommendation
[Build / Buy / Hybrid], with a one-sentence headline reason.
## Why
3-5 specific factors supporting the recommendation.
## Key assumptions
3-4 things this recommendation depends on.
## When this is wrong
Name the specific situation where the recommendation would flip.
## Next 3 steps
Concrete actions to validate the call.
Constraints: Don't recommend "it depends" without naming the specific
factors. If the decision is genuinely close, say so and explain the
tradeoff rather than picking arbitrarily.
13. Pricing strategy exploration
You are a senior pricing strategist for B2B SaaS. Help me think through
pricing for [PRODUCT].
Context:
- Product: [DESCRIPTION]
- Target customer: [ICP]
- Value delivered: [SPECIFIC BUSINESS OUTCOMES + TYPICAL DOLLAR IMPACT]
- Competitive pricing: [WHAT YOU KNOW]
- Our goal with this pricing: [MAX REVENUE / MAX ADOPTION / MAX
PERCEIVED VALUE / ENTERPRISE EXPANSION / ETC.]
Output format:
## 3 candidate pricing models
Each with: structure, typical price range, pros, cons, best-fit customer.
## Recommended model
One paragraph on why, tied to our stated goal.
## Key tests before committing
2-3 things to validate with customers or data.
## Risks
What could go wrong with the recommended approach.
Constraints: Don't default to per-seat pricing. Consider value-based,
outcome-based, and hybrid models seriously.
14. Quarterly planning brainstorm
You are a strategic planning partner. Help me set priorities for next
quarter.
Context:
- Company stage: [STAGE]
- Last quarter's outcomes: [WHAT HIT, WHAT MISSED]
- Current biggest constraint: [TIME / PEOPLE / MONEY / ATTENTION]
- Non-negotiable commitments: [LIST]
- Open bets: [LIST OF POTENTIAL PROJECTS]
Output format:
## Top 3 recommended priorities
Each with: what it is, why it's top-3, what success looks like, what
it requires.
## Explicit NOT-doing list
Things to deprioritize this quarter, with the reasoning.
## Watch list
Things we're not doing this quarter but should track.
## Questions to validate with the team
2-3 things I should test with the team before committing.
Constraints: Force a real NOT-doing list. Don't let me say yes to
everything. If my open bets are inconsistent, flag the conflict.
Admin & Productivity
15. Pre-meeting brief for a call
You are a chief of staff. Prepare me for a meeting with [COUNTERPART].
Context:
- Meeting purpose: [WHAT IT'S ABOUT]
- Who's attending: [LIST]
- What I need from this meeting: [SPECIFIC OUTCOME]
- Relevant background: [PASTE CONTEXT DOCS OR DESCRIBE]
Output format:
## Meeting goal in one sentence
What I'm trying to leave with.
## 3-5 questions I should ask
Structured from open-ended to specific.
## 2-3 likely objections or pushbacks
What the counterpart might raise, with suggested responses.
## The one thing not to forget
The most important point to make or confirm.
## Post-meeting action items
Assuming the meeting goes well, what I'll need to follow up on.
Constraints: Don't invent facts about the counterpart. If you don't
have info on them, say so and suggest where to find it.
16. Weekly status update
You are my chief of staff. Turn my rough notes into a polished weekly
status update for [AUDIENCE: team / exec / client].
Rough notes:
[PASTE BULLETED NOTES OR STREAM OF CONSCIOUSNESS]
Output format:
## This week's wins (3-5 bullets)
## Progress on goals (short paragraph per goal)
## Blockers / help needed (concrete asks)
## Next week's focus (top 3)
## One metric worth watching
Constraints: Match the tone to the audience (more formal for exec /
client, more casual for team). Don't embellish — if my notes are thin
on a section, make it a short section rather than padding.
17. Email triage
You are my email assistant. Read the following emails and triage them.
Emails:
[PASTE EMAILS]
Output format:
For each email:
- Subject line / sender
- Urgency: today / this week / this month / no action needed
- Recommended action: reply / delegate / archive / flag for later
- If reply: 2-sentence suggested response
- If delegate: who to delegate to and what to say
## Summary
Top 3 emails that need my personal attention this morning.
Constraints: Default to archive if something doesn't clearly need my
input. I'd rather you err on the side of "no action" than "worth your
time."
Structural Patterns That Compound
Beyond the six components, three techniques reliably lift output quality on complex tasks.
Few-shot prompting: show examples
The single biggest quality lift for structured or stylistic tasks. Include 2–5 examples of input-output pairs matching the pattern you want, then ask for the new case.
When it helps most: classification tasks, format-specific outputs, brand voice matching, structured data extraction.
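Concretely, the demonstration pairs can be kept as data and rendered ahead of the new case, so the same shots get reused across runs. A minimal sketch — the Input/Output labels are one common convention, not a requirement:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Render 2-5 demonstration pairs ahead of the new case.

    `examples` is a list of (input, output) tuples showing the
    pattern the model should continue.
    """
    shots = "\n\n".join(
        f"Input: {inp}\nOutput: {out}" for inp, out in examples
    )
    return f"{instruction}\n\n{shots}\n\nInput: {new_input}\nOutput:"


# Classification is a classic few-shot use case: two labeled
# tickets establish the pattern, the third is the new case.
prompt = few_shot_prompt(
    "Classify each support ticket as low, medium, or high urgency.",
    [
        ("Password reset link not arriving", "medium"),
        ("Production API returning 500s for all customers", "high"),
    ],
    "Typo on the pricing page",
)
```

Ending the prompt with a bare `Output:` invites the model to complete the pattern rather than restate the task.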
Chain of thought: ask for reasoning before the answer
Adding "Think step by step before answering" or "Walk through your reasoning, then give your final answer" significantly improves accuracy on math, logic, and multi-step tasks. The model uses the reasoning steps as scaffolding.
Use it for: strategic decisions, math, multi-step analysis, anything where "the answer" depends on reasoning paths.
Don't use it for: simple factual lookups, quick categorization, creative generation where you want speed.
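Chain of thought pairs well with an explicit answer marker, so downstream code can discard the reasoning scaffold and keep only the conclusion. A minimal sketch — the "Answer:" convention here is our own choice, not a standard:

```python
def with_reasoning(prompt):
    """Wrap a prompt with a chain-of-thought instruction and an
    explicit marker for the final answer."""
    return (prompt + "\n\nThink step by step. Show your reasoning first, "
            "then put your final answer on a line starting with 'Answer:'.")


def extract_answer(response):
    """Pull the final answer out of a chain-of-thought response,
    ignoring the reasoning above it."""
    for line in reversed(response.splitlines()):
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return response.strip()  # no marker found: return everything
```

Scanning from the bottom up means a response that mentions "Answer:" mid-reasoning still resolves to the model's last, final answer.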
Explicit output templates: specify the structure
Instead of hoping the model guesses the format you want, paste the output template directly into the prompt with placeholders. The model fills in the placeholders.
Example: Instead of "Summarize this meeting," paste:
## Decisions
[FILL IN]
## Action items
[FILL IN]
## Open questions
[FILL IN]
This is the same pattern we use in the Claude Project Starter Pack — structured output templates in the system prompt ensure every response is citable and scannable.
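The template-in-prompt pattern can also be enforced mechanically: embed the template, then check that the response kept every heading and filled every placeholder. A sketch under the assumption that responses come back as plain markdown text:

```python
TEMPLATE = """## Decisions
[FILL IN]

## Action items
[FILL IN]

## Open questions
[FILL IN]"""


def template_prompt(task, template):
    """Embed an explicit output template so the model fills
    placeholders instead of inventing its own structure."""
    return (f"{task}\n\nRespond using exactly this template, "
            f"replacing each [FILL IN]:\n\n{template}")


def check_response(response, template):
    """True if the response kept every template heading and left
    no [FILL IN] placeholder unfilled."""
    headings = [ln for ln in template.splitlines() if ln.startswith("## ")]
    return all(h in response for h in headings) and "[FILL IN]" not in response
```

A response that fails the check can simply be retried with the same prompt, which is cheaper than parsing free-form prose.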
Common Mistakes That Kill Prompt Quality
Each of the most common failures maps to a skipped component from the framework above:
- No role, so the model falls back to a generic, tentative voice.
- A vague task ("write something about our product") that guarantees vague output.
- Assuming the model knows your company, audience, and product. It doesn't until you tell it.
- No constraints, so the output drifts into marketing-speak and generic openers.
- No output format, so the model picks one, usually something verbose and hard to scan.
- Telling instead of showing: describing a tone in the abstract when two short examples would lock it in.
How to Build Your Own Template Library
The teams that get the most value from AI don't rewrite prompts from scratch every time — they build a library of templates for recurring tasks. Here's the shortest path to having one:
- Identify 5 recurring tasks your team does weekly or more — reports, analyses, drafts, reviews.
- Write one structured prompt for each using the six-component framework. Save them in a shared location (Notion page, Google Doc, or a Claude Project).
- Test and iterate each template 3–5 times. Fix whatever consistently comes out wrong.
- Add few-shot examples to the 2–3 templates where tone or format matters most. Pull examples from real past outputs.
- Document when to use each one — a 1-line description per template so teammates grab the right one.
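The five steps above can be sketched as a tiny shared library: templates keyed by name, each with a one-line "when to use" note, and a renderer that refuses to send a prompt with unfilled bracketed fields. Everything here (names, template text) is illustrative:

```python
import re

# One entry per recurring task; "when" is the 1-line description
# teammates read to grab the right template.
LIBRARY = {
    "status_update": {
        "when": "Turning rough weekly notes into a polished update.",
        "template": ("You are my chief of staff. Turn my rough notes into "
                     "a weekly status update for [AUDIENCE].\n\n"
                     "Rough notes:\n[NOTES]"),
    },
}


def render(name, **fields):
    """Fill a template's [BRACKETED] fields; fail loudly if any
    field is left unfilled, so half-configured prompts never ship."""
    text = LIBRARY[name]["template"]
    for key, value in fields.items():
        text = text.replace(f"[{key.upper()}]", value)
    leftover = re.findall(r"\[[A-Z ]+\]", text)
    if leftover:
        raise ValueError(f"Unfilled fields: {leftover}")
    return text
```

The hard failure on unfilled fields is the point: a bracketed placeholder that leaks into a real prompt produces exactly the generic output this article is trying to eliminate.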
Teams that do this see AI output quality improve dramatically within a month. No special tools needed. Just deliberate structure.
Want to apply this across your team's AI workspaces? The Claude Project Starter Pack has 6 role-specific starter configurations you can paste and customize; it's the free DIY version of everything above. For a more systematic setup — your team's Claude, ChatGPT, or Gemini configured properly, with structured templates, context libraries, and governance baked in — that's the LLM Setup & Context Engineering service. Book a discovery call if you'd like to talk it through.