Prompt Engineering Patterns: A Comprehensive Guide
Master essential prompt engineering patterns for building reliable LLM applications with practical examples and production-ready techniques.
TL;DR
Prompt engineering patterns are reusable techniques for crafting effective LLM prompts. Master Chain of Thought for reasoning, Few-Shot for consistency, and Output Structuring for reliable parsing.
When to Use Prompt Engineering Patterns
Prompt engineering patterns are essential when:
- Building production LLM applications that need consistent outputs
- Solving complex reasoning tasks step-by-step
- Extracting structured data from unstructured text
- Improving accuracy and reliability of model responses
Pattern 1: Chain of Thought (CoT)
Guide the model to reason step-by-step before providing an answer:
```python
def chain_of_thought_prompt(question: str) -> str:
    """
    Create a Chain of Thought prompt for complex reasoning.
    Forces the model to show its work before answering.
    """
    return f"""Solve this problem step by step.
Question: {question}
Let's approach this systematically:
1. First, identify what we know
2. Then, determine what we need to find
3. Finally, work through the solution
Reasoning:"""


# Example usage
question = "If a store offers 20% off, then an additional 15% off the sale price, what's the total discount?"
prompt = chain_of_thought_prompt(question)
# Model output will show: 20% off leaves 80%, then 15% off 80% = 68%, so total discount is 32%
```

Pattern 2: Few-Shot Learning
Provide examples to establish the expected format and behavior:
```python
def few_shot_classifier(text: str, examples: list[dict]) -> str:
    """
    Use few-shot examples for consistent classification.
    Examples should cover edge cases and desired format.
    """
    example_str = "\n\n".join([
        f"Text: {ex['text']}\nCategory: {ex['category']}"
        for ex in examples
    ])
    return f"""Classify the following text into a category.
{example_str}
Text: {text}
Category:"""


# Example
examples = [
    {"text": "The GPU crashed during training", "category": "hardware"},
    {"text": "Model accuracy dropped after update", "category": "model"},
    {"text": "API rate limit exceeded", "category": "infrastructure"},
]
prompt = few_shot_classifier("CUDA out of memory error", examples)
# Output: hardware
```

Pattern 3: Output Structuring with JSON
Force structured outputs for reliable downstream parsing:
```python
import json
from typing import TypedDict


class ExtractedEntity(TypedDict):
    name: str
    type: str
    confidence: float


def structured_extraction_prompt(text: str) -> str:
    """
    Extract structured data from text with JSON output.
    Include schema in prompt for better compliance.
    """
    return f"""Extract entities from the following text and return as JSON.
Text: {text}
Return a JSON array with this exact schema:
[
  {{
    "name": "entity name",
    "type": "person|organization|location|product",
    "confidence": 0.0-1.0
  }}
]
JSON Output:"""


# Example
text = "Microsoft CEO Satya Nadella announced new AI features in Seattle."
prompt = structured_extraction_prompt(text)
# Output: [{"name": "Microsoft", "type": "organization", "confidence": 0.95}, ...]
```

Remember: Always validate structured outputs with a JSON parser. LLMs can occasionally produce malformed JSON, so implement retry logic with error handling in production.
Pattern 4: System Role Definition
Set clear boundaries and behaviors using system prompts:
```python
def create_system_prompt(role: str, constraints: list[str]) -> dict:
    """
    Create a well-defined system prompt for consistent behavior.
    Constraints help prevent unwanted outputs.
    """
    constraints_str = "\n".join(f"- {c}" for c in constraints)
    return {
        "role": "system",
        "content": f"""You are a {role}.
Your constraints:
{constraints_str}
Always stay in character and follow these rules strictly.""",
    }


# Example
system = create_system_prompt(
    role="senior Python code reviewer",
    constraints=[
        "Only review code, don't write new code",
        "Focus on security, performance, and readability",
        "Be constructive and specific",
        "Rate severity: low, medium, high, critical",
    ],
)
```

Common Mistakes to Avoid
- Vague instructions: Be specific about format, length, and style
- Missing examples: Few-shot learning dramatically improves consistency
- No error handling: LLM outputs can be unpredictable—always validate
- Ignoring temperature: Lower temperature (0.0-0.3) for structured tasks
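To make the temperature point concrete, a small helper can bake the choice into how requests are built. The parameter names below follow the common chat-completions convention; adapt them to your provider's SDK:

```python
def build_request_params(prompt: str, structured: bool) -> dict:
    """Build chat request parameters, lowering temperature for structured tasks."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        # Near-deterministic sampling for JSON extraction and classification;
        # higher temperature only for open-ended generation.
        "temperature": 0.2 if structured else 0.8,
    }
```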
Advanced Techniques to Explore
- Self-Consistency: Run multiple CoT paths and vote on the answer
- ReAct Pattern: Combine reasoning with tool/action execution
- Tree of Thoughts: Explore multiple reasoning branches in parallel
- Retrieval Augmented Generation (RAG): Ground responses in external knowledge
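Of these, Self-Consistency is the easiest to prototype: sample several Chain of Thought completions and take a majority vote over their final answers. In this sketch, the `answers` list stands in for the final answers extracted from repeated model calls:

```python
from collections import Counter


def self_consistency_vote(answers: list[str]) -> str:
    """Return the most common final answer across sampled CoT paths."""
    votes = Counter(a.strip().lower() for a in answers)
    winner, _count = votes.most_common(1)[0]
    return winner
```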
Conclusion
Mastering these prompt engineering patterns will significantly improve your LLM applications. Start with Chain of Thought for reasoning tasks, use Few-Shot for consistency, and always structure outputs for production reliability!