- Core Principles of Prompt Engineering
Master the five foundational principles that make AI prompts effective: clarity, specificity, context, constraints, and examples.
The quality of AI output is directly tied to the quality of your input. Mastering these five core principles will dramatically improve your results with any AI coding tool.
The Golden Rule
"Show your prompt to a colleague with minimal context on the task and ask them to follow it. If they'd be confused, the AI will be too." — Anthropic Documentation
Think of AI as a brilliant but new employee who lacks context on your norms and workflows. The more precisely you explain what you want, the better the result.
Principle 1: Clarity
Be explicit about what you want. Ambiguous prompts force the AI to guess, often incorrectly.
Vague (Bad):
make a login form
Clear (Good):
Create a login form component in React with TypeScript:
- Email and password fields with labels
- Client-side validation (email format, password min 8 chars)
- Loading state during submission
- Error message display area
- Submit button that disables during loading
- Use Tailwind CSS for styling
Why Clarity Matters
When instructions are precise, AI models can interpret them without confusion. Each specific detail you add removes one more decision the AI has to make on its own.
Principle 2: Specificity
Where clarity removes ambiguity about intent, specificity supplies the technical details. Every detail you omit is one the AI must guess, and wrong guesses mean errors, extra iterations, and rework.
Always specify:
- Programming language and version
- Framework or library preferences
- Error handling expectations
- Output format requirements
- Performance requirements
Vague:
write a function to process data
Specific:
Write a TypeScript function that:
- Takes an array of user objects with properties: id, name, email, createdAt
- Filters users created in the last 30 days
- Sorts them by name alphabetically (case-insensitive)
- Returns only id and name properties
- Handles empty arrays gracefully
- Includes JSDoc comments
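For reference, a response satisfying this spec might look like the sketch below. The `recentUsers` name and the exact `User` shape are illustrative assumptions, not part of the prompt:

```typescript
interface User {
  id: string;
  name: string;
  email: string;
  createdAt: Date;
}

/**
 * Returns the id and name of users created in the last 30 days,
 * sorted alphabetically by name (case-insensitive).
 * An empty input array yields an empty result.
 */
function recentUsers(users: User[]): Array<{ id: string; name: string }> {
  // 30 days expressed in milliseconds, measured back from now
  const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000;
  return users
    .filter((u) => u.createdAt.getTime() >= cutoff)
    .sort((a, b) => a.name.toLowerCase().localeCompare(b.name.toLowerCase()))
    .map(({ id, name }) => ({ id, name }));
}
```

Notice how each bullet in the prompt maps to one line of the pipeline: that one-to-one traceability is what a specific prompt buys you.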
Principle 3: Context
Help the AI understand not just what you want, but why and how it fits into your project.
Essential Context Elements
| Context Type | Examples |
|---|---|
| Tech Stack | "Using Next.js 14 with App Router" |
| Architecture | "This is a service layer function" |
| Conventions | "We use camelCase for functions" |
| Dependencies | "We're using Zod for validation" |
| Purpose | "This will be called by the checkout flow" |
Example with rich context:
Context: Building a SaaS dashboard using Next.js 14, TypeScript, and Supabase.
This component lives in the /components/analytics folder.
We follow the existing pattern of using React Query for data fetching.
Task: Create a chart component that displays user signups over time.
It should match the style of the existing MetricsCard component.
What NOT to Include
- Unrelated code or files
- Sensitive information (API keys, credentials)
- Entire codebases (overwhelming)
- Deprecated or unused code
Principle 4: Constraints
Constraints focus the AI's output and prevent unwanted behaviors. They define boundaries and limitations.
Types of Constraints:
| Type | Example |
|---|---|
| Must do | "Use async/await pattern" |
| Must not do | "Do NOT use any deprecated methods" |
| Should do | "Prefer composition over inheritance" |
| Formatting | "Return response as JSON" |
| Performance | "Must handle 10k+ items efficiently" |
Example with constraints:
Generate a REST API endpoint with the following constraints:
Must:
- Use Express.js with TypeScript
- Follow RESTful naming conventions
- Include input validation using Zod
- Return appropriate HTTP status codes
Must NOT:
- Use deprecated Express methods
- Expose internal error details to clients
- Allow unauthenticated access
Should:
- Include request logging
- Handle edge cases gracefully
Principle 5: Examples
Showing is more powerful than telling. Examples demonstrate exactly what you want, especially for formatting, style, and edge cases.
Few-Shot Pattern
Provide 2-3 examples to establish a pattern:
Convert API responses to our standard format.
Example 1:
Input: { user: "john", age: 25 }
Output: { success: true, data: { user: "john", age: 25 }, timestamp: "..." }
Example 2:
Input: null
Output: { success: false, error: "Invalid input", timestamp: "..." }
Now convert this:
Input: { product: "laptop", price: 999 }
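A good model will infer the rule the two examples establish. To see that the pattern really is unambiguous, here is one possible implementation of it; the `toStandardFormat` name and `ApiResult` type are illustrative, not from the prompt:

```typescript
interface ApiResult {
  success: boolean;
  data?: unknown;
  error?: string;
  timestamp: string;
}

// Wraps a raw API response in the envelope the two examples establish.
// Null/undefined input is treated as invalid, matching Example 2.
function toStandardFormat(input: unknown): ApiResult {
  const timestamp = new Date().toISOString();
  if (input === null || input === undefined) {
    return { success: false, error: "Invalid input", timestamp };
  }
  return { success: true, data: input, timestamp };
}
```

If you can write a small function like this from your examples alone, the examples are consistent enough for the AI to follow; if you can't, the pattern is underspecified.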
When to Use Examples
- Specific output formats needed
- Custom conventions to follow
- Edge case handling
- Style consistency across outputs
Putting It All Together
Here's a prompt that uses all five principles:
[CONTEXT]
Building a user management API for our Next.js SaaS app.
Using TypeScript, Prisma with PostgreSQL, and Zod for validation.
This endpoint will be called from the admin dashboard.
[TASK]
Create an API route handler for updating user profiles.
[REQUIREMENTS]
- Accept PATCH requests to /api/users/[id]
- Validate: name (2-50 chars), email (valid format), role (admin|user|viewer)
- Only admins can change roles
- Return updated user object on success
[CONSTRAINTS]
- Must use Zod for validation
- Must check authentication via getServerSession
- Must NOT expose password field in response
- Should return appropriate HTTP status codes
[EXAMPLE RESPONSE]
Success (200):
{ "user": { "id": "...", "name": "...", "email": "...", "role": "..." } }
Validation Error (400):
{ "error": "Validation failed", "details": [...] }
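To make the [REQUIREMENTS] section concrete, here is a dependency-free sketch of the validation rules that prompt encodes. The prompt itself asks for Zod; this plain-TypeScript version, with the hypothetical `validateProfileUpdate` helper, only illustrates the checks the AI is expected to produce:

```typescript
type Role = "admin" | "user" | "viewer";

interface ProfileUpdate {
  name?: string;
  email?: string;
  role?: Role;
}

// Checks the fields the prompt specifies: name 2-50 chars,
// well-formed email, role one of admin|user|viewer.
// Returns a list of error messages; an empty list means valid.
function validateProfileUpdate(input: ProfileUpdate): string[] {
  const errors: string[] = [];
  if (
    input.name !== undefined &&
    (input.name.length < 2 || input.name.length > 50)
  ) {
    errors.push("name must be 2-50 characters");
  }
  // Simple shape check only; a real endpoint would use Zod's z.string().email()
  if (input.email !== undefined && !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    errors.push("email must be a valid address");
  }
  if (input.role !== undefined && !["admin", "user", "viewer"].includes(input.role)) {
    errors.push("role must be admin, user, or viewer");
  }
  return errors;
}
```

Because the prompt spelled out exact bounds and allowed values, there is essentially one correct set of checks, which is exactly the point of combining the five principles.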
Prompting Reasoning Models
Modern "thinking" models (such as Claude with extended thinking or OpenAI's reasoning models) work differently from standard models: they reason internally before responding.
What Changes
| Standard Models | Reasoning Models |
|---|---|
| Need "think step by step" | Already think internally |
| Benefit from chain-of-thought prompts | May overthink simple tasks |
| Quick responses | Slower, more thorough |
| Good for simple tasks | Best for complex problems |
Best Practices for Reasoning Models
Do:
- Present complex multi-step problems
- Ask for architectural decisions
- Request analysis of trade-offs
- Use for debugging complex issues
Don't:
- Add unnecessary "think step by step" instructions
- Use for simple autocomplete tasks
- Expect faster responses
- Overcomplicate simple requests
When to Use Which
Simple task (autocomplete, formatting):
→ Standard model (faster, cheaper)
Complex task (architecture, debugging):
→ Reasoning model (more thorough)
Quick Reference Checklist
Before sending a prompt, verify:
- Clear: Could a new teammate understand this?
- Specific: Are language, framework, and requirements explicit?
- Contextual: Does AI know where this fits in the project?
- Constrained: Are boundaries and limitations defined?
- Exemplified: Would an example help clarify expectations?
- Model-appropriate: Is this the right model for the task complexity?
Practice Exercise
Take this vague prompt and improve it using all five principles:
Original:
make a component that shows user info
Think about:
- What kind of component? (React, Vue, etc.)
- What user info should it display?
- How should it be styled?
- What props does it need?
- What states should it handle?
Summary
- Clarity: Be explicit, remove ambiguity
- Specificity: Include all relevant details
- Context: Explain the why and where
- Constraints: Set clear boundaries
- Examples: Show the desired pattern
Master these principles and you'll see immediate improvement in AI-generated code quality.
Next Steps
Now that you understand the core principles, let's dive into specific techniques starting with few-shot prompting—one of the most powerful ways to guide AI output.