Beginner · 15 min

Learn to recognize and avoid the most common prompt engineering mistakes that lead to poor AI output.

# Common Prompt Engineering Mistakes

Even experienced developers make prompt engineering mistakes. Learning to recognize these patterns helps you write better prompts and debug issues when AI output falls short.

## Mistake 1: Vague Instructions

### The Problem

Giving instructions that are open to interpretation forces the AI to guess what you want.

**Bad**:

```
make it better
```

```
fix the bug
```

```
clean up this code
```

### The Fix

Be specific about what "better," "fixed," or "clean" means to you.

**Good**:

```
Refactor this function to:
1. Reduce cyclomatic complexity from 15 to under 5
2. Extract the validation logic into a separate function
3. Add TypeScript types to all parameters
4. Follow our naming convention (camelCase for functions)
```
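If the refactoring prompt above were applied to a small pricing function, the result might look like the sketch below. The names (`validateOrderInput`, `calculateOrderTotal`) are invented for illustration; the point is the extracted validation helper and explicit types.

```typescript
// Hypothetical result of the refactoring prompt above: validation
// extracted into its own typed helper, camelCase names throughout.
interface OrderInput {
  quantity: number;
  unitPrice: number;
}

// Extracted validation logic (requirement 2), fully typed (requirement 3)
function validateOrderInput(input: OrderInput): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(input.quantity) || input.quantity <= 0) {
    errors.push('quantity must be a positive integer');
  }
  if (input.unitPrice < 0) {
    errors.push('unitPrice must be non-negative');
  }
  return errors;
}

// Main function stays small and simple (requirement 1)
function calculateOrderTotal(input: OrderInput): number {
  const errors = validateOrderInput(input);
  if (errors.length > 0) {
    throw new Error(errors.join('; '));
  }
  return input.quantity * input.unitPrice;
}
```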

## Mistake 2: Missing Context

### The Problem

Assuming the AI knows your project, conventions, or requirements.

**Bad**:

```
Add the login endpoint
```

(What framework? What auth method? What response format?)

### The Fix

Provide relevant context explicitly.

**Good**:

```
Add a login endpoint to our Express.js API.

Context:
- We use TypeScript
- Authentication via JWT
- Passwords hashed with bcrypt
- Response format: { success: boolean, data?: T, error?: string }
- Validation using Zod

Endpoint requirements:
- POST /api/auth/login
- Accept email and password
- Return JWT token on success
- Return appropriate error messages on failure
```
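For reference, the handler logic this prompt pins down could be sketched roughly as follows. Express, bcrypt, and JWT are left out so the example stays dependency-free; `verifyPassword` and `issueToken` are stand-ins for `bcrypt.compare` and `jwt.sign`, and the dependency-injection shape is just one possible design, not the prompt's required output.

```typescript
// The response envelope from the prompt's context section
type ApiResponse<T> = { success: boolean; data?: T; error?: string };

// Stubbed dependencies so the core logic is testable in isolation
interface LoginDeps {
  findUserByEmail: (email: string) => { id: string; passwordHash: string } | null;
  verifyPassword: (password: string, hash: string) => boolean; // bcrypt.compare stand-in
  issueToken: (userId: string) => string; // jwt.sign stand-in
}

function login(
  email: string,
  password: string,
  deps: LoginDeps
): ApiResponse<{ token: string }> {
  const user = deps.findUserByEmail(email);
  if (!user || !deps.verifyPassword(password, user.passwordHash)) {
    // Same message for both failure modes so we don't leak which emails exist
    return { success: false, error: 'Invalid email or password' };
  }
  return { success: true, data: { token: deps.issueToken(user.id) } };
}
```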

## Mistake 3: Conflicting Requirements

### The Problem

Giving instructions that contradict each other.

**Bad**:

```
Write a simple function that handles all edge cases comprehensively.
```

```
Make it fast but also readable and also fully documented.
```

### The Fix

Prioritize requirements and acknowledge trade-offs.

**Good**:

```
Write a function with these priorities (in order):
1. Correctness - handle the main use cases correctly
2. Readability - clear naming and structure
3. Edge cases - handle null/undefined inputs

Note: Don't over-optimize for performance at the cost of readability.
```
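A plausible result of the prioritized prompt, using an invented `splitTags` helper: the main path stays readable, and the null/undefined edge case is handled up front.

```typescript
// Sketch of what the prioritized prompt above might produce: correct
// for the main cases, readable, and defensive about missing input.
function splitTags(raw: string | null | undefined): string[] {
  // Edge cases first (priority 3): treat missing input as "no tags"
  if (raw == null) return [];

  // Main path (priority 1): split, trim, and drop empty entries
  return raw
    .split(',')
    .map((tag) => tag.trim())
    .filter((tag) => tag.length > 0);
}
```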

## Mistake 4: Too Much at Once

### The Problem

Asking for multiple unrelated things in a single prompt.

**Bad**:

```
Create a user registration form, set up the database schema,
write the API endpoints, add validation, create tests, and
document everything.
```

### The Fix

Break complex tasks into focused steps.

**Good**:

```
Let's build user registration in steps.

Step 1 (this prompt): Create the database schema for users.

Requirements:
- Store email, password hash, name, created_at
- Include indexes for email lookups
- Use PostgreSQL with Prisma

After this, we'll handle the API endpoint separately.
```

## Mistake 5: Not Specifying Output Format

### The Problem

Leaving the output format undefined leads to inconsistent results.

**Bad**:

```
Give me some validation for this form.
```

### The Fix

Explicitly state the expected format.

**Good**:

```
Create validation rules for this form.

Return as:
1. A Zod schema object
2. TypeScript type inferred from the schema
3. Example validation error messages

Format the response with clear code blocks for each part.
```

## Mistake 6: Assuming AI Remembers Context

### The Problem

In new conversations, assuming the AI remembers previous discussions.

**Bad**:

```
Update the function we discussed earlier.
```

### The Fix

Always include relevant context, even if you've mentioned it before.

**Good**:

````
Here's the calculateTotal function from our earlier discussion:

```typescript
function calculateTotal(items: Item[]): number {
  // current implementation
}
```

Please update it to also apply tax based on the user's region.
````
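One possible implementation of the updated function, with an invented region-to-rate table; in practice the tax rates would come from configuration or a tax service rather than a hard-coded map.

```typescript
interface Item {
  price: number;
  quantity: number;
}

// Hypothetical rates for illustration only
const TAX_RATES: Record<string, number> = { US: 0.08, DE: 0.19, JP: 0.1 };

function calculateTotal(items: Item[], region: string): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  const rate = TAX_RATES[region] ?? 0; // unknown regions: no tax applied
  return subtotal * (1 + rate);
}
```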

## Mistake 7: Not Providing Examples

### The Problem

Expecting AI to match your style without showing what that style looks like.

**Bad**:

```
Write tests in our style.
```

### The Fix

Show examples of the expected style.

**Good**:

````
Write tests following our existing style:

```typescript
describe('UserService', () => {
  describe('createUser', () => {
    it('should create a user with valid data', async () => {
      // Arrange
      const userData = createMockUser();

      // Act
      const result = await userService.createUser(userData);

      // Assert
      expect(result).toMatchObject({
        id: expect.any(String),
        email: userData.email,
      });
    });
  });
});
```

Follow this pattern:
- Use describe blocks for class and method
- Use Arrange/Act/Assert comments
- Use factory functions for test data
````

## Mistake 8: Ignoring AI Suggestions

### The Problem

Blindly accepting or rejecting AI output without evaluation.

### The Fix

Treat AI output as a starting point:
- **Review** for correctness and fit
- **Test** generated code
- **Iterate** with feedback when needed

```
That's close, but please adjust:
- The function should throw on invalid input, not return null
- Use our custom HttpError class instead of generic Error
- Add the missing JSDoc @throws annotation
```

## Mistake 9: Over-Engineering Prompts

### The Problem

Adding unnecessary complexity to prompts.

**Bad**:

```
As an AI language model trained on code, leveraging your neural network's capacity for pattern recognition and your training on millions of code repositories, please utilize your transformer architecture to generate a function that adds two numbers.
```

### The Fix

Keep prompts simple and direct.

**Good**:

```
Write a TypeScript function that adds two numbers. Include input validation and return type annotation.
```

## Mistake 10: Not Iterating

### The Problem

Expecting perfect results on the first try and giving up when they're not.

### The Fix

Use iterative refinement:

**First attempt**:

```
Create a debounce utility function.
```

**Refinement 1**:

```
Good start. Please add:
- TypeScript generics for proper typing
- Support for leading/trailing edge options
- Proper cleanup for React useEffect usage
```

**Refinement 2**:

```
Almost there. The cleanup function should also handle the case where the component unmounts mid-debounce. Add a cancel method.
```
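After both refinements, the resulting utility might look something like this sketch. The generic signature and `cancel` method are one reasonable shape, not the only one, and the leading/trailing options from refinement 1 are omitted to keep the example short.

```typescript
// Debounce generic over the wrapped function's arguments, with the
// cancel method added in refinement 2 for cleanup on unmount.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number
): ((...args: Args) => void) & { cancel: () => void } {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const debounced = Object.assign(
    (...args: Args) => {
      clearTimeout(timer); // restart the wait on every call
      timer = setTimeout(() => fn(...args), waitMs);
    },
    {
      // cancel lets a React effect clean up mid-debounce
      cancel: () => clearTimeout(timer),
    }
  );

  return debounced;
}
```

In a React component, the effect's cleanup would simply call `debounced.cancel()` so a pending invocation never fires after unmount.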

## Mistake 11: Wrong Level of Detail

### The Problem

Either too little detail (AI guesses) or too much detail (AI ignores parts).

**Too little**:

```
Make a button.
```

**Too much** (200 lines of requirements for a simple button):

```
Create a button that when rendered initializes with a DOM element that has the following attributes set via the setAttribute method of the Element prototype...
```

### The Fix

Match detail level to task complexity.

**Just right**:

```
Create a React button component with:
- Variants: primary, secondary, ghost
- Sizes: sm, md, lg
- Loading state with spinner
- Disabled state
- Uses Tailwind CSS

Follow our existing component patterns (functional, forwardRef).
```

## Quick Reference: Do's and Don'ts

| Don't | Do |
|-------|-----|
| "Fix this" | "Fix the null reference error on line 23" |
| "Make it better" | "Improve readability by extracting functions" |
| "Write tests" | "Write Jest unit tests covering happy path and error cases" |
| "Add features" | "Add pagination with 20 items per page, next/prev buttons" |
| Dump entire codebase | Include only relevant files and sections |
| Expect perfection | Iterate and refine |

## Debugging Poor Results

When you get poor output, check:

1. **Clarity** - Is your request unambiguous?
2. **Context** - Does AI have enough information?
3. **Format** - Did you specify expected output?
4. **Examples** - Would an example help?
5. **Scope** - Are you asking for too much at once?

## Practice Exercise

Identify the mistakes in these prompts and rewrite them:

1. `"make it work"`
2. `"add error handling like we always do"`
3. `"create the entire authentication system with all features"`
4. `"fix the bug in the code"`

For each, explain what's wrong and provide a corrected version.

## Summary

- Be specific, not vague
- Provide necessary context
- Avoid conflicting requirements
- Break complex tasks into steps
- Specify output format
- Include examples when style matters
- Iterate based on results

## Next Steps

You've now learned the core principles, techniques, and common pitfalls of prompt engineering. In the next module, we'll apply these skills to real-world development workflows.