March 11, 2026
12 min read
Claude Code
50 Claude Code Prompts That Actually Work
(With Real Examples)
Most Claude Code prompts are vague and get vague results. Here are 50 specific, copy-paste-ready prompts, covering debugging, refactoring, testing, architecture, and code review, that developers on our team actually use every day.
Why Most Claude Code Prompts Fail
There's a reason developers get inconsistent results from Claude Code. It's not the model. It's the prompt structure.
When you write "fix this function" or "make this code better", you're asking Claude to guess your intent, your constraints, your codebase conventions, and your definition of "better." It can't. So it produces something generic.
The prompts below work because they provide four things:
- Context: What the code is supposed to do
- Constraint: What you can't or don't want to change
- Goal: What a good output looks like
- Format: How you want the answer structured
You don't need all four every time, but the more you include, the tighter the output.
Pro tip
Add a project-level CLAUDE.md file in your repo root with your stack, code style, and naming conventions. Claude Code reads it on every session and you get pre-baked context without re-typing it every time.
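For illustration, a minimal CLAUDE.md might look like the sketch below. Every detail here (the project name, stack, conventions, and paths) is a placeholder; substitute your own:

```markdown
# Project: acme-api (example placeholder)

## Stack
- Node 20, TypeScript 5, Express, PostgreSQL

## Conventions
- Named exports only; no default exports.
- Errors: throw typed error classes, never raw strings.
- Tests live next to source as *.test.ts.

## Don't
- Don't add new dependencies without flagging them.
- Don't modify files under src/generated/.
```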
Debugging Prompts
Use these when something is broken and you need a diagnosis, not just a patch.
#01 Explain the bug before fixing it debug
Before you fix anything, explain in plain English exactly why this code fails. What is the root cause, not just the symptom? Then, once I confirm your diagnosis, propose a fix.
[paste your buggy code here]
#02 Trace a specific error message debug
I'm getting this error: [paste error message and stack trace]
Walk me through what each line of the stack trace means. Which line is the actual bug vs. a downstream symptom? Then fix only the root cause.
#03 Find race conditions debug
This code runs in a multi-threaded / async context. Identify every possible race condition, explain when each one could occur, and rewrite the affected sections to be thread-safe. Don't change any behavior or interfaces; only address concurrency issues.
[paste code]
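As a toy illustration of what prompt #03 hunts for: a lost-update race on a shared counter, shown here in Python with the lock-protected fix. The counter example is ours, not from the article:

```python
import threading

# The buggy pattern: `count += 1` is a read-modify-write, so two
# threads can read the same value and one increment gets lost.
count = 0
lock = threading.Lock()

def safe_increment(n):
    """Thread-safe version: the lock makes the read-modify-write atomic."""
    global count
    for _ in range(n):
        with lock:
            count += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # 40000 every run; without the lock, often less
```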
#04 Debug intermittent failures debug
This test passes 80% of the time and fails 20% of the time. The failure is non-deterministic. List every possible cause of a non-deterministic failure in this code (timing, state leakage, external dependencies, random values) and suggest a fix for each.
[paste test + code under test]
#05 Debug wrong output (not an error) debug
This code runs without errors but produces wrong output. Expected: [X]. Actual: [Y]. Trace the execution step by step from input to output and identify exactly where the value diverges from what I expect.
[paste code + input example]
#06 Find memory leaks debug
Audit this code for memory leaks. Look for: unreleased resources, event listeners that are never removed, closures that hold references longer than needed, and unbounded caches or queues. Flag each issue with a severity (high/medium/low) and a one-line fix.
[paste code]
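One of the patterns prompt #06 flags is an unbounded cache. A minimal before/after sketch (the `lookup` example is invented; `key.upper()` stands in for an expensive call):

```python
from functools import lru_cache

# Leaky pattern: this dict grows forever, one entry per distinct key.
_cache = {}

def lookup_leaky(key):
    if key not in _cache:
        _cache[key] = key.upper()  # stand-in for an expensive computation
    return _cache[key]

# Bounded fix: lru_cache evicts least-recently-used entries past maxsize.
@lru_cache(maxsize=1024)
def lookup(key):
    return key.upper()  # stand-in for an expensive computation

print(lookup("leak"))  # prints: LEAK
```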
#07 Reproduce a production bug debug
I'm seeing this bug in production but can't reproduce it locally. The production environment has [describe differences: OS, Node version, environment variables, traffic patterns, etc.]. Write a minimal reproduction case that isolates this bug. What conditions are required to trigger it?
[paste error + relevant code]
#08 Validate a fix without breaking changes debug
Here is my proposed fix for [describe bug]. Before I commit, tell me: (1) Does this actually fix the root cause or just mask the symptom? (2) What edge cases could this fix break? (3) What regression tests should I add?
[paste original code + proposed fix]
#09 Debug SQL query performance debug
This query is taking [X seconds] on a table with [Y rows]. The relevant indexes are [list them]. Explain why it's slow, identify the most expensive operation in the execution plan, and rewrite it to be faster without changing what it returns.
[paste SQL query]
#10 Diff-based debugging debug
This code worked in version A and breaks in version B. Here is the diff. Identify which specific change introduced the bug and explain why that change causes the failure.
[paste git diff or both versions]
Refactoring Prompts
Use these to improve existing code without changing behavior, preserving tests and interfaces.
#11 Refactor without breaking tests refactor
Refactor this code to improve readability and reduce duplication. Hard constraints: (1) all existing tests must still pass, (2) all public interfaces must remain unchanged, (3) do not add any new dependencies. Explain each change you made and why.
[paste code]
#12 Extract a clean service layer refactor
This function is doing too much. It handles [X], [Y], and [Z]. Extract each concern into a separate, single-responsibility function or class. The outer function should just coordinate between them. Keep the external API identical.
[paste function]
#13 Replace magic numbers and strings refactor
Find every magic number and hardcoded string in this codebase. For each one, suggest a named constant or enum value that makes the intent clear. Group related constants into a config object or constants file. Show me the before and after.
[paste code]
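To make the before/after concrete, here is the kind of rewrite prompt #13 produces. The retry-delay values are invented for illustration:

```python
# Before: intent is hidden in bare literals.
def retry_delay_before(attempt):
    return min(0.5 * (2 ** attempt), 30)

# After: each value gets a name that states its role.
BASE_DELAY_SECONDS = 0.5   # first retry waits half a second
BACKOFF_FACTOR = 2         # delay doubles on each attempt
MAX_DELAY_SECONDS = 30     # retries never wait longer than this

def retry_delay(attempt):
    """Exponential backoff, capped at MAX_DELAY_SECONDS."""
    return min(BASE_DELAY_SECONDS * (BACKOFF_FACTOR ** attempt), MAX_DELAY_SECONDS)

print(retry_delay(3))  # prints: 4.0
```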
#14 Convert callback hell to async/await refactor
Rewrite this callback-based code using async/await. Preserve all error handling; don't lose any catch paths. Add proper try/catch blocks. Keep the same logical flow but make it readable from top to bottom.
[paste code]
#15 Reduce cyclomatic complexity refactor
This function has too many nested conditionals. Refactor it to reduce cyclomatic complexity. Use early returns, guard clauses, and strategy patterns where appropriate. Target: no nesting deeper than 2 levels. Don't change the observable behavior.
[paste function]
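A small before/after sketch of the guard-clause refactor that prompt #15 asks for (the discount rules are made up):

```python
# Before: three levels of nesting hide a simple rule.
def discount_before(user):
    if user is not None:
        if user.get("active"):
            if user.get("orders", 0) > 10:
                return 0.15
            else:
                return 0.05
        else:
            return 0.0
    return 0.0

# After: guard clauses exit early; the happy path reads top to bottom.
def discount(user):
    if user is None or not user.get("active"):
        return 0.0
    if user.get("orders", 0) > 10:
        return 0.15
    return 0.05

print(discount({"active": True, "orders": 12}))  # prints: 0.15
```

Note the observable behavior is unchanged; only the shape of the control flow differs.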
#16 Make code testable refactor
This code is hard to unit test because it has tight coupling, side effects, and global state. Refactor it to be testable: inject dependencies, isolate side effects, and eliminate global state. Show me the refactored version and explain what changed.
[paste code]
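The refactor that prompt #16 asks for usually reduces to injecting the hidden dependency. A tiny example of our own, using the system clock as the dependency:

```python
import datetime

# Before: hard to unit test, because the current time is baked in.
def is_expired_before(deadline):
    return datetime.datetime.now() > deadline

# After: the time source is injected, so tests can pass a fixed clock.
def is_expired(deadline, now=None):
    """True if `deadline` has passed; `now` is injectable for tests."""
    current = now if now is not None else datetime.datetime.now()
    return current > deadline

fixed = datetime.datetime(2026, 1, 1)
print(is_expired(datetime.datetime(2025, 12, 31), now=fixed))  # prints: True
```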
#17 Enforce a style guide refactor
Rewrite this code to match our style guide: [describe your conventions: naming, file structure, error handling patterns, comment style]. Only change style; don't alter logic or behavior. Highlight every change with an inline comment explaining which rule it follows.
[paste code]
#18 Optimize a hot path refactor
This function is called [X times per second] in production and is a performance bottleneck. Optimize it for speed. Do not sacrifice readability more than necessary. List each optimization and its expected impact. Do not change the return type or public signature.
[paste function]
#19 Consolidate duplicate logic refactor
Here are [N] functions that do similar things. Identify the common pattern, extract it into a shared utility, and rewrite each function to use it. The result should have no logic duplication. Keep each function's existing interface.
[paste functions]
#20 Add TypeScript types to JS code refactor
Convert this JavaScript to TypeScript. Add strict types to all function signatures, variables, and return types. Use interfaces for complex objects. Do not use `any` unless there is genuinely no better option, and flag any `any` you do use with a comment explaining why.
[paste JS code]
Testing Prompts
Use these to generate tests that actually catch bugs, not just tests that pass.
#21 Write tests for unhappy paths only test
Write unit tests ONLY for the unhappy paths of this function: null inputs, invalid types, boundary values, empty arrays, negative numbers, network failures, timeout scenarios. Don't write tests for the happy path; I have those. Use [Jest/Vitest/pytest: specify yours].
[paste function]
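For a function like `parse_port` below (an invented example, not from the article), the unhappy-path coverage prompt #21 produces looks roughly like this, shown with plain assertions rather than any particular framework:

```python
def parse_port(value):
    """Parse a TCP port from a string; raises ValueError on bad input."""
    if value is None or not str(value).strip():
        raise ValueError("port is required")
    port = int(value)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Unhappy paths only: None, empty, whitespace, non-numeric, boundaries.
for bad in (None, "", "  ", "abc", "0", "65536", "-1"):
    try:
        parse_port(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass

print("all unhappy-path checks passed")
```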
#22 Generate property-based tests test
Write property-based tests for this function using [fast-check / Hypothesis / specify your library]. Identify the invariants that should always hold regardless of input. Generate tests that try to break each invariant.
[paste function]
#23 Test a function with external dependencies test
Write unit tests for this function. It depends on [database / HTTP client / file system: specify]. Mock all external dependencies. Test that: (1) it calls the dependency with the right arguments, (2) it handles success responses correctly, (3) it handles error responses correctly, (4) it handles timeouts.
[paste function]
#24 Find untested edge cases test
Here is the function and its current test suite. What edge cases are NOT covered by the existing tests? For each gap, write the test that covers it. Focus on the edge cases most likely to cause production bugs.
[paste function + existing tests]
#25 Write integration test scenarios test
Write integration test scenarios (not implementation) for this user flow: [describe the flow]. For each scenario, describe: the preconditions, the actions, the expected outcomes, and what should NOT happen. I'll implement the tests; just give me the scenarios as a numbered list.
[paste relevant code/API]
#26 Improve test names test
These test names are vague and don't describe what they're actually testing. Rename each test to follow the "should [do X] when [condition Y]" pattern. Don't change any test logic โ only the description strings. Also flag any tests whose names don't match what they actually test.
[paste test file]
#27 Add test for a specific bug fix test
I just fixed this bug: [describe the bug]. Write a regression test that would have caught this bug before the fix. The test should fail against the old code and pass against the new code. Include a comment explaining what scenario the test prevents.
[paste old + new code]
#28 Generate test data factories test
Create test data factories for these domain objects. Each factory should: (1) have sensible defaults for all required fields, (2) accept an override object for specific field values, (3) generate unique IDs automatically, (4) have builder methods for common test scenarios (e.g., createExpiredUser(), createAdminUser()).
[paste your type/interface definitions]
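A minimal Python version of the factory shape prompt #28 describes, satisfying all four requirements; the field names are illustrative:

```python
import itertools

_ids = itertools.count(1)

def make_user(**overrides):
    """User factory: sensible defaults, a unique id, per-field overrides."""
    user = {
        "id": next(_ids),      # (3) unique id, generated automatically
        "name": "Test User",   # (1) sensible defaults
        "role": "member",
        "active": True,
    }
    user.update(overrides)     # (2) override any field per test
    return user

def make_admin(**overrides):
    """(4) Builder for a common scenario, layered on the base factory."""
    return make_user(role="admin", **overrides)

u1, u2 = make_user(), make_admin(name="Ada")
print(u1["id"] != u2["id"], u2["role"])  # prints: True admin
```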
#29 Write snapshot test replacements test
These snapshot tests are brittle: they fail on every minor UI change and nobody reviews the diffs. Replace each snapshot test with explicit assertion-based tests that check the meaningful behavior: text content, presence of key elements, event handlers. Don't test implementation details.
[paste snapshot test file]
#30 Generate test coverage report interpretation test
Here is my test coverage report. Which uncovered lines represent the highest risk if not tested? Rank the top 10 uncovered code paths by risk (likelihood × impact of a bug there). For each, explain why it's risky and write the test that covers it.
[paste coverage output]
Want 200+ more prompts like these?
The Ask Patrick Prompt Library has 200+ prompts across 8 categories: debugging, architecture, documentation, PR reviews, migrations, and more. Organized by use case, copy-paste ready, with context notes on when to use each.
Get the Prompt Library ($29) →
Architecture Prompts
Use these for design decisions, greenfield work, and evaluating tradeoffs before you write code.
#31 Design a system from requirements arch
Design a system that does the following: [describe requirements]. My constraints are: [list constraints: team size, budget, existing stack, SLA requirements]. Give me: (1) a high-level component diagram in text, (2) the key design decisions and why, (3) the top 3 risks and how to mitigate them. Do not write any code yet.
#32 Choose between two approaches arch
I need to [describe goal]. I'm considering: Option A: [describe]. Option B: [describe]. My context: [team size, scale, timeline, existing stack]. Give me a structured comparison: performance, maintainability, cost, operational complexity, and time to build. Make a recommendation and defend it.
#33 Review architecture for scaling bottlenecks arch
Here is our current architecture. We're expecting [X users / Y requests per second / Z data volume] in 6 months. Where will this architecture break first? Rank the top 5 bottlenecks by severity. For each, describe the failure mode and the minimum viable fix.
[paste architecture description or diagram]
#34 Design a database schema arch
Design a database schema for [describe domain]. Requirements: [list them]. I'm using [PostgreSQL/MySQL/MongoDB: specify]. Provide: the table/collection definitions, indexes, foreign keys, and an explanation of every non-obvious design decision. Flag any places where you made a tradeoff.
#35 Plan a migration strategy arch
I need to migrate from [old system/approach] to [new system/approach]. We have [X users / Y records]. The system must stay live during migration โ no downtime. Design a migration strategy: phases, rollback plan, data validation steps, and the order of operations. Include what can go wrong at each phase.
#36 API design review arch
Review this API design before I implement it. Evaluate: naming consistency, resource hierarchy, HTTP verb usage, error response format, versioning strategy, and pagination. Flag anything that will be painful for API consumers in 12 months. Suggest specific improvements.
[paste API spec or route list]
#37 Evaluate a proposed architecture arch
A colleague is proposing this architecture for our new service: [describe proposal]. Play devil's advocate. What are the 5 hardest questions someone should ask before approving this? What edge cases does this design not handle? What would you change?
#38 Document architecture decisions arch
Write an Architecture Decision Record (ADR) for this decision: [describe decision]. Use the standard format: Title, Status, Context, Decision, Consequences (positive and negative). Be honest about the downsides; don't write a sales pitch.
#39 Design for failure arch
Here is a feature I'm building. List every way this feature can fail, not just code bugs: network failures, downstream API outages, database locks, user error, malformed input, concurrent writes, and infrastructure failures. For each, tell me: can we prevent it, detect it, or only recover from it?
[paste feature description or code]
#40 Evaluate a third-party dependency arch
I'm considering adding [library/service] as a dependency for [purpose]. Evaluate: maintenance health, security track record, bundle size impact, license compatibility with [license type], API stability, and migration cost if we need to replace it in 2 years. Give me a go/no-go recommendation.
Code Review Prompts
Use these to give better reviews, request better reviews, and prepare code for review.
#41 Give a senior-engineer code review review
Review this code as a senior engineer who cares about: correctness, security, performance, and maintainability, in that order. Don't nitpick style. Flag only issues that would cause a bug, a security vulnerability, or significant tech debt. Be direct. Rate each issue: blocking / important / minor.
[paste code]
#42 Security-focused review review
Review this code with a security mindset. Look for: injection vulnerabilities, authentication/authorization gaps, insecure data handling, unvalidated inputs, hardcoded secrets, insecure direct object references, and any place where user input touches a system boundary. Cite the relevant OWASP category for each finding.
[paste code]
#43 Prepare code for review review
I'm about to submit this code for review. Before I do, tell me: (1) What questions will the reviewer definitely ask? (2) What will they likely push back on? (3) What should I add to the PR description to pre-empt those questions? I want to make this a fast, clean review cycle.
[paste code + brief description of what it does]
#44 Write a PR description review
Write a pull request description for this change. Include: what problem this solves (1-2 sentences), what changed and why (not just what; the why matters), what the reviewer should focus on, how to test it, and any risks or known limitations. Use plain prose, not just a bullet list.
[paste diff or code summary]
#45 Explain code to a reviewer review
Write inline comments for the non-obvious parts of this code. A comment should explain WHY (the intent, the non-obvious assumption, or the gotcha), not WHAT the code does (the code shows that). Skip comments on obvious lines. Target: a mid-level engineer should understand every decision after reading the comments.
[paste code]
#46 Review a code review review
Here is a code review comment I received: "[paste review comment]". Help me: (1) understand if this feedback is valid, (2) formulate a professional response if I disagree, (3) write the code change if I agree. Keep the tone collaborative, not defensive.
#47 Spot what the test missed review
Here is a bug that made it to production. Here is the PR that introduced it. Why didn't the code review catch it? What review checklist item, test, or tool would have caught it? What should we add to our review process to prevent this class of bug in the future?
[paste bug description + original PR code]
#48 Review for backward compatibility review
This change is going into a public API / shared library / database schema. Review it specifically for backward compatibility. What existing behavior could this break? What client code that currently works might fail after this change? Grade the change: fully backward compatible / soft-breaking / hard-breaking.
[paste change]
#49 Generate a review checklist review
Generate a code review checklist for this specific type of change: [describe: e.g., database migration, API endpoint, authentication flow, payment processing]. Make it specific to what can go wrong in THIS type of code, not a generic checklist. 10-15 items, ordered by risk.
#50 Summarize a large diff review
Summarize this diff for a team member who needs to understand what changed without reading every line. Structure it as: (1) the problem being solved, (2) the approach taken, (3) the 3 most important changes to understand, (4) anything that could have side effects they should test. Max 200 words.
[paste diff]
Every strong Claude Code prompt has some version of this structure:
The CCRG Formula
Context: What does the code do, and what is it supposed to do?
Constraint: What can you NOT change? (API, tests, behavior)
Resolution: What does a good output look like?
Grade: Ask Claude to rate its own confidence and flag uncertainties.
Example: "This function handles payment retries (context). Don't change the retry interval logic (constraint). Refactor only for readability; all tests must pass (resolution). Flag anything you're unsure about (grade)."
The single highest-leverage addition? The grade step. Ask Claude to flag where it made assumptions. You'll catch edge cases it papered over, and you'll know exactly where to double-check.
What's in the Full Prompt Library
These 50 are a starting point. The Ask Patrick Prompt Library has 200+ prompts across 8 categories:
| Category | Prompts | What You'll Use Them For |
| --- | --- | --- |
| Debugging | 30 | Root cause analysis, race conditions, performance |
| Refactoring | 30 | Clean code, patterns, safe rewrites |
| Testing | 25 | Coverage gaps, test data, integration scenarios |
| Architecture | 25 | Design reviews, tradeoffs, scaling decisions |
| Code Review | 25 | Better reviews, PR prep, security audits |
| Documentation | 20 | Inline comments, READMEs, runbooks |
| Migrations | 25 | Database, API, framework upgrades |
| Onboarding | 20 | Explaining unfamiliar code, context building |
Every prompt includes: when to use it, what context to paste, and what to do if the output is too generic. One-time purchase, instant PDF download.
The Ask Patrick Prompt Library
200+ copy-paste-ready prompts. 8 categories. Includes context notes and usage tips.
Used by dev teams at companies training their engineers on Claude Code.
Get the Prompt Library ($29) →