
How to Describe a Bug to AI So It Gets Fixed

Knowing how to describe a bug to AI is the difference between a one-shot fix and a frustrating back-and-forth. AI coding tools like Cursor and Claude Code are remarkably good at fixing bugs — when they understand what is broken. The problem is that most developers describe bugs the same way they would to a human colleague: vaguely.

Why AI needs better bug descriptions

When you tell a colleague "the login button doesn't work," they can ask follow-up questions, look at your screen, check the logs, and eventually figure it out. An AI coding tool cannot do any of that. It gets one shot at understanding the problem based on what you give it.

According to a CodeRabbit study of 3 million+ PRs, AI-generated fixes that fail almost always trace back to insufficient context in the prompt — not to limitations in the model itself.

The pattern is consistent: vague input produces wrong fixes. Structured input produces correct fixes. The quality of your bug description directly determines whether the AI fixes the bug or introduces a new one.

The before and after of describing a bug to AI

Here is a real example showing how the same bug can be described two ways — and why the difference matters.

Bad: vague description

"The checkout page is broken. Users can't complete their purchase. Fix it."

This tells the AI almost nothing. Which part of checkout? What happens when they try? What should happen? The AI will guess — and probably guess wrong.

Good: structured description

"On the checkout page (/checkout), tapping 'Place Order' shows a spinner for 3 seconds then returns to the cart page without completing the order. Expected: order confirmation page. The Stripe payment intent is created successfully (confirmed in dashboard). The issue is in the order submission handler in src/features/checkout/submit-order.ts. Device: iPhone 15, iOS 18.2, Safari."

This gives the AI everything it needs: the specific behavior, the expected behavior, what works and what does not, the likely file, and the environment. First-try fixes become the norm.

Key insight: An AI coding tool is only as good as its prompt. A structured bug report is a prompt — treat it like one.

The five elements of an AI-ready bug report

Every bug description you send to an AI coding tool should include these five elements. Miss one and you risk a bad fix.

  • What happened — The exact behavior you observed. "Tapping Place Order shows a spinner for 3 seconds then redirects to cart." Not "checkout is broken."
  • What should happen — The expected behavior. "Should navigate to /order-confirmation with the order ID." The AI cannot fix what it does not know is wrong.
  • Steps to reproduce — The exact sequence. "1. Add item to cart. 2. Go to /checkout. 3. Enter test card. 4. Tap Place Order." Numbered steps, no ambiguity.
  • Environment context — Device, OS version, browser/app version, network conditions. Mobile bugs are often device-specific.
  • Code pointer — The file or function where the bug likely lives. "Check submit-order.ts lines 45-60." This is the single biggest time-saver for AI tools.
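Put together, the five elements form a reusable skeleton you can paste into any AI coding tool. The bracketed fields are placeholders to fill in:

```markdown
## Bug: [one-line summary]

**Observed:** [exact behavior, with numbers where possible]
**Expected:** [what should have happened instead]

**Steps:**
1. [first step]
2. [second step]
3. [step that triggers the bug]

**Context:**
- [what already works, e.g. an upstream call that succeeds]
- [likely file or function]

**Environment:** [device, OS version, browser/app version, network]
```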

How to write a bug report for Cursor

Writing a bug report for Cursor requires a slightly different format than writing one for a human. Cursor works best with context-rich markdown that includes file paths and expected behavior.

Markdown
## Bug: Order submission fails silently on checkout

**Observed:** Tapping "Place Order" on /checkout shows spinner
for 3 seconds, then redirects to /cart. No error shown.

**Expected:** Navigate to /order-confirmation with order ID.

**Steps:**
1. Add any item to cart
2. Navigate to /checkout
3. Enter card: 4242 4242 4242 4242
4. Tap "Place Order"

**Context:**
- Stripe payment intent created successfully (confirmed in dashboard)
- Issue likely in src/features/checkout/submit-order.ts
- Only fails on mobile Safari (works in Chrome desktop)

**Environment:** iPhone 15, iOS 18.2, Safari 18.2

How to write a bug report for Claude Code

A bug report for Claude Code follows the same structure, but you can be more conversational since Claude Code handles longer context well. Include the reasoning about what you have already tried.

Markdown
The order submission on /checkout fails on mobile Safari.

When I tap "Place Order", the spinner shows for ~3 seconds,
then it redirects back to /cart without completing. No error
is displayed to the user.

The Stripe payment intent IS being created — I confirmed in
the Stripe dashboard. So the issue is after payment creation,
likely in the order completion handler.

Relevant file: src/features/checkout/submit-order.ts

I think the issue is in the handleOrderComplete function
around line 52 — it might be a race condition between the
Stripe webhook and the client-side redirect.

Environment: iPhone 15, iOS 18.2, Safari. Works fine on
Chrome desktop.
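The race condition hypothesized in that report can be sketched in a few lines of TypeScript. Note that `finalizeOrder` and `submitOrder` below are illustrative stand-ins, not the app's actual code:

```typescript
// Illustrative sketch of the suspected race: redirecting before the
// order record exists would send the user back to /cart. All names
// here (finalizeOrder, submitOrder) are hypothetical stand-ins.

async function finalizeOrder(paymentIntentId: string): Promise<string> {
  // Stand-in for the server call that creates the order after payment.
  return `order_for_${paymentIntentId}`;
}

async function submitOrder(paymentIntentId: string): Promise<string> {
  // A buggy version would navigate immediately, before the order exists:
  //   return "/cart"; // confirmation page has no order ID to render
  // The fixed version waits for the order before choosing where to go:
  const orderId = await finalizeOrder(paymentIntentId);
  return `/order-confirmation?order=${orderId}`;
}
```

A report that names the suspected function and line, as above, lets the AI confirm or reject this hypothesis instead of guessing at the cause.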

How clip.qa automates this entire process

Writing structured bug descriptions by hand works, but it is slow. Every report takes 5-10 minutes to draft properly. Multiply that by 10 bugs a day and you have lost nearly two hours just writing reports.

clip.qa automates the entire process. Record a screen on your phone, and the AI extracts all five elements automatically: what happened, what should happen, steps to reproduce, device context, and suggested code pointers. The output is already formatted for Cursor and Claude Code.

The AI bug report generator analyzes each frame of your recording, identifies user actions, detects anomalies, and produces a structured report in under 30 seconds. What used to take 10 minutes of manual writing takes one tap.

This is especially powerful for the vibe coding workflow — where bugs are produced faster than you can manually document them. The AI-to-AI pipeline (clip.qa generates the report, Cursor or Claude Code fixes the bug) closes the loop without a human writing a single line of the report.

Common mistakes when describing bugs to AI

Even experienced developers make these mistakes when feeding bug descriptions to Cursor or Claude Code. Each one reduces the chance of a first-try fix.

Describing symptoms instead of behavior

"The app feels slow" is a symptom. "The product list API call takes 4.2 seconds on 3G networks" is behavior. AI tools need measurable, specific observations.
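One cheap way to get that number is to wrap the slow call in a timer before filing the report. In the sketch below, `timeIt` is a hypothetical helper, and the endpoint in the usage comment is assumed:

```typescript
// Sketch: turn "feels slow" into a measurable observation for the report.
// timeIt is a hypothetical helper; any async call can be wrapped.

async function timeIt<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    // Logs e.g. "product list: 4200 ms", a number the AI can act on.
    console.log(`${label}: ${Date.now() - start} ms`);
  }
}

// Usage (hypothetical endpoint):
//   await timeIt("product list", () => fetch("/api/products"));
```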

Omitting what works

Telling the AI what does work is just as important as telling it what does not. "Stripe payment intent created successfully" narrows the problem space dramatically.

Skipping the environment

A bug that only happens on mobile Safari will never be fixed if you do not mention the browser. Always include device, OS, and browser — especially for mobile bugs.

Key takeaways

  • AI coding tools need structured context to fix bugs — vague descriptions produce wrong fixes
  • Every AI-ready bug report needs: observed behavior, expected behavior, steps to reproduce, environment, and a code pointer
  • Format reports as markdown for Cursor; use conversational detail for Claude Code
  • clip.qa automates the entire process — record a screen and get a structured, LLM-ready bug report in seconds
  • The AI-to-AI workflow (clip.qa report → AI coding tool fix) closes the bug-to-fix loop in minutes

Frequently asked questions

How do I describe a bug to AI so it gets fixed?

Include five elements: what happened (specific observed behavior), what should happen (expected behavior), numbered steps to reproduce, environment details (device, OS, browser), and a code pointer to the likely file. Structured markdown format works best for AI coding tools.

What is the best format for a Cursor bug report?

Use structured markdown with clear headings: Bug summary, Observed vs Expected behavior, Steps to reproduce, Context (including file paths), and Environment. Cursor works best with concise, context-rich descriptions that include specific file and line references.

Can AI fix bugs from a screen recording?

Yes. Tools like clip.qa analyze screen recordings to extract steps to reproduce, device context, and anomalies, then generate structured bug reports formatted for AI coding tools like Cursor and Claude Code.

Why does my AI coding tool give wrong fixes?

Almost always because the bug description lacks context. Vague descriptions like 'it is broken' force the AI to guess. Providing observed behavior, expected behavior, environment, and code pointers dramatically improves fix accuracy.

Try clip.qa — it does all of this automatically.

Record a screen. AI writes the report. Paste it into Claude or Cursor. Free to start.
