# AI-Powered Bug Reports

**Turn screen recordings into structured, LLM-ready bug reports.**

## How It Works

1. Record a video of the bug on your device
2. clip.qa's AI analyzes the video and generates a structured report
3. The report is formatted as markdown, ready for your AI coding tool
4. Paste into Claude, Cursor, or Copilot — the AI writes the fix

## What the AI Understands

- **Visual context** — What's on screen, what changed, what went wrong
- **Navigation steps** — How you got to the bug (taps, inputs, navigation)
- **Error states** — Crashes, error messages, unexpected behavior
- **Device state** — OS, app version, network status (when relevant)
- **Scope** — Is this a UI bug, logic bug, performance issue, or crash?

## Report Structure

Generated reports include:

- **Title** — Clear, concise bug title
- **Severity** — Critical, High, Medium, Low (AI-inferred from context)
- **Steps to reproduce** — Exact steps shown in the video
- **Expected vs. actual** — What should happen vs. what does
- **Device/app context** — OS version, app version, relevant state
- **Additional notes** — AI observations about patterns or related issues
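A generated report following this structure might look like the sketch below. The bug, field values, and wording are illustrative assumptions, not output from clip.qa:

```markdown
# Checkout button unresponsive after coupon entry

**Severity:** High

## Steps to reproduce
1. Open the cart with two items
2. Apply a coupon code
3. Tap "Checkout"

## Expected vs. actual
**Expected:** Checkout screen opens
**Actual:** Button does nothing; no error is shown

## Device/app context
iOS 17.4, app v2.3.1, Wi-Fi

## Additional notes
The tap registers visually (button highlight), suggesting a handler
or navigation issue rather than a layout problem.
```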

## Export Formats

Reports can be exported as:

- **Markdown** — Paste directly into Claude, Cursor, or any other LLM tool
- **Jira / Linear tickets** — Auto-formatted for the Jira and Linear APIs
- **Slack messages** — Share with your team
- **JSON** — Programmatic access for automation
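The JSON export is what makes automation possible. Here is a minimal sketch of turning a JSON report back into the markdown you would paste into an LLM; the field names (`title`, `severity`, `steps`, `expected`, `actual`) are assumptions for illustration, not the documented schema:

```python
import json

# Hypothetical clip.qa JSON export (field names and values are assumptions).
raw = """
{
  "title": "Checkout button unresponsive after coupon entry",
  "severity": "High",
  "steps": [
    "Open the cart with two items",
    "Apply a coupon code",
    "Tap Checkout"
  ],
  "expected": "Checkout screen opens",
  "actual": "Button does nothing; no error is shown"
}
"""

def report_to_markdown(report: dict) -> str:
    """Render a parsed report as LLM-ready markdown."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(report["steps"], 1))
    return (
        f"# {report['title']}\n\n"
        f"**Severity:** {report['severity']}\n\n"
        f"## Steps to reproduce\n{steps}\n\n"
        f"**Expected:** {report['expected']}\n"
        f"**Actual:** {report['actual']}\n"
    )

markdown = report_to_markdown(json.loads(raw))
print(markdown)
```

The same dictionary could just as easily be posted to a ticketing API or a Slack webhook, which is the point of the JSON format.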

## Accuracy

AI-generated reports achieve 90%+ accuracy on common bug patterns:

- Crashes and errors
- UI layout issues
- Navigation bugs
- Form validation problems
- Performance issues

For complex or multi-step bugs, the video context helps AI understand intent even when the exact steps are unclear.

## Privacy

Your videos are analyzed on-device or via encrypted channels. Videos are never stored or used for training unless you explicitly share them.
