# How to Do QA for AI-Built Apps: A Practical Guide

## Why AI-built apps break differently

LLM-generated code is statistically likely to work for the happy path and fail at the edges: the model optimizes for code that looks correct and follows common patterns, not for code that has been exercised against hostile input. As a result, AI-built apps tend to share specific failure signatures:

- Missing or superficial input validation — empty strings, nulls, and oversized values slip through
- Error paths that were never wired up, so failures surface as blank screens or unhandled exceptions
- State that drifts out of sync when the user navigates in an unexpected order
- Boundary conditions (zero items, one item, the maximum) handled inconsistently

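To make the happy-path bias concrete, here is a minimal Python sketch. `average_rating` is a hypothetical function of the kind an LLM typically emits — correct for normal input, crashing on the empty case — alongside a hardened version:

```python
def average_rating(ratings):
    """Happy-path version an LLM might emit: crashes on an empty list."""
    return sum(ratings) / len(ratings)  # ZeroDivisionError when ratings == []

def average_rating_safe(ratings):
    """Hardened version: handles the no-data case explicitly."""
    if not ratings:
        return None  # explicit "no data" signal instead of a crash
    return sum(ratings) / len(ratings)

print(average_rating_safe([]))         # None
print(average_rating_safe([4, 5, 3]))  # 4.0
```

The point is not this specific function — it is that the failing input (`[]`) is exactly the kind of value the model's training data rarely exercises, so it is the first thing QA should try.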
## The "chaos walk" testing technique

For AI-built apps, the most effective manual testing technique is the chaos walk: navigate the app in unexpected orders, enter hostile or malformed inputs, and trigger flows the happy path never covers. Document everything with a screen recording. The goal is to find the edges the LLM didn't predict.

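A chaos walk benefits from a prepared corpus of hostile inputs to paste into every field. Below is a minimal sketch of such a corpus; `validate_username` is a hypothetical stand-in for whatever input handling the app under test has:

```python
# Illustrative chaos-walk input corpus — paste each into every text field.
CHAOS_INPUTS = [
    "",                        # empty
    " " * 1000,                # whitespace-only, oversized
    "Röbert",                  # non-ASCII
    "💥",                      # emoji
    "a" * 10_000,              # very long
    "'; DROP TABLE users;--",  # injection-shaped
    "\0",                      # control character
]

def validate_username(name: str) -> bool:
    """Hypothetical validator: 1-32 printable characters."""
    return 0 < len(name) <= 32 and name.isprintable()

for value in CHAOS_INPUTS:
    # Any surprising acceptance here is exactly the kind of
    # edge the LLM didn't predict.
    print(repr(value[:20]), validate_username(value))
```

During a real walk you feed the same corpus through the UI rather than a function, but keeping the list in one place makes the walk repeatable across sessions.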
## Prioritizing what to test

You can't test everything in a vibe-coded app. Prioritize by risk:

- Money paths first: checkout, billing, and anything that moves funds or deletes data irreversibly
- Auth and permissions: can a logged-out or wrong-role user reach protected screens?
- Data integrity: create/edit/delete flows, especially repeated or out-of-order submissions
- Cosmetics last: layout glitches rarely corrupt state

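The prioritization above can be made mechanical with a simple likelihood × impact score. The flow names and scores below are illustrative assumptions, not data from any real app:

```python
flows = [
    # (flow, likelihood of a bug 1-5, impact if it breaks 1-5)
    ("checkout payment", 4, 5),
    ("password reset",   3, 5),
    ("search filters",   5, 2),
    ("profile avatar",   4, 1),
]

# Test in descending risk order: likelihood x impact.
ranked = sorted(flows, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name:18} risk={likelihood * impact}")
```

Even a rough scoring pass like this keeps a time-boxed QA session anchored on the flows where a bug actually hurts.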
## Using clip.qa to capture AI-built app bugs

clip.qa is particularly well-suited for AI-built app QA because its output feeds directly back into the AI that built the app. When you find a bug, clip.qa's AI generates a prompt you can paste into Claude Code or Cursor — closing the loop without leaving the AI-assisted development workflow.

## Feedback loop: from bug to fix without leaving AI tools

The ideal workflow for vibe-coded app QA:

1. Chaos-walk the app with a screen recording running
2. When something breaks, capture the clip with the bug in frame
3. Let clip.qa generate a fix prompt from the recording
4. Paste the prompt into Claude Code or Cursor and apply the fix
5. Re-run the same walk to confirm the fix and catch regressions

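The prompt-generation step can be sketched as a simple template. The field names and wording below are illustrative assumptions, not clip.qa's actual output format:

```python
def bug_to_prompt(steps: list[str], expected: str, actual: str) -> str:
    """Turn a captured bug into a paste-ready prompt for an AI coding tool."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        "Fix the following bug.\n\n"
        "Steps to reproduce:\n"
        f"{numbered}\n\n"
        f"Expected: {expected}\n"
        f"Actual: {actual}\n\n"
        "Keep the fix minimal and add a regression test."
    )

print(bug_to_prompt(
    steps=["Open settings", "Clear the display-name field", "Press Save"],
    expected="Validation error shown",
    actual="App crashes with an unhandled TypeError",
))
```

Whatever generates the prompt, the essentials are the same: numbered reproduction steps, an expected/actual pair, and an instruction to keep the fix minimal — without that last line, AI tools tend to rewrite more than they should.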
## What to watch for in AI-generated fixes

When Claude or Cursor fixes a bug based on a clip.qa report, verify:

- The original reproduction steps now pass, not just a nearby variant
- Nothing adjacent regressed — AI fixes often patch the symptom by special-casing one input
- The fix handles the whole class of input, not only the value from the recording
- No unrelated files were rewritten along the way

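The "whole class of input" check can be sketched as a tiny verification script: re-run the exact repro input plus its neighbors, so a special-cased patch is caught. `parse_quantity` is a hypothetical function that had the bug:

```python
def parse_quantity(raw: str) -> int:
    """Post-fix version: strips whitespace and rejects non-positive values."""
    value = int(raw.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# The exact input from the bug recording...
assert parse_quantity(" 3 ") == 3
# ...plus the whole input class, to catch a fix that special-cased " 3 ".
assert parse_quantity("\t10\n") == 10
for bad in ("0", "-1"):
    try:
        parse_quantity(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass
print("fix verified across input class")
```

Turning these assertions into a committed regression test is the cheapest way to stop the next AI-generated change from reintroducing the same bug.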
