AI writes more code. Critique makes it safe to merge.
Critique is a GitHub-native AI code review control plane. Try Critique Chat in the browser right now — no install needed. Wire in the GitHub App when you want automated PR review.
Chat live · GitHub App beta · JS/TS-first review · Remedy on roadmap
Repo-grounded chat in the browser — pick a model, ask questions, connect GitHub when you want full workspace context.
Reads the diff, related files, tests, and ownership map before it comments.
Spawns security, billing, and performance lanes only when the PR actually touches them.
Every agent writes to the same investigation layer so the final review compounds.
Remedy and BYOA stay on the roadmap. The current beta ships the review gate, evidence pack, and GitHub output first.
Ask anything
about the code.
Critique Chat is a multi-model, repo-aware chat interface that lives inside the product. No install, no waitlist — open it now and start asking questions about any public repo, or sign in to connect your own.
- 01 Switch models per question — no config changes needed.
- 02 Connect a GitHub repo and get file-grounded answers, not guesses.
- 03 Sign-in is optional — start asking from the browser right now.
- 04 No rate limits on the free tier to start. Upgrade when you want more.
Code generation accelerated. Review quality did not.
AI can write features, boilerplate, tests, and migrations faster than ever. But merging faster code safely requires a deeper review layer — one that understands architecture, tests, security boundaries, and downstream impact.
Not a single model pass.
A coordinated review system.
Scout maps context, specialists investigate in parallel, and Lead Reviewer makes the final merge decision.
Scout
Maps files, dependencies, tests, call sites, impact zones.
Shared Board
Creates a live evidence and task layer for all agents.
Specialists in Parallel
Security, tests, architecture, performance, docs.
Lead Reviewer
Synthesises findings, removes noise, makes final verdict.
Remedy / BYOA
Turns the review into execution.
Agents shouldn't review in isolation.
Scout turns the PR into a shared investigation space. Tasks, evidence, context, and follow-up questions are posted to a common board so specialists can coordinate, not duplicate work.
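The shared board can be pictured as an append-only evidence log that every agent writes to and reads from. This is a minimal sketch under assumptions, not Critique's actual data model: the `Finding` shape, the agent names, and the file paths below are all illustrative.

```typescript
// Minimal sketch of a shared investigation board: every agent appends
// evidence to one log, and any agent can read what others have found.
// The Finding shape and agent names are illustrative, not Critique's API.
type Finding = {
  agent: string; // e.g. "scout", "security", "performance"
  file: string;  // file the evidence concerns
  note: string;  // what was observed
};

class SharedBoard {
  private findings: Finding[] = [];

  post(finding: Finding): void {
    this.findings.push(finding);
  }

  // Specialists query the board instead of re-deriving context themselves.
  byFile(file: string): Finding[] {
    return this.findings.filter((f) => f.file === file);
  }
}

const board = new SharedBoard();
board.post({ agent: "scout", file: "billing/charge.ts", note: "touched by diff; no test references found" });
board.post({ agent: "security", file: "billing/charge.ts", note: "mutation reachable without tenant check" });

console.log(board.byFile("billing/charge.ts").length); // 2
```

The point of the shared log is deduplication: a specialist that sees Scout's note on `billing/charge.ts` investigates the gap instead of re-mapping the file from scratch.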
One review system.
Multiple engineering surfaces.
From security and tests to architecture, performance, and autonomous fix — each surface is a clear module you can rely on.
Critique Chat
Try multi-model, repo-aware chat in the product — no install required to start. Connect GitHub when you want live code context and usage limits that match your plan.
Open Chat
Security Review
Catches auth bypasses, permission gaps, secret exposure, unsafe data access, and boundary regressions.
Test Coverage Review
Finds missing tests, weakened assertions, untested pricing or billing paths, and regression risk.
Architecture Review
Flags layering violations, hidden dependency drift, incorrect abstractions, and risky structural changes.
Performance Review
Detects N+1 queries, repeated fetch patterns, wasteful loops, and scalability concerns.
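For concreteness, the classic N+1 shape this lane flags is one query for a list followed by one query per row, where a single batched fetch would do. The `db` object below is a stand-in that only counts queries, not a real database client.

```typescript
// Illustrative N+1 pattern: the db object is a query-counting stand-in,
// not a real database client.
type User = { id: number; orgId: number };

const db = {
  users: [{ id: 1, orgId: 10 }, { id: 2, orgId: 10 }, { id: 3, orgId: 11 }] as User[],
  queries: 0,
  usersAll(): User[] { this.queries++; return this.users; },
  orgById(id: number) { this.queries++; return { id }; },
  orgsByIds(ids: number[]) { this.queries++; return ids.map((id) => ({ id })); },
};

// N+1: one query for the list, then one query per row.
function loadOrgsNPlusOne(): number {
  db.queries = 0;
  for (const u of db.usersAll()) db.orgById(u.orgId);
  return db.queries;
}

// Batched: two queries total, regardless of how many users there are.
function loadOrgsBatched(): number {
  db.queries = 0;
  const users = db.usersAll();
  db.orgsByIds([...new Set(users.map((u) => u.orgId))]);
  return db.queries;
}

console.log(loadOrgsNPlusOne()); // 4 queries for 3 users
console.log(loadOrgsBatched());  // 2 queries
```

The N+1 version scales linearly with row count; the batched version stays at two queries no matter how many users exist, which is why the pattern matters in hot paths.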
Remedy
Turns findings into code changes, runs verification, and pushes fixes automatically.
Bring Your Own Agent
Send Critique's structured fix blueprint to Codex, Claude Code, Copilot, or any external agent workflow.
A simple animated line
from repo to review.
Scout takes the repo
It reads the diff, nearby files, and tests first, then decides what actually deserves deeper review.
A review UI that actually shows
how the system thinks, searches, and decides.
This example walks through the full flow in real time: Scout reads the repo and creates specialists only where the diff demands it, each specialist runs under a named model with its tool usage exposed, and the lead model waits for evidence before writing a review technical enough to be acted on instead of dismissed as generic AI commentary.
Reads the diff first, then builds the context envelope needed for specialist review.
Final synthesis blocks merge because the tenant boundary regression is externally reachable, the Stripe mutation can apply twice on retry, and the context graph builder now serializes 54 reads in the hot path.
Most tools stop at comments. Critique closes the loop.
When Critique finds a missing test, architecture violation, or logic flaw, Remedy can patch the code, verify the result, and push a working fix directly to the branch.
- 1 Boot isolated sandbox
- 2 Pull repo + findings
- 3 Write patch
- 4 Run tests, lint, build
- 5 Push verified fix
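The five steps above form a gated pipeline: each stage must succeed before the next runs, and a failed verification means nothing is pushed. A hedged sketch of that control flow, with stage names and pass/fail results that are illustrative rather than Remedy's internals:

```typescript
// Sketch of a gated fix pipeline: stop at the first failing stage so a
// failed verification can never be followed by a push. Illustrative only.
type Stage = { name: string; run: () => boolean };

function runRemedy(stages: Stage[]): string[] {
  const completed: string[] = [];
  for (const stage of stages) {
    if (!stage.run()) break; // stop the pipeline on the first failure
    completed.push(stage.name);
  }
  return completed;
}

const stages: Stage[] = [
  { name: "boot sandbox",        run: () => true },
  { name: "pull repo + findings", run: () => true },
  { name: "write patch",          run: () => true },
  { name: "tests, lint, build",   run: () => false }, // a failing check here...
  { name: "push verified fix",    run: () => true },  // ...means this never runs
];

console.log(runRemedy(stages)); // ["boot sandbox", "pull repo + findings", "write patch"]
```

The design choice worth noting is that "push" is downstream of verification, so the branch only ever receives patches that built and passed tests in the sandbox.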
Or hand the blueprint to your own coding agent.
Not all AI review is the same.
The difference is not whether a model can comment. It is whether the system understands the repo, coordinates evidence, and gives engineers something they can trust enough to merge or act on.
Typical AI PR reviewer
- — Reads the diff only
- — Single model output
- — Limited architectural context
- — Can comment, but not execute
- — Weak cost control
- — Little policy flexibility
Critique
- ✓ Repository-aware scouting
- ✓ Parallel specialist agents
- ✓ Shared evidence coordination
- ✓ Final lead reasoning layer
- ✓ Autonomous fix or BYOA execution
- ✓ Flexible model routing and credit control
Strict where it matters. Flexible where it doesn't.
- · Require deeper review on auth or billing code
- · Escalate security agents on protected directories
- · Tune strictness by repo or branch
- · Choose lead model and specialist stack (Standard & Pro: same catalog)
- · Route routine PRs to lighter models; save frontier models for when it matters
- · Ultra: GPT-5.2 Pro, GPT-5.4 Pro, Claude Opus 4.6, and any lead as a sub-agent
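Policies like these can be imagined as a declarative mapping from paths to strictness and model choice, with a first-match-wins rule and a catch-all fallback. The shape below is hypothetical, not Critique's real schema, and the path prefixes and model labels are made up for illustration.

```typescript
// Hypothetical policy shape, illustrative only, not Critique's real schema.
type Policy = {
  paths: string[]; // path prefixes this rule covers
  strictness: "light" | "standard" | "deep";
  leadModel: string;
  escalateSecurity?: boolean;
};

const policies: Policy[] = [
  { paths: ["src/auth/", "src/billing/"], strictness: "deep", leadModel: "frontier", escalateSecurity: true },
  { paths: ["docs/"], strictness: "light", leadModel: "fast" },
  { paths: [""], strictness: "standard", leadModel: "standard" }, // catch-all fallback
];

// First matching rule wins; the empty-prefix fallback matches everything.
function policyFor(file: string): Policy {
  return policies.find((p) => p.paths.some((prefix) => file.startsWith(prefix)))!;
}

console.log(policyFor("src/billing/invoice.ts").strictness); // "deep"
console.log(policyFor("README.md").strictness);              // "standard"
```

Ordering matters here: protected directories sit above the fallback, so a billing file can never silently fall through to the lighter default review.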
Credits follow the work — not a single flat rate.
Specialist sub-agents handle narrow inspection tasks; the lead model synthesises the verdict. Standard and Pro ship the same selectable catalog — Ultra adds GPT-5.2 Pro, GPT-5.4 Pro, and Claude Opus 4.6 — so you scale with credits instead of surprise routing.
Built for teams shipping in the age of generated code.
AI-heavy product teams
Review the flood of generated code with more than a single diff pass.
Engineering leads
Catch structural regressions, missing tests, and security drift before merge.
Startups moving fast
Add deep review without building internal agent infrastructure.
Teams with existing coding agents
Keep Codex, Claude Code, or Copilot for execution and let Critique own review quality.
Standard and Pro share one catalog; Ultra adds GPT-5.2 Pro, GPT-5.4 Pro, and Claude Opus 4.6 — or let Critique route for you.
Every component you trust, already integrated.
Simple, transparent
pricing.
Standard
- — 500 credits / month
- — Full lead & specialist catalog (same as Pro)
- — GitHub check runs
- — Dashboard access
Pro
- — 2,000 credits / month
- — Same model catalog as Standard
- — Fix proposal agent
- — 7-day free trial
Ultra
- — 10,000 credits / month
- — GPT-5.2 Pro, GPT-5.4 Pro, Claude Opus 4.6 (Ultra-only)
- — Same lead ↔ sub flexibility as Standard / Pro, plus frontier models
- — Org-wide tooling
- — Priority support
Start reviewing before code ships.
Critique Chat is in the product today. GitHub installs, review runs, and policy controls ship alongside it; deeper repair automation stays on the roadmap.