Anthropic has launched "Code Review," an automated code review system built for Claude Code
Anthropic has launched "Code Review," an automated code review system built for Claude Code. When a PR opens, the feature dispatches multiple agents to search for bugs in parallel, verifies the findings to reduce false positives, and ranks them by severity. Results are delivered as a single high-signal summary comment plus inline flags.
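The fan-out/verify/rank pipeline described above can be sketched roughly as follows. This is a minimal illustration, not Anthropic's implementation: the agent and verification steps here are toy heuristics standing in for LLM calls, and all function names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Lower number = more severe; used to rank verified findings.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def agent_search(diff_chunk):
    """Placeholder for one agent scanning a chunk of the PR diff."""
    findings = []
    if "password" in diff_chunk:  # toy heuristic in place of a real agent
        findings.append({"issue": "possible credential leak", "severity": "critical"})
    if "TODO" in diff_chunk:
        findings.append({"issue": "unfinished code path", "severity": "low"})
    return findings

def verify(finding):
    """Placeholder verification pass meant to filter false positives."""
    return finding["severity"] in SEVERITY_ORDER

def review_pr(diff_chunks):
    # Fan out one agent per diff chunk in parallel.
    with ThreadPoolExecutor() as pool:
        per_chunk = pool.map(agent_search, diff_chunks)
    # Keep only verified findings, then rank by severity.
    findings = [f for chunk in per_chunk for f in chunk if verify(f)]
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])

findings = review_pr(["password = 'hunter2'", "# TODO: handle errors"])
```

The design point is the ordering of stages: parallel search maximizes coverage, a separate verification pass trades some latency for precision, and severity ranking keeps the final summary comment high-signal.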
Internal testing results

After months of internal testing at Anthropic, the results are significant. The share of PRs with substantive review comments rose from 16% to 54%, and engineers marked fewer than 1% of findings as incorrect. On large PRs of 1,000+ lines, 84% surfaced findings, averaging 7.5 issues each. The system even caught a single-line change that would have broken critical authentication functionality.
Pricing and plans

Code Review is optimized for depth of analysis rather than speed, which makes it relatively expensive. Reviews are billed on token usage, typically averaging $15 to $25, and scale with PR complexity. The feature is available as a research preview for Team and Enterprise plans; administrators can set monthly spending caps, control which repositories can use it, and track analytics. It runs automatically, with no developer configuration required.
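To see how token-based billing lands in the stated $15–25 range, here is a hedged back-of-the-envelope calculation. The per-million-token prices below are illustrative assumptions, not Anthropic's published rates for this feature; only the average cost per review comes from the announcement.

```python
def review_cost(input_tokens, output_tokens,
                price_in_per_m=3.00, price_out_per_m=15.00):
    """Estimated USD cost given token counts and assumed per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

# A large, complex PR reviewed by several parallel agents could plausibly
# consume millions of input tokens and hundreds of thousands of output tokens:
cost = review_cost(input_tokens=4_000_000, output_tokens=500_000)
# 4.0 * 3.00 + 0.5 * 15.00 = 19.50, i.e. within the quoted $15-25 average
```

Because cost is a linear function of token volume, it scales directly with PR size and with how many agents are dispatched, which matches the claim that reviews "scale based on PR complexity."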
Introducing Code Review, a new feature for Claude Code.
— Claude (@claudeai) March 9, 2026
When a PR opens, Claude dispatches a team of agents to hunt for bugs. pic.twitter.com/AL2J4efxPw
Agents search for bugs in parallel, verify each bug to reduce false positives, and rank bugs by severity.
You get one high-signal summary comment plus inline flags.
We've been running this on most PRs at Anthropic. Results after months of testing:
PRs w/ substantive review comments went from 16% → 54%
<1% of review findings are marked incorrect by engineers
On large PRs (1,000+ lines), 84% surface findings, avg 7.5 issues each
Code Review optimizes for depth and may be more expensive than other solutions, like our open source GitHub Action.
Reviews generally average $15–25, billed on token usage, and they scale based on PR complexity.
Code Review is available now as a research preview in beta for Team and Enterprise.
Read the blog for more: https://t.co/UjqqaSeAre
