Artificial intelligence company Anthropic has launched Claude Code Security, a new AI-powered feature designed to scan enterprise software codebases for vulnerabilities and recommend targeted patches. The capability is currently available as a limited research preview for Enterprise and Team customers.
According to the company, Claude Code Security analyzes codebases to detect security weaknesses and suggest fixes for human review. The goal is to help teams identify and remediate issues that traditional static analysis tools often overlook.
Anthropic said the feature responds to a growing cybersecurity challenge: as AI tools become more capable of identifying vulnerabilities, threat actors can use similar technologies to automate exploit discovery. Claude Code Security is intended to give defenders an advantage by leveraging AI to proactively strengthen security baselines.
Unlike conventional scanners that rely on predefined rules and known patterns, the system reportedly reasons through code like a human security researcher. It evaluates how components interact, traces data flows across applications, and flags complex vulnerabilities that may escape rule-based detection.
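To make the contrast concrete, here is a toy illustration (entirely hypothetical; the function names and the analysis itself are not Anthropic's) of the kind of bug that line-by-line pattern matching misses but data-flow reasoning catches: attacker-controlled input reaching a SQL query only after passing through an intermediate variable and a helper function.

```python
# Illustrative only: a toy cross-function taint scenario. A rule-based
# scanner matching single lines sees nothing suspicious in build_query,
# because the user input never appears on that line directly.

def build_query(table: str, filter_clause: str) -> str:
    # No direct reference to user input here, so a simple pattern rule
    # like "execute(request.args...)" never fires on this line.
    return f"SELECT * FROM {table} WHERE {filter_clause}"

def handle_request(user_input: str) -> str:
    # The tainted value flows through an intermediate variable and a
    # helper call before reaching the query -- an interprocedural path.
    clause = f"name = '{user_input}'"
    return build_query("users", clause)

def reaches_sink(query: str, taint_marker: str) -> bool:
    # A flow-aware analysis asks: can attacker-controlled data reach the
    # query string? We simulate that by checking for a propagated marker.
    return taint_marker in query

query = handle_request("x' OR '1'='1")       # classic SQL-injection payload
print(reaches_sink(query, "' OR '1'='1"))    # True: taint reached the sink
```

The point of the sketch is the shape of the reasoning, not the string check: tracing where data can travel across function boundaries is what lets an analyzer flag this, where a per-line rule stays silent.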
Each identified issue undergoes a multi-stage verification process to reduce false positives. Vulnerabilities are assigned severity ratings, helping security teams prioritize critical risks. Findings are presented in a dedicated dashboard where analysts can review both the flagged code and the suggested patches.
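The triage workflow the article describes, findings carrying severity ratings so critical risks surface first, can be sketched roughly as follows (the field names and severity scale here are assumptions for illustration, not Anthropic's actual schema):

```python
# Hypothetical sketch of severity-ranked triage: findings are sorted so
# the most critical issues appear at the top of a review queue.
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    rule: str       # short identifier for the vulnerability class
    severity: str   # one of the keys in SEVERITY_ORDER
    file: str       # where the flagged code lives

def prioritize(findings):
    """Order findings most-severe first for the review dashboard."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity])

findings = [
    Finding("weak-hash", "medium", "auth.py"),
    Finding("sql-injection", "critical", "db.py"),
    Finding("open-redirect", "high", "routes.py"),
]
print([f.rule for f in prioritize(findings)])
# ['sql-injection', 'open-redirect', 'weak-hash']
```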
Anthropic emphasized that the system operates under a human-in-the-loop (HITL) model. Every recommendation includes a confidence rating, and no changes are implemented automatically. Developers retain full control, reviewing and approving suggested fixes before any action is taken.
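The human-in-the-loop model described above can be pictured as a simple approval gate: every suggestion carries a confidence rating, and by default nothing is applied. The sketch below is purely illustrative; the types and fields are assumptions, not Anthropic's actual API.

```python
# Hypothetical HITL gate: patches are applied only after explicit human
# approval; the default, absent a decision, is to do nothing.
from dataclasses import dataclass

@dataclass
class Suggestion:
    finding: str       # identifier of the flagged vulnerability
    patch: str         # proposed fix, presented for human review
    confidence: float  # model-reported confidence, surfaced to the reviewer

def apply_approved(suggestions, approvals):
    """Return only the patches a reviewer explicitly approved."""
    return [s for s in suggestions if approvals.get(s.finding, False)]

suggestions = [
    Suggestion("sql-injection", "parameterize query in db.py", 0.92),
    Suggestion("weak-hash", "replace MD5 with SHA-256 in auth.py", 0.61),
]
approvals = {"sql-injection": True}  # reviewer signed off on one fix only
for s in apply_approved(suggestions, approvals):
    print(s.finding, "->", s.patch)
```

The design choice worth noting is the default: an unreviewed suggestion is treated exactly like a rejected one, so the system can never act on its own.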
With Claude Code Security, Anthropic is positioning AI not just as a development accelerator, but as a defensive tool in an era of increasingly automated cyber threats.