# Shepherding AI reports
Chrome Security is receiving an increasing number of vulnerability reports that are partly or entirely AI-generated. These reports can be tough to shepherd: they are often nonsense, and even when they aren't, the vital information about the bug is buried in a pile of AI-generated words.
Since AI-assisted reports are often speculative, you should also consult the speculative bug triage guide.
There are a few tells for spotting AI reports. Some of these can be false positives, so seeing one of these tells doesn't necessarily provide ironclad proof that a report is AI-generated, but it should make you very suspicious.
Just because a reporter used AI to prepare a report does not automatically mean the report is invalid. However, to avoid sinking a lot of time into reports which have a high probability of being invalid, you should be extra aggressive when triaging them, and you can generally treat them as lower priority than reports which look human-written and high-quality. In particular, when triaging an AI report:
* Don't bother doing detailed analysis of any AI report that doesn't include either a simple PoC which looks like it could work or a stack trace which looks valid. In particular, be very skeptical of AI reports claiming overflows, use-after-frees, and so on that contain only prose explanations of how to reach those conditions - AIs will invent, and then plausibly lie about, execution traces that lead to vulnerabilities but that aren't actually possible in practice. Never take at face value a claim from an AI report that a vulnerability is reachable unless the report contains a PoC or an ASAN stack trace. Feel free to WontFix such reports out of hand and spend your time on more valuable things.
* If you do conclude that a bug is both AI-written and worth WontFixing, please reference the FAQ entry on AI bugs in your WontFix message, to encourage reporters to file better bugs.