# AI Policy
At Quarkus, we welcome tools that help developers become more productive — including AI tools such as Large Language Models (LLMs) and agents like Claude Code, Codex, ChatGPT, GitHub Copilot, and others.
However, recent patterns of use have led to increased moderation burden, low-value contributions, and reduced community signal. To ensure a healthy and productive community, the following expectations apply to all contributions (issues, pull requests, comments, discussions, and other project interactions).
As a general rule, a PR and its metadata should look much like what the developer would have produced without AI assistance. For example, when writing PR descriptions, developers tend to state clearly what the PR does, without padding the description with irrelevant details, superfluous formatting, or emojis. Likewise, developers usually introduce focused tests that follow the project's existing test philosophy, rather than dumping in new tests for everything that could conceivably be exercised by the changed code.
If you're unsure whether your use of agents/LLMs is acceptable — ask! We're happy to help contributors learn how to use AI tools effectively without creating noise.
This isn't about banning AI; it's about keeping Quarkus collaborative, human-driven, and focused on quality.