.clinerules/general.md
This file is the secret sauce for working effectively in this codebase. It captures tribal knowledge—the nuanced, non-obvious patterns that make the difference between a quick fix and hours of back-and-forth and human intervention.
When to add to this file:
Proactively suggest additions when any of the above happen—don't wait to be asked.
What NOT to add: Stuff you can figure out from reading a few files, obvious patterns, or standard practices. This file should be high-signal, not comprehensive.
Check `package.json` for available scripts before trying to verify builds (e.g., `npm run compile`, not `npm run build`).

The extension and webview communicate via a gRPC-like protocol over VS Code message passing.
- Proto files live in `proto/` (e.g., `proto/cline/task.proto`, `proto/cline/ui.proto`)
- Shared request/response types live in `proto/cline/common.proto` (`StringRequest`, `Empty`, `Int64Request`)
- Naming conventions: services `PascalCaseService`, RPCs camelCase, messages PascalCase
- Streaming RPCs use the `stream` keyword (see `subscribeToAuthCallback` in `account.proto`)
- Run `npm run protos` after any proto changes—it generates types in:
  - `src/shared/proto/` - Shared type definitions
  - `src/generated/grpc-js/` - Service implementations
  - `src/generated/nice-grpc/` - Promise-based clients
  - `src/generated/hosts/` - Generated handlers

Adding new enum values (like a new `ClineSay` type) requires updating conversion mappings in `src/shared/proto-conversions/cline-message.ts`.
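Keeping both conversion directions in sync is the easy thing to miss. A hypothetical sketch of the mapping pattern (enum names mirror the document's examples, but this is not the real `cline-message.ts` code):

```typescript
// Illustrative sketch of say-type conversion mappings — NOT the real
// cline-message.ts implementation.
enum ProtoClineSay {
  TEXT = 0,
  GENERATE_EXPLANATION = 29, // newly added enum value
}

type AppClineSay = "text" | "generate_explanation"

// Both directions must gain a case, or the new value is lost in transit.
function sayToProto(say: AppClineSay): ProtoClineSay {
  switch (say) {
    case "text":
      return ProtoClineSay.TEXT
    case "generate_explanation":
      return ProtoClineSay.GENERATE_EXPLANATION
    default:
      throw new Error(`unmapped say type: ${say}`)
  }
}

function sayFromProto(say: ProtoClineSay): AppClineSay {
  switch (say) {
    case ProtoClineSay.TEXT:
      return "text"
    case ProtoClineSay.GENERATE_EXPLANATION:
      return "generate_explanation"
    default:
      throw new Error(`unmapped ClineSay value: ${say}`)
  }
}
```

The explicit `default: throw` makes a forgotten mapping loud instead of silent.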
Adding new RPC methods requires:
- A handler in `src/core/controller/<domain>/`
- A webview call through the generated client, e.g. `UiServiceClient.scrollToSettings(StringRequest.create({ value: "browser" }))`

Example—the explain-changes feature touched:
- `proto/cline/task.proto` - Added `ExplainChangesRequest` message and `explainChanges` RPC
- `proto/cline/ui.proto` - Added `GENERATE_EXPLANATION = 29` to `ClineSay` enum
- `src/shared/ExtensionMessage.ts` - Added `ClineSayGenerateExplanation` type
- `src/shared/proto-conversions/cline-message.ts` - Added mapping for new say type
- `src/core/controller/task/explainChanges.ts` - Handler implementation
- `webview-ui/src/components/chat/ChatRow.tsx` - UI rendering

When adding a new provider (e.g., "openai-codex"), you must update the proto conversion layer in THREE places or the provider will silently reset to Anthropic:
- `proto/cline/models.proto` - Add to the `ApiProvider` enum (e.g., `OPENAI_CODEX = 40;`)
- `convertApiProviderToProto()` in `src/shared/proto-conversions/models/api-configuration-conversion.ts` - Add a case mapping the provider string to the proto enum
- `convertProtoToApiProvider()` in the same file - Add a case mapping the proto enum back to the string

Why this matters: Without these, the provider string hits the default case and returns ANTHROPIC. The webview, provider list, and handler all work fine, but the state silently resets when it round-trips through proto serialization. No error is thrown.
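A condensed sketch of the failure mode (illustrative code, not the real conversion file) — it is the `default` branch, not an error, that swallows the unmapped provider:

```typescript
// Illustrative sketch — NOT the real api-configuration-conversion.ts.
// Shows why a forgotten case silently resets the provider to Anthropic.
enum ProtoApiProvider {
  ANTHROPIC = 0,
  OPENAI_CODEX = 40,
}

function convertApiProviderToProto(provider: string): ProtoApiProvider {
  switch (provider) {
    case "anthropic":
      return ProtoApiProvider.ANTHROPIC
    // Forgotten: case "openai-codex" ...
    default:
      return ProtoApiProvider.ANTHROPIC // silent fallback, no error thrown
  }
}

function convertProtoToApiProvider(provider: ProtoApiProvider): string {
  switch (provider) {
    case ProtoApiProvider.OPENAI_CODEX:
      return "openai-codex"
    default:
      return "anthropic"
  }
}

// One missing case and the round-trip rewrites the user's choice:
const roundTripped = convertProtoToApiProvider(convertApiProviderToProto("openai-codex"))
// roundTripped is "anthropic" — the provider setting silently reset
```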
Other files to update when adding a provider:
- `src/shared/api.ts` - Add to the `ApiProvider` union type, define models
- `src/shared/providers/providers.json` - Add to the provider list for the dropdown
- `src/core/api/index.ts` - Register a handler in `createHandlerForProvider()`
- `webview-ui/src/components/settings/utils/providerUtils.ts` - Add cases in `getModelsForProvider()` and `normalizeApiConfiguration()`
- `webview-ui/src/utils/validate.ts` - Add a validation case
- `webview-ui/src/components/settings/ApiOptions.tsx` - Render the provider component

Providers using OpenAI's Responses API require native tool calling. XML tools don't work with the Responses API.
Symptoms of broken native tool calling:
- Tool calls misbehave (e.g., `ask_followup_question` asks the same question twice)

Root causes to check:
- Provider missing from `isNextGenModelProvider()` in `src/utils/model-utils.ts`. The native variant matchers (e.g., `native-gpt-5/config.ts`) call this function. If your provider isn't in the list, the matcher returns false and falls back to XML tools.
- Model missing `apiFormat: ApiFormat.OPENAI_RESPONSES` in its model info (`src/shared/api.ts`). This property signals that the model requires native tool calling. The task runner in `src/core/task/index.ts` checks this and forces `enableNativeToolCalls: true` regardless of user settings.
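The forcing behavior in the second root cause can be sketched like this — `ApiFormat.OPENAI_RESPONSES` and `enableNativeToolCalls` come from the source, but the surrounding shape is a hypothetical simplification of the task runner's check:

```typescript
// Hypothetical simplification — the real logic lives in src/core/task/index.ts.
enum ApiFormat {
  OPENAI_RESPONSES = "openai_responses", // illustrative value
}

interface ModelInfo {
  apiFormat?: ApiFormat
}

function resolveNativeToolCalls(model: ModelInfo, userEnabledNativeTools: boolean): boolean {
  // Responses API models cannot use XML tools, so the user setting is overridden.
  if (model.apiFormat === ApiFormat.OPENAI_RESPONSES) {
    return true // forced on, regardless of settings
  }
  return userEnabledNativeTools
}
```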
When adding a new Responses API provider:
- Add the provider to the `isNextGenModelProvider()` list in `src/utils/model-utils.ts`
- Set `apiFormat: ApiFormat.OPENAI_RESPONSES` on all models that use the Responses API

This is tricky—multiple prompt variants and configs. Always search for existing similar tools first and follow their pattern. Look at the full chain from prompt definition → variant configs → handler → UI before implementing.
- Add to the `ClineDefaultTool` enum in `src/shared/tools.ts`
- Add a prompt definition in `src/core/prompts/system-prompt/tools/` (create a file like `generate_explanation.ts`)
- Export a variants list per `ModelFamily` (generic, next-gen, xs, etc.), e.g. `export const my_tool_variants = [GENERIC, NATIVE_NEXT_GEN, XS]`. `ClineToolSet.getToolByNameWithFallback()` automatically falls back to GENERIC, so you only need to export `[GENERIC]` unless the tool needs model-specific behavior.
- `src/core/prompts/system-prompt/tools/init.ts` - Import and spread into `allToolVariants`
- `src/core/prompts/system-prompt/variants/*/config.ts` - Add your tool's enum to the `.tools()` list in: `generic/config.ts`, `next-gen/config.ts`, `gpt-5/config.ts`, `native-gpt-5/config.ts`, `native-gpt-5-1/config.ts`, `native-next-gen/config.ts`, `gemini-3/config.ts`, `glm/config.ts`, `hermes/config.ts`, `xs/config.ts`
- Update `src/core/task/tools/handlers/ToolExecutor.ts` if needed for execution flow
- Update `src/core/assistant-message/index.ts` if needed
- If the tool emits a new say type: add it to the `ClineSay` enum in proto, update `src/shared/ExtensionMessage.ts`, update `src/shared/proto-conversions/cline-message.ts`, update `webview-ui/src/components/chat/ChatRow.tsx`

Read these first: `src/core/prompts/system-prompt/README.md`, `tools/README.md`, `__tests__/README.md`
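The GENERIC fallback mentioned above can be modeled as follows — an illustrative sketch of the behavior, not the real `ClineToolSet` implementation:

```typescript
// Illustrative model of ClineToolSet.getToolByNameWithFallback() — the real
// API differs in detail.
type ModelFamily = "GENERIC" | "NATIVE_NEXT_GEN" | "XS"

// my_tool only exports a GENERIC variant, mirroring the advice above.
const toolVariants: Record<string, Partial<Record<ModelFamily, string>>> = {
  my_tool: { GENERIC: "generic prompt for my_tool" },
}

function getToolByNameWithFallback(name: string, family: ModelFamily): string | undefined {
  const variants = toolVariants[name]
  // A model-specific variant wins; otherwise fall back to GENERIC.
  return variants?.[family] ?? variants?.GENERIC
}
```

This is why exporting only `[GENERIC]` is enough for most tools: every other family resolves to it automatically.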
System prompt is modular: components (reusable sections) + variants (model-specific configs) + templates (with {{PLACEHOLDER}} resolution).
Key directories:
- `components/` - Shared sections: `rules.ts`, `capabilities.ts`, `editing_files.ts`, etc.
- `variants/` - Model-specific: `generic/`, `next-gen/`, `xs/`, `gpt-5/`, `gemini-3/`, `hermes/`, `glm/`, etc.
- `templates/` - Template engine and placeholder definitions

Variant tiers (ask user which to modify):
- `next-gen/`, `native-next-gen/`, `native-gpt-5/`, `native-gpt-5-1/`, `gemini-3/`, `gpt-5/`
- `generic/`
- `xs/`, `hermes/`, `glm/`

How overrides work: Variants can override components via `componentOverrides` in their `config.ts`, or provide a custom template in `template.ts` (e.g., `next-gen/template.ts` exports `rules_template`). If no override exists, the shared component from `components/` is used.
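The override resolution just described, as a minimal sketch — field names follow the description above, but the real variant config types differ:

```typescript
// Minimal sketch of component override resolution — illustrative only.
type ComponentId = "RULES" | "CAPABILITIES"

interface VariantConfig {
  componentOverrides?: Partial<Record<ComponentId, string>>
}

const sharedComponents: Record<ComponentId, string> = {
  RULES: "shared rules text",
  CAPABILITIES: "shared capabilities text",
}

// Override wins if present; otherwise the shared component is used.
function resolveComponent(variant: VariantConfig, id: ComponentId): string {
  return variant.componentOverrides?.[id] ?? sharedComponents[id]
}
```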
Example: Adding a rule to RULES section
- If the variant overrides rules: edit `rules_template` in `variants/*/template.ts` or `componentOverrides.RULES` in `config.ts`
- Otherwise: edit the shared `components/rules.ts`

After any changes, regenerate snapshots:
```shell
UPDATE_SNAPSHOTS=true npm run test:unit
```
Snapshots live in __tests__/__snapshots__/. Tests validate across model families and context variations (browser, MCP, focus chain).
Slash commands: three places need updates:
- `src/core/slash-commands/index.ts` - Command definitions
- `src/core/prompts/commands.ts` - System prompt integration
- `webview-ui/src/utils/slash-commands.ts` - Webview autocomplete

Adding a new key to global state requires updates in multiple places. Missing any step causes silent failures.
Required steps:
- `src/shared/storage/state-keys.ts` - Add to the `GlobalState` or `Settings` interface
- `src/core/storage/utils/state-helpers.ts`:
  - Add `const myKey = context.globalState.get<GlobalStateAndSettings["myKey"]>("myKey")` in `readGlobalStateFromDisk()`
  - Add `myKey: myKey ?? defaultValue,` to the returned state object
- Use `setGlobalState()`/`getGlobalStateKey()` after initialization

Common mistake: Adding only the return value without the `context.globalState.get()` call. This compiles but the value is always `undefined` on load.
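The common mistake above, sketched with a stand-in for VS Code's `context.globalState` (illustrative types, not the real helpers):

```typescript
// Stand-in for VS Code's Memento-style context.globalState.
interface GlobalStateLike {
  get<T>(key: string): T | undefined
}

function readGlobalStateFromDisk(globalState: GlobalStateLike) {
  // Correct pattern: read the key from disk, then apply the default.
  const myKey = globalState.get<boolean>("myKey")
  return {
    myKey: myKey ?? false,
    // Common mistake: adding only `someKey: someKey ?? default` here without
    // the matching globalState.get() line above — it compiles, but the value
    // is always the default on load.
  }
}
```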
Settings plumbing gotcha: if a key is user-toggleable from settings, wire both controller update paths:
- `src/core/controller/state/updateSettings.ts` for webview `updateSetting(...)` calls
- `src/core/controller/state/updateSettingsCli.ts` for CLI/ACP settings updates

Missing one path causes a toggle to appear to change in one surface while the backend state stays unchanged.

Webview toggle gotcha: settings changes must also round-trip back in state payloads:
- Add the key to `UpdateSettingsRequest` in `proto/cline/state.proto` (for webview update requests), then run `npm run protos`
- Include the key in `Controller.getStateToPostToWebview()` (`src/core/controller/index.ts`)
- Make sure `ExtensionState` and the webview defaults include the key (`src/shared/ExtensionMessage.ts`, `webview-ui/src/context/ExtensionStateContext.tsx`)

If this round-trip wiring is missing, the backend value can update but the toggle in the webview appears stuck or reverts.

StateManager uses an in-memory cache populated during `StateManager.initialize(context)` in `common.ts`. For most state, use `controller.stateManager.setGlobalState()`/`getGlobalStateKey()`.
Exception: State needed immediately at extension startup (before cache is ready)
When Window A sets state and immediately opens Window B, the new window's StateManager cache is populated from context.globalState during initialization. If you need to read state in Window B right at startup (e.g., in common.ts during initialize()), read directly from context.globalState.get() instead of StateManager's cache.
Example pattern (see `lastShownAnnouncementId` and `worktreeAutoOpenPath`):

```typescript
// Writing (normal pattern)
controller.stateManager.setGlobalState("myKey", value)

// Reading at startup in common.ts (bypass cache)
const value = context.globalState.get<string>("myKey")
```
This is only needed for cross-window state read during the brief startup window before StateManager cache is fully usable. Normal state access after initialization should use StateManager.
When a ChatRow displays a loading/in-progress state (spinner), you must handle what happens when the task is cancelled. This is non-obvious because cancellation doesn't update the message content—you have to infer it from context.
The pattern:
- A `status` field (e.g., `"generating"`, `"complete"`, `"error"`) is stored in `message.text` as JSON
- On cancellation, the status stays `"generating"` forever—no one updates it!
- Check `isLast` — if this message is no longer the last message, something else happened after it (interrupted)
- Check `lastModifiedMessage?.ask === "resume_task" || "resume_completed_task"` — the task was just cancelled and is waiting to resume

Example from generate_explanation:
```typescript
const wasCancelled =
  explanationInfo.status === "generating" &&
  (!isLast ||
    lastModifiedMessage?.ask === "resume_task" ||
    lastModifiedMessage?.ask === "resume_completed_task")

const isGenerating = explanationInfo.status === "generating" && !wasCancelled
```
Why both checks?
- `!isLast` catches: cancelled → resumed → did other stuff → this old message is stale
- `lastModifiedMessage?.ask === "resume_task"` catches: just cancelled, hasn't resumed yet, this message is still technically "last"

See also: `BrowserSessionRow.tsx` uses a similar pattern with `isLastApiReqInterrupted` and `isLastMessageResume`.
Backend side: When streaming is cancelled, clean up properly (close tabs, clear comments, etc.) by checking taskState.abort after the streaming function returns.
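A sketch of that abort check — `taskState.abort` is from the source; the stream and cleanup functions are illustrative stubs (and synchronous here for brevity, while the real flow is async):

```typescript
// Illustrative stubs — only the position of the abort check mirrors the doc.
interface TaskState {
  abort: boolean
}

let cleanedUp = false

function streamResponse(taskState: TaskState): void {
  // Stub: pretend the user cancelled mid-stream.
  taskState.abort = true
}

function cleanUpAfterCancel(): void {
  // e.g., close tabs, clear comments
  cleanedUp = true
}

function runStreaming(taskState: TaskState): void {
  streamResponse(taskState)
  // Check abort AFTER the streaming function returns, not only inside it.
  if (taskState.abort) {
    cleanUpAfterCancel()
  }
}
```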