docs/chat/how-to-use-it.mdx
Chat makes it easy to ask an LLM for help without leaving the IDE. You send it a task, including any relevant information, and it replies with the text or code most likely to complete the task. If it does not give you what you want, you can send follow-up messages to clarify and adjust its approach until the task is completed.
Chat is best used to understand and iterate on code or as a replacement for search engine queries.
To send a question, type it into the input box in the extension and press Enter. Ask a question and it replies with an answer; describe a problem and it proposes a solution; request some code and it generates it.
You can select a code section with your mouse, press cmd/ctrl + L (VS Code) or cmd/ctrl + J (JetBrains) to send it to the LLM, and then ask for an explanation or request a refactor.
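For example, you might highlight a function like the hypothetical one below and ask Chat to "refactor this to use a list comprehension" (the function and prompt here are illustrative, not part of the extension itself):

```python
# A function you might select and send to Chat for refactoring.
# A typical follow-up request: "rewrite this as a list comprehension".
def squares_of_evens(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

print(squares_of_evens([1, 2, 3, 4]))  # → [4, 16]
```

The LLM would typically reply with an equivalent one-line version, which you can then apply back to the file.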
If you want to include information from the codebase, documentation, IDE, or other tools as context, type @ to select it. You can learn more about this in Chat context selection.
When the LLM replies with edits to a file, you can click the “Apply” button. This will update the existing code in the editor to reflect the suggested changes.
Once you complete a task and want to start a new one, press cmd/ctrl + L (VS Code) or cmd/ctrl + J (JetBrains) to begin a new session, ensuring only relevant context for the next task is provided to the LLM.
If you have configured multiple models, you can switch between them using the dropdown or by pressing cmd/ctrl + '.