CopilotChatInput is the primary text input and control surface for chat interactions. It provides a multi-line textarea that auto-grows up to a configurable maximum number of rows (default 5); action buttons for sending messages, voice transcription, and file attachment; and an optional tools menu for declarative command surfaces.
The component operates in three modes: "input" (default text entry), "transcribe" (replaces the textarea with an audio recorder), and "processing" (indicates background processing is underway). When text spans multiple rows, the layout stacks the textarea above the control row.
import { CopilotChatInput } from "@copilotkit/react-core/v2";
import "@copilotkit/react-core/v2/styles.css";
<PropertyReference name="mode" type='"input" | "transcribe" | "processing"' default='"input"'
The current operating mode of the input component. - "input" -- Standard
text entry mode with textarea and send button. - "transcribe" -- Replaces
the textarea with the audio recorder component for voice input. -
"processing" -- Indicates that background processing is underway.
</PropertyReference>
<PropertyReference name="onFinishTranscribeWithAudio" type="(audioBlob: Blob) => Promise<void>"
Callback invoked when the user finishes voice transcription with an audio
recording. Receives the recorded audio as a Blob. Use this when you need to
send the raw audio for server-side speech-to-text processing.
</PropertyReference>
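For server-side speech-to-text, a handler for this callback might package the recorded Blob into a multipart request. The sketch below is illustrative only: the `/api/transcribe` endpoint and its `{ text }` response shape are assumptions, not part of the CopilotKit API.

```typescript
// Sketch of an onFinishTranscribeWithAudio handler. The endpoint URL
// ("/api/transcribe") and its { text } response shape are hypothetical.
async function handleFinishTranscribe(audioBlob: Blob): Promise<string> {
  const form = new FormData();
  // Package the recording as a multipart field for the server.
  form.append("audio", audioBlob, "recording.webm");

  const res = await fetch("/api/transcribe", { method: "POST", body: form });
  if (!res.ok) {
    throw new Error(`Transcription request failed: ${res.status}`);
  }
  const { text } = (await res.json()) as { text: string };
  return text;
}
```

The returned text can then be fed back into the input via `onChange`, as shown in the voice input example further down.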
<PropertyReference name="children" type="(props: { isMultiline: boolean }) => React.ReactElement"
Optional render-prop function that receives layout state. The isMultiline
flag is true when the textarea content spans multiple rows, which triggers
the stacked layout (textarea above the control row).
</PropertyReference>
All slot props follow the CopilotKit slot system: each accepts a replacement React component, a className string that is merged into the default component's classes, or a partial props object that extends the default component.
As a className:
<CopilotChatInput textArea="font-mono text-sm" />
As a replacement component:
<CopilotChatInput
sendButton={({ onClick }) => (
<button
onClick={onClick}
className="bg-blue-500 text-white px-3 py-1 rounded"
>
Send
</button>
)}
/>
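As a partial props object (a sketch: this assumes the default textarea slot forwards standard `<textarea>` attributes such as `placeholder`):

```tsx
<CopilotChatInput textArea={{ placeholder: "Ask me anything" }} />
```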
The ToolsMenuItem type defines items in the tools menu. Items can trigger actions directly or contain nested submenus.
type ToolsMenuItem = { label: string } & (
| { action: () => void }
| { items: (ToolsMenuItem | "-")[] }
);
Use the string "-" as an array entry to render a visual separator between menu items:
toolsMenu={[
{ label: "Search", action: () => search() },
"-",
{ label: "Settings", action: () => openSettings() },
]}
Menu items with an items array render as expandable submenus:
toolsMenu={[
{
label: "Export",
items: [
{ label: "As PDF", action: () => exportPDF() },
{ label: "As CSV", action: () => exportCSV() },
"-",
{ label: "Print", action: () => printDoc() },
],
},
]}
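Because `ToolsMenuItem` is recursive, menu-processing code can walk it with a small recursive helper. The `flattenLabels` function below is a hypothetical utility, not part of the library; the type is restated so the sketch is self-contained:

```typescript
// Restated from the docs: an item either triggers an action or nests a
// submenu, and "-" entries are visual separators.
type ToolsMenuItem = { label: string } & (
  | { action: () => void }
  | { items: (ToolsMenuItem | "-")[] }
);

// Hypothetical helper: collect every label in the tree, skipping separators.
function flattenLabels(items: (ToolsMenuItem | "-")[]): string[] {
  const labels: string[] = [];
  for (const item of items) {
    if (item === "-") continue; // separator, not a real item
    labels.push(item.label);
    if ("items" in item) {
      labels.push(...flattenLabels(item.items));
    }
  }
  return labels;
}

const menu: (ToolsMenuItem | "-")[] = [
  { label: "Search", action: () => {} },
  "-",
  { label: "Export", items: [{ label: "As PDF", action: () => {} }] },
];
// flattenLabels(menu) → ["Search", "Export", "As PDF"]
```

Note that the `"items" in item` check doubles as a TypeScript narrowing guard between the action and submenu branches of the union.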
A visual audio waveform component used during transcription mode. It displays a real-time waveform visualization of the microphone input and exposes an imperative API via ref.
The component exposes imperative methods, such as `start()` and `stop()`, via a React ref:
function CustomRecorder() {
const recorderRef = useRef(null);
return (
<div>
<CopilotChatAudioRecorder ref={recorderRef} />
<button onClick={() => recorderRef.current?.start()}>Record</button>
<button onClick={() => recorderRef.current?.stop()}>Stop</button>
</div>
);
}
function ChatInput() {
const [value, setValue] = useState("");
return (
<CopilotChatInput
value={value}
onChange={setValue}
onSubmitMessage={(text) => {
sendMessage(text);
setValue("");
}}
autoFocus
/>
);
}
function VoiceInput() {
const [mode, setMode] = useState<"input" | "transcribe">("input");
const [value, setValue] = useState("");
return (
<CopilotChatInput
mode={mode}
value={value}
onChange={setValue}
onSubmitMessage={(text) => sendMessage(text)}
onStartTranscribe={() => setMode("transcribe")}
onCancelTranscribe={() => setMode("input")}
onFinishTranscribeWithAudio={async (blob) => {
const text = await transcribeAudio(blob);
setValue(text);
setMode("input");
}}
/>
);
}
function InputWithTools() {
return (
<CopilotChatInput
onSubmitMessage={(text) => sendMessage(text)}
toolsMenu={[
{ label: "Search the web", action: () => triggerSearch() },
{ label: "Analyze data", action: () => triggerAnalysis() },
"-",
{
label: "Export",
items: [
{ label: "As PDF", action: () => exportPDF() },
{ label: "As Markdown", action: () => exportMarkdown() },
],
},
]}
/>
);
}
function ChatInputWithAgent() {
const { agent } = useAgent();
return (
<CopilotChatInput
isRunning={agent.isRunning}
onSubmitMessage={(text) => {
agent.addMessage({
role: "user",
content: text,
id: crypto.randomUUID(),
});
agent.runAgent();
}}
onStop={() => agent.abortRun()}
autoFocus
/>
);
}
function AdaptiveInput() {
return (
<CopilotChatInput onSubmitMessage={(text) => sendMessage(text)}>
{({ isMultiline }) => (
<div
className={
isMultiline ? "border-2 border-blue-300 rounded-xl p-2" : ""
}
>
{isMultiline && (
<div className="text-xs text-gray-400 mb-1">Multi-line mode</div>
)}
</div>
)}
</CopilotChatInput>
);
}
function StyledInput() {
return (
<CopilotChatInput
onSubmitMessage={(text) => sendMessage(text)}
textArea="font-mono text-sm bg-gray-50 rounded-lg"
sendButton="bg-green-500 hover:bg-green-600 text-white rounded-full"
/>
);
}
mode to "transcribe" replaces the textarea with the audioRecorder slot component. The transcription control buttons (cancelTranscribeButton, finishTranscribeButton) replace the standard send button.value and onChange) and uncontrolled mode (internal state management).toolsMenu prop enables a declarative menu with support for flat items, separators ("-"), and nested submenus. The menu button is only rendered when toolsMenu is provided.CopilotChatView -- Higher-level chat view that composes this componentCopilotChatAudioRecorder -- Audio recorder used during transcription modeuseCopilotChatConfiguration -- Provider for localized input labels