docs/src/content/en/reference/voice/google-gemini-live.mdx
The `GeminiLiveVoice` class provides real-time voice interaction capabilities using Google's Gemini Live API. It supports bidirectional audio streaming, tool calling, session management, and both standard Google API and Vertex AI authentication methods.
```typescript
import { GeminiLiveVoice } from '@mastra/voice-google-gemini-live'
import { playAudio, getMicrophoneStream } from '@mastra/node-audio'

// Initialize with Gemini API (using API key)
const voice = new GeminiLiveVoice({
  apiKey: process.env.GOOGLE_API_KEY, // Required for Gemini API
  model: 'gemini-2.0-flash-exp',
  speaker: 'Puck', // Default voice
  debug: true,
})

// Or initialize with Vertex AI (using OAuth)
const voiceWithVertexAI = new GeminiLiveVoice({
  vertexAI: true,
  project: 'your-gcp-project',
  location: 'us-central1',
  serviceAccountKeyFile: '/path/to/service-account.json',
  model: 'gemini-2.0-flash-exp',
  speaker: 'Puck',
})

// Or use the VoiceConfig pattern (recommended for consistency with other providers)
const voiceWithConfig = new GeminiLiveVoice({
  speechModel: {
    name: 'gemini-2.0-flash-exp',
    apiKey: process.env.GOOGLE_API_KEY,
  },
  speaker: 'Puck',
  realtimeConfig: {
    model: 'gemini-2.0-flash-exp',
    apiKey: process.env.GOOGLE_API_KEY,
    options: {
      debug: true,
      sessionConfig: {
        interrupts: { enabled: true },
      },
    },
  },
})

// Establish connection (required before using other methods)
await voice.connect()

// Set up event listeners
voice.on('speaker', audioStream => {
  // Handle audio stream (NodeJS.ReadableStream)
  playAudio(audioStream)
})

voice.on('writing', ({ text, role }) => {
  // Handle transcribed text
  console.log(`${role}: ${text}`)
})

voice.on('turnComplete', ({ timestamp }) => {
  // Handle turn completion
  console.log('Turn completed at:', timestamp)
})

// Convert text to speech
await voice.speak('Hello, how can I help you today?', {
  speaker: 'Charon', // Override default voice
  responseModalities: ['AUDIO', 'TEXT'],
})

// Process audio input
const microphoneStream = getMicrophoneStream()
await voice.send(microphoneStream)

// Update session configuration
await voice.updateSessionConfig({
  speaker: 'Kore',
  instructions: 'Be more concise in your responses',
})

// When done, disconnect
await voice.disconnect()

// Or use the synchronous wrapper
voice.close()
```
<PropertiesTable content={[ { name: 'apiKey', type: 'string', description: 'Google API key for Gemini API authentication. Required unless using Vertex AI.', isOptional: true, }, { name: 'model', type: 'GeminiVoiceModel', description: 'The model ID to use for real-time voice interactions.', isOptional: true, defaultValue: "'gemini-2.0-flash-exp'", }, { name: 'speaker', type: 'GeminiVoiceName', description: 'Default voice ID for speech synthesis.', isOptional: true, defaultValue: "'Puck'", }, { name: 'vertexAI', type: 'boolean', description: 'Use Vertex AI instead of Gemini API for authentication.', isOptional: true, defaultValue: 'false', }, { name: 'project', type: 'string', description: 'Google Cloud project ID (required for Vertex AI).', isOptional: true, }, { name: 'location', type: 'string', description: 'Google Cloud region for Vertex AI.', isOptional: true, defaultValue: "'us-central1'", }, { name: 'serviceAccountKeyFile', type: 'string', description: 'Path to service account JSON key file for Vertex AI authentication.', isOptional: true, }, { name: 'serviceAccountEmail', type: 'string', description: 'Service account email for impersonation (alternative to key file).', isOptional: true, }, { name: 'instructions', type: 'string', description: 'System instructions for the model.', isOptional: true, }, { name: 'sessionConfig', type: 'GeminiSessionConfig', description: 'Session configuration including interrupt and context settings.', isOptional: true, properties: [ { type: 'GeminiSessionConfig', parameters: [ { name: 'interrupts', type: 'object', description: 'Interrupt handling configuration.', isOptional: true, }, { name: 'interrupts.enabled', type: 'boolean', description: 'Enable interrupt handling.', isOptional: true, defaultValue: 'true', }, { name: 'interrupts.allowUserInterruption', type: 'boolean', description: 'Allow user to interrupt model responses.', isOptional: true, defaultValue: 'true', }, { name: 'contextCompression', type: 'boolean', description: 'Enable automatic context compression.', isOptional: true, defaultValue: 'false', }, ], }, ], }, { name: 'debug', type: 'boolean', description: 'Enable debug logging for troubleshooting.', isOptional: true, defaultValue: 'false', }, ]} />
### connect()

Establishes a connection to the Gemini Live API. Must be called before using the `speak()`, `listen()`, or `send()` methods.
<PropertiesTable content={[ { name: 'requestContext', type: 'object', description: 'Optional request context for the connection.', isOptional: true, }, { name: 'returns', type: 'Promise<void>', description: 'Promise that resolves when the connection is established.', }, ]} />
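For example, a minimal connect-and-clean-up flow (assuming a configured `voice` instance as in the usage example above):

```typescript
try {
  // Must complete before speak(), listen(), or send() are called
  await voice.connect()
  await voice.speak('Connected and ready.')
} finally {
  // Always release the session, even if something failed above
  await voice.disconnect()
}
```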
### speak()

Converts text to speech and sends it to the model. Accepts either a string or a readable stream as input.
<PropertiesTable content={[ { name: 'input', type: 'string | NodeJS.ReadableStream', description: 'Text or text stream to convert to speech.', isOptional: false, }, { name: 'options', type: 'GeminiLiveVoiceOptions', description: 'Optional speech configuration.', isOptional: true, properties: [ { type: 'GeminiLiveVoiceOptions', parameters: [ { name: 'speaker', type: 'GeminiVoiceName', description: 'Voice ID to use for this specific speech request.', isOptional: true, defaultValue: "Constructor's speaker value", }, { name: 'languageCode', type: 'string', description: 'Language code for the response.', isOptional: true, }, { name: 'responseModalities', type: "('AUDIO' | 'TEXT')[]", description: 'Response modalities to receive from the model.', isOptional: true, defaultValue: "['AUDIO', 'TEXT']", }, ], }, ], }, ]} />
Returns: Promise<void> (responses are emitted via speaker and writing events)
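Because `speak()` also accepts a readable stream, text can be piped in from another source. A short sketch using Node's built-in `Readable.from()`:

```typescript
import { Readable } from 'stream'

// Stream text to speak() instead of passing a plain string
const textStream = Readable.from(['Hello from a ', 'streamed ', 'sentence.'])
await voice.speak(textStream, { speaker: 'Kore' })
```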
### listen()

Processes audio input for speech recognition. Takes a readable stream of audio data and returns the transcribed text.
<PropertiesTable content={[ { name: 'audioStream', type: 'NodeJS.ReadableStream', description: 'Audio stream to transcribe.', isOptional: false, }, { name: 'options', type: 'GeminiLiveVoiceOptions', description: 'Optional listening configuration.', isOptional: true, }, ]} />
Returns: Promise<string> - The transcribed text
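For example, transcribing a pre-recorded file (the path and audio format here are illustrative; the stream must contain audio the API can decode):

```typescript
import { createReadStream } from 'fs'

// Resolve the full transcription of an audio file
const transcript = await voice.listen(createReadStream('./question.wav'))
console.log('User said:', transcript)
```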
### send()

Streams audio data in real time to the Gemini service, for continuous streaming scenarios such as live microphone input.
<PropertiesTable content={[ { name: 'audioData', type: 'NodeJS.ReadableStream | Int16Array', description: 'Audio stream or buffer to send to the service.', isOptional: false, }, ]} />
Returns: Promise<void>
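Since `send()` also accepts an `Int16Array`, raw PCM16 frames from a custom capture pipeline can be pushed directly; a sketch (the frame size and 16 kHz rate are assumptions for illustration):

```typescript
// One 100 ms frame of 16-bit PCM at 16 kHz = 1600 samples
const pcmFrame = new Int16Array(1600)
// ...fill pcmFrame from your audio capture pipeline...
await voice.send(pcmFrame)
```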
### updateSessionConfig()

Updates the session configuration dynamically. This can be used to modify voice settings, speaker selection, and other runtime configuration.
<PropertiesTable content={[ { name: 'config', type: 'Partial<GeminiLiveVoiceConfig>', description: 'Configuration updates to apply.', isOptional: false, }, ]} />
Returns: Promise<void>
### addTools()

Adds a set of tools to the voice instance. Tools allow the model to perform additional actions during conversations. When GeminiLiveVoice is added to an Agent, any tools configured for the Agent are automatically available to the voice interface.
<PropertiesTable content={[ { name: 'tools', type: 'ToolsInput', description: 'Tools configuration to equip.', isOptional: false, }, ]} />
Returns: void
### addInstructions()

Adds or updates system instructions for the model.
<PropertiesTable content={[ { name: 'instructions', type: 'string', description: 'System instructions to set.', isOptional: true, }, ]} />
Returns: void
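For example:

```typescript
// Set or replace the system instructions for subsequent turns
voice.addInstructions('You are a friendly assistant for a cooking app. Keep answers brief.')
```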
### answer()

Triggers a response from the model. This method is primarily used internally when integrated with an Agent.
<PropertiesTable content={[ { name: 'options', type: 'Record<string, unknown>', description: 'Optional parameters for the answer request.', isOptional: true, }, ]} />
Returns: Promise<void>
### getSpeakers()

Returns a list of available voice speakers for the Gemini Live API.
Returns: Promise<Array<{ voiceId: string; description?: string }>>
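For example, to print the available voices:

```typescript
const speakers = await voice.getSpeakers()
for (const { voiceId, description } of speakers) {
  console.log(voiceId, description ?? '')
}
```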
### disconnect()

Disconnects from the Gemini Live session and cleans up resources. This is the asynchronous method that properly handles cleanup.
Returns: Promise<void>
### close()

Synchronous wrapper for `disconnect()`. Calls `disconnect()` internally without awaiting.
Returns: void
### on()

Registers an event listener for voice events.
<PropertiesTable content={[ { name: 'event', type: 'string', description: 'Name of the event to listen for.', isOptional: false, }, { name: 'callback', type: 'Function', description: 'Function to call when the event occurs.', isOptional: false, }, ]} />
Returns: void
### off()

Removes a previously registered event listener.
<PropertiesTable content={[ { name: 'event', type: 'string', description: 'Name of the event to stop listening to.', isOptional: false, }, { name: 'callback', type: 'Function', description: 'The specific callback function to remove.', isOptional: false, }, ]} />
Returns: void
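Because `off()` needs the same function reference that was registered, keep listeners in named variables:

```typescript
// Named handler so the same reference can be removed later
const onWriting = ({ text, role }: { text: string; role: 'assistant' | 'user' }) => {
  console.log(`${role}: ${text}`)
}

voice.on('writing', onWriting)
// ...later, stop receiving transcription events
voice.off('writing', onWriting)
```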
The `GeminiLiveVoice` class emits the following events:
<PropertiesTable content={[ { name: 'speaker', type: 'event', description: 'Emitted when audio data is received from the model. Callback receives a NodeJS.ReadableStream.', }, { name: 'speaking', type: 'event', description: 'Emitted with audio metadata. Callback receives { audioData?: Int16Array, sampleRate?: number }.', }, { name: 'writing', type: 'event', description: "Emitted when transcribed text is available. Callback receives { text: string, role: 'assistant' | 'user' }.", }, { name: 'session', type: 'event', description: "Emitted on session state changes. Callback receives { state: 'connecting' | 'connected' | 'disconnected' | 'disconnecting' | 'updated', config?: object }.", }, { name: 'turnComplete', type: 'event', description: 'Emitted when a conversation turn is completed. Callback receives { timestamp: number }.', }, { name: 'toolCall', type: 'event', description: 'Emitted when the model requests a tool call. Callback receives { name: string, args: object, id: string }.', }, { name: 'usage', type: 'event', description: 'Emitted with token usage information. Callback receives { inputTokens: number, outputTokens: number, totalTokens: number, modality: string }.', }, { name: 'error', type: 'event', description: 'Emitted when an error occurs. Callback receives { message: string, code?: string, details?: unknown }.', },
{ name: 'interrupt', type: 'event', description: "Interrupt events. Callback receives { type: 'user' | 'model', timestamp: number }.", }, ]} />
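For example, the `usage` and `error` events can be wired up for lightweight monitoring:

```typescript
voice.on('usage', ({ inputTokens, outputTokens, totalTokens, modality }) => {
  console.log(`Tokens (${modality}): in=${inputTokens}, out=${outputTokens}, total=${totalTokens}`)
})

voice.on('error', ({ message, code }) => {
  console.error(`Gemini Live error${code ? ` [${code}]` : ''}: ${message}`)
})
```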
The following Gemini Live models are available:
- `gemini-2.0-flash-exp` (default)
- `gemini-2.0-flash-exp-image-generation`
- `gemini-2.0-flash-live-001`
- `gemini-live-2.5-flash-preview-native-audio`
- `gemini-2.5-flash-exp-native-audio-thinking-dialog`
- `gemini-live-2.5-flash-preview`
- `gemini-2.5-flash-preview-tts`

The following voice options are available:

- `Puck` (default): Conversational, friendly
- `Charon`: Deep, authoritative
- `Kore`: Neutral, professional
- `Fenrir`: Warm, approachable

The simplest authentication method uses an API key from Google AI Studio:
```typescript
const voice = new GeminiLiveVoice({
  apiKey: 'your-api-key', // Required for Gemini API
  model: 'gemini-2.0-flash-exp',
})
```
For production use with OAuth authentication and Google Cloud Platform:
```typescript
// Using a service account key file
const voice = new GeminiLiveVoice({
  vertexAI: true,
  project: 'your-gcp-project',
  location: 'us-central1',
  serviceAccountKeyFile: '/path/to/service-account.json',
})

// Using Application Default Credentials
const voiceWithADC = new GeminiLiveVoice({
  vertexAI: true,
  project: 'your-gcp-project',
  location: 'us-central1',
})

// Using service account impersonation
const voiceWithImpersonation = new GeminiLiveVoice({
  vertexAI: true,
  project: 'your-gcp-project',
  location: 'us-central1',
  serviceAccountEmail: '[email protected]',
})
```
The Gemini Live API supports session resumption for handling network interruptions:
```typescript
voice.on('sessionHandle', ({ handle, expiresAt }) => {
  // Store the session handle for resumption
  saveSessionHandle(handle, expiresAt)
})

// Resume a previous session
const resumedVoice = new GeminiLiveVoice({
  sessionConfig: {
    enableResumption: true,
    maxDuration: '2h',
  },
})
```
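`saveSessionHandle()` above is user-defined; one possible sketch (assuming a string handle and a numeric expiry timestamp) persists it to disk so a later process can decide whether resumption is still worthwhile:

```typescript
import { writeFileSync } from 'fs'

// Hypothetical helper: persist the handle and expiry across restarts
function saveSessionHandle(handle: string, expiresAt: number) {
  writeFileSync('./session-handle.json', JSON.stringify({ handle, expiresAt }))
}
```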
Enable the model to call functions during conversations:
```typescript
import { z } from 'zod'

voice.addTools({
  weather: {
    description: 'Get weather information',
    parameters: z.object({
      location: z.string(),
    }),
    execute: async ({ location }) => {
      // getWeather is a user-defined helper (not part of the SDK)
      const weather = await getWeather(location)
      return weather
    },
  },
})

voice.on('toolCall', ({ name, args, id }) => {
  console.log(`Tool called: ${name} with args:`, args)
})
```
A few things to keep in mind:

- Call `connect()` before using other methods.
- Call `close()` (or `await disconnect()`) when done to properly clean up resources.
- Vertex AI authentication requires appropriate IAM permissions (the `aiplatform.user` role).