# @mastra/voice-modelslab

ModelsLab voice integration for Mastra. Provides text-to-speech using ModelsLab's TTS API.
## Installation

```bash
npm install @mastra/voice-modelslab
```
## Usage

```typescript
import { ModelsLabVoice } from '@mastra/voice-modelslab';

const voice = new ModelsLabVoice({
  speechModel: {
    apiKey: process.env.MODELSLAB_API_KEY,
  },
  speaker: '5', // Female voice
});

// Text to speech
const audioStream = await voice.speak('Hello, world!', {
  speaker: 'nova', // OpenAI-style voices also work: alloy, echo, fable, onyx, nova, shimmer
  language: 'english',
  speed: 1.0,
});

// List available voices
const speakers = await voice.getSpeakers();
```
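The stream returned by `speak()` is a Node.js `Readable`, so you can pipe it to a file or collect it into a buffer. A minimal sketch, assuming a Node stream is returned; the `streamToBuffer` helper below is illustrative and not part of the package:

```typescript
import { Readable } from 'stream';
import { createWriteStream } from 'fs';
import { pipeline } from 'stream/promises';

// Illustrative helper: collect a Readable stream into a single Buffer.
async function streamToBuffer(stream: Readable): Promise<Buffer> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}

// Example: write the generated speech to disk.
// const audioStream = await voice.speak('Hello, world!');
// await pipeline(audioStream, createWriteStream('hello.mp3'));
```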
## Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| `speechModel.apiKey` | string | `MODELSLAB_API_KEY` env | ModelsLab API key |
| `speaker` | string | `'1'` | Default voice ID (1–6) or OpenAI voice name |
## Voices

| ID | Name | Gender |
|---|---|---|
| 1 | Neutral | neutral |
| 2 | Male | male |
| 3 | Warm | male |
| 4 | Deep Male | male |
| 5 | Female | female |
| 6 | Clear Female | female |
## Notes

- OpenAI voice names are also accepted: alloy, echo, fable, onyx, nova, shimmer.
- ModelsLab uses key-in-body authentication (the `MODELSLAB_API_KEY` is sent in the request body rather than in a header) and generates audio asynchronously: the provider submits a job and polls until the result is ready.
- See the ModelsLab TTS docs for full API details.
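The submit-then-poll flow is handled for you, but it can be sketched roughly as follows. This is an illustration of the general async pattern, not the package's actual implementation; the field names (`status`, `fetch_result`, `output`) and the injectable `fetchJson` function are assumptions:

```typescript
// A minimal request function type so the polling loop can be tested with a stub.
type FetchJson = (url: string, body?: object) => Promise<any>;

// Submit a TTS job, then poll the result URL until the audio is ready.
// Field names are assumptions modeled on ModelsLab's async response pattern.
async function pollForAudio(
  fetchJson: FetchJson,
  submitUrl: string,
  payload: object, // would include the API key, per key-in-body auth
  delayMs = 1000,
  maxTries = 30,
): Promise<string> {
  let res = await fetchJson(submitUrl, payload);
  for (let i = 0; i < maxTries; i++) {
    if (res.status === 'success') return res.output[0]; // URL of the generated audio
    if (res.status === 'error') throw new Error(String(res.message));
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    res = await fetchJson(res.fetch_result, payload); // poll the job's result endpoint
  }
  throw new Error('Timed out waiting for audio generation');
}
```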