ai.languages
Use ai.languages to configure the spoken language of your AI Agent, as well as the TTS engine, voice, and fillers.
| Name | Type | Default | Description |
|---|---|---|---|
| `languages` (optional) | `object[]` | - | An array of objects that accept the `languages` parameters listed below. |
Parameters for the languages object
| Name | Type | Default | Description |
|---|---|---|---|
| `name` (required) | `string` | `English` | Name of the language ("French", "English", etc.). This value is used in the system prompt to tell the LLM which language is being spoken. |
| `code` (required) | `string` | `en-US` | The language code for ASR (Automatic Speech Recognition, i.e. speech-to-text). If a different STT model was selected using the `openai_asr_engine` parameter, you must select a code supported by that engine. |
| `voice` (required) | `string` | Standard-tier voice picked by SignalWire | String format: `<engine id>.<voice id>`. Select the engine from `gcloud`, `polly`, `elevenlabs`, or `deepgram`, and the voice from the TTS provider reference. For example, `"gcloud.fr-FR-Neural2-B"`. |
| `emotion` (optional) | `string` | None | Enables emotion for the set TTS engine, allowing the AI to express emotions when speaking. A global emotion, or specific emotions for certain topics, can be set within the prompt of the AI. Valid values: `auto`. **Important:** only works with the Cartesia TTS engine. |
| `function_fillers` (optional) | `string[]` | None | An array of strings to be used as fillers in the conversation when the agent is calling a SWAIG function. The filler is played asynchronously during the function call. |
| `model` (optional) | `string` | None | The model to use for the specified TTS engine (e.g. `arcana`). Check the TTS provider reference for the available models. |
| `speech_fillers` (optional) | `string[]` | None | An array of strings to be used as fillers in the conversation. This helps the AI break silence between responses. Note that `speech_fillers` are used between every 'turn' taken by the LLM, including at the beginning of the call. For more targeted fillers, consider using `function_fillers`. |
| `speed` (optional) | `string` | None | The speed to use for the specified TTS engine, allowing the AI to speak at different speeds at different points in the conversation. The speed behavior can be defined in the prompt of the AI. Valid values: `auto`. **Important:** only works with the Cartesia TTS engine. |
| `fillers` (optional) | `string[]` | None | An array of strings to be used as fillers in the conversation and when the agent is calling a SWAIG function. **Deprecated:** use `speech_fillers` and `function_fillers` instead. |
| `engine` (optional) | `string` | `gcloud` | The engine to use for the language. For example, `"elevenlabs"`. **Deprecated:** set the engine with the `voice` parameter. |
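For instance, the two filler parameters from the table above can be combined in a single language entry. This is a minimal sketch; the filler strings are illustrative placeholders, and the voice ID is the `gcloud` example used elsewhere in this document:

```yaml
languages:
  - name: English
    code: en-US
    voice: gcloud.en-US-Casual-K
    # Played between LLM turns to break silence in the conversation
    speech_fillers:
      - one moment,
      - let me check,
    # Played asynchronously while a SWAIG function call is in flight
    function_fillers:
      - I'm looking that up now,
```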
Use voice strings
Compose the voice string using the `<engine id>.<voice id>` syntax.
First, select your engine using the `gcloud`, `polly`, `elevenlabs`, or `deepgram` identifier.
Append a period (`.`), and then the specific voice ID (for example, `en-US-Casual-K`) from the TTS provider.
Refer to SignalWire's Supported Voices and Languages for guides on configuring voice ID strings for each provider.
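Putting the two steps together, each entry's `voice` pairs an engine identifier with a provider-specific voice ID. The voice IDs below are taken from the examples in this document; confirm availability in the provider reference:

```yaml
languages:
  - name: French
    code: fr-FR
    # "gcloud" engine + Google Cloud voice ID
    voice: gcloud.fr-FR-Neural2-B
  - name: English
    code: en-US
    # "elevenlabs" engine + ElevenLabs voice ID
    voice: elevenlabs.rachel
```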
Supported voices and languages
SignalWire's cloud platform integrates with leading text-to-speech providers. For a comprehensive list of supported engines, languages, and voices, refer to our documentation on Supported Voices and Languages.
Examples
Set a single language
SWML will automatically assign the language (and other required parameters) to the defaults in the above table if left unset.
This example uses ai.languages to configure a specific English-speaking voice from ElevenLabs.
- YAML

```yaml
languages:
  - name: English
    code: en-US
    voice: elevenlabs.rachel
    speech_fillers:
      - one moment please,
      - hmm...
      - let's see,
```

- JSON

```json
{
  "languages": [
    {
      "name": "English",
      "code": "en-US",
      "voice": "elevenlabs.rachel",
      "speech_fillers": [
        "one moment please,",
        "hmm...",
        "let's see,"
      ]
    }
  ]
}
```
Set multiple languages
SWML will automatically assign the language (and other required parameters) to the defaults in the above table if left unset.
This example uses ai.languages to configure multiple languages using different TTS engines.
- YAML

```yaml
languages:
  - name: Mandarin
    code: cmn-TW
    voice: gcloud.cmn-TW-Standard-A
  - name: English
    code: en-US
    voice: elevenlabs.rachel
```

- JSON

```json
{
  "languages": [
    {
      "name": "Mandarin",
      "code": "cmn-TW",
      "voice": "gcloud.cmn-TW-Standard-A"
    },
    {
      "name": "English",
      "code": "en-US",
      "voice": "elevenlabs.rachel"
    }
  ]
}
```