ElevenLabsVoice

interface ElevenLabsVoice : ElevenLabsVoiceProperties

Properties

abstract var customModel: String

This enables specifying a model that doesn't already exist as an ElevenLabsVoiceModelType enum.

abstract var customVoiceId: String

This enables specifying a voice that doesn't already exist as an ElevenLabsVoiceIdType enum.
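For example, a model or voice that is not covered by the enums can be selected with these two properties. A minimal sketch, assuming an `elevenLabsVoice { }` block inside an `assistant { }` builder (the builder names are assumptions for illustration; only the property names come from this page):

```kotlin
assistant {
    elevenLabsVoice {
        // Hypothetical builder usage: only the property names below come from this page.
        customModel = "my-custom-model-id"    // placeholder model ID not covered by the enum
        customVoiceId = "my-custom-voice-id"  // placeholder ID from your 11Labs Voice Library
    }
}
```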

abstract var enableSsmlParsing: Boolean?

Enables SSML parsing for pronunciation control, as described at https://elevenlabs.io/docs/speech-synthesis/prompting#pronunciation. Disabled by default.

abstract var fillerInjectionEnabled: Boolean?

This determines whether fillers are injected into the model output before it is sent to the voice provider.
Defaults to `false` because better results can usually be achieved by prompting the model directly.
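A minimal sketch of toggling these two flags together, again assuming the hypothetical `elevenLabsVoice { }` builder from the example above:

```kotlin
elevenLabsVoice {
    // Let ElevenLabs interpret SSML pronunciation markup instead of reading it verbatim.
    enableSsmlParsing = true
    // Keep filler injection off and rely on prompting for natural-sounding fillers.
    fillerInjectionEnabled = false
}
```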

abstract var inputMinCharacters: Int

This is the minimum number of characters required before a chunk is created. Chunks are sent to the voice provider for voice generation as the model tokens stream in. Defaults to 30.
Increasing this value may add latency, since the system waits for the model to output a full chunk before sending it to the voice provider. On the other hand, a higher value gives the voice provider bigger chunks, which it may be able to pronounce better.
Decreasing this value may reduce latency, but can also reduce quality if the voice provider struggles to pronounce the text correctly.
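For instance, to trade a little latency for better pronunciation of longer phrases, the chunk size could be raised. A sketch under the same builder assumption; the value 60 is illustrative only:

```kotlin
elevenLabsVoice {
    // Wait for at least 60 characters before sending a chunk to the voice provider.
    // Larger chunks give the provider more context, at the cost of a little latency.
    inputMinCharacters = 60
}
```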

abstract var inputPreprocessingEnabled: Boolean?

This determines whether the model output is preprocessed into chunks before being sent to the voice provider.
Defaults to `true` because voice generation sounds better with chunking (and reformatting of those chunks).
To send every token from the model output directly to the voice provider and rely on the voice provider's audio generation logic, set this to `false`.
If disabled, Vapi-provided audio control tokens will not work.
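A sketch of bypassing preprocessing entirely, under the same builder assumption:

```kotlin
elevenLabsVoice {
    // Stream every model token straight to the voice provider and rely on its
    // own audio generation logic. Chunking, reformatting, and Vapi-provided
    // audio control tokens are all bypassed in this mode.
    inputPreprocessingEnabled = false
}
```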

These are the punctuation marks that are considered valid boundaries before a chunk is created. Chunks are sent to the voice provider for voice generation as the model tokens stream in. Defaults are chosen differently for each provider.
Constraining the delimiters may add latency, since the system waits for the model to output a full chunk before sending it to the voice provider. On the other hand, constraining them gives the voice provider longer chunks, so speech can sound less disjointed across chunks, e.g. ['.'].

abstract var inputReformattingEnabled: Boolean?

This determines whether each chunk is reformatted before being sent to the voice provider. Many things are reformatted, including phone numbers, emails, and addresses, to improve their enunciation.
Defaults to `true` because voice generation sounds better with reformatting.
To disable chunk reformatting, set this to `false`.
To disable chunking completely, set `inputPreprocessingEnabled` to `false`.
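A sketch of keeping chunking while skipping reformatting, under the same builder assumption:

```kotlin
elevenLabsVoice {
    // Keep chunked delivery to the provider...
    inputPreprocessingEnabled = true
    // ...but send the raw chunk text (phone numbers, emails, addresses, etc.)
    // without reformatting it for enunciation.
    inputReformattingEnabled = false
}
```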

This is the model that will be used. Defaults to 'eleven_turbo_v2' if not specified.

abstract var optimizeStreaming: Double

Defines the optimize-streaming-latency setting for the voice. Defaults to 3.

abstract var similarityBoost: Double

Defines the similarity boost for voice settings.

abstract var stability: Double

Defines the stability for voice settings.

abstract var style: Double

Defines the style for voice settings.

abstract var useSpeakerBoost: Boolean?

Defines the use speaker boost for voice settings.

This is the provider-specific ID that will be used. Ensure the Voice is present in your 11Labs Voice Library.
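Combined with a voice selection (via the enum-typed ID or `customVoiceId` shown earlier), the remaining voice-quality settings can be tuned in one block. A sketch under the same hypothetical builder assumption; apart from the documented optimizeStreaming default of 3, the numeric values are illustrative only:

```kotlin
elevenLabsVoice {
    optimizeStreaming = 3.0   // documented default
    stability = 0.5           // illustrative value
    similarityBoost = 0.75    // illustrative value
    style = 0.0               // illustrative value
    useSpeakerBoost = true    // illustrative value
}
```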