PlayHTVoice
Properties
This enables specifying a voice that doesn't already exist as a PlayHTVoiceIdType enum value.
An emotion to be applied to the speech.
This determines whether fillers are injected into the model output before it is sent to the voice provider.
Default `false` because you can achieve better results by prompting the model.
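As a rough illustration of the properties above, a PlayHT voice block might look like the sketch below. The field names (`voiceId`, `emotion`, `fillerInjectionEnabled`) and the example values are assumptions, not the authoritative schema.

```typescript
// Minimal sketch of a PlayHT voice configuration (field names assumed).
const voice = {
  provider: "playht",
  // A custom voice ID string, used when the voice is not one of the
  // predefined PlayHTVoiceIdType enum values.
  voiceId: "your-custom-playht-voice-id",
  // An emotion to apply to the speech (value shown is illustrative).
  emotion: "female_happy",
  // Defaults to false; prompting the model usually works better than
  // injecting fillers afterward.
  fillerInjectionEnabled: false,
};
```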
This is the minimum number of characters required before a chunk is created. Chunks are sent to the voice provider
for voice generation as the model tokens stream in. Defaults to 30.
Increasing this value might add latency because the model must output a full chunk before it is sent to the
voice provider. On the other hand, increasing it can be a good idea if you want to give the voice provider bigger chunks,
so it can pronounce them better.
Decreasing this value might reduce latency, but it might also reduce quality if the voice provider struggles to
pronounce the shorter chunks correctly.
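To make the trade-off concrete, here is a simplified sketch of how a minimum-character chunker could behave as tokens stream in. This is not Vapi's actual implementation; only the default threshold of 30 comes from the description above.

```typescript
// Simplified sketch: buffer streaming model tokens until at least
// `minCharacters` have accumulated, then emit a chunk to the voice provider.
function makeChunker(send: (chunk: string) => void, minCharacters = 30) {
  let buffer = "";
  return (token: string) => {
    buffer += token;
    if (buffer.length >= minCharacters) {
      // Larger thresholds mean bigger, better-pronounced chunks but more latency.
      send(buffer);
      buffer = "";
    }
  };
}
```

With the default of 30, a short greeting arrives as a single chunk instead of word-by-word fragments.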
This determines whether the model output is preprocessed into chunks before being sent to the voice provider.
Default `true` because voice generation sounds better with chunking (and with reformatting of the chunks).
To send every token from the model output directly to the voice provider and rely on the voice provider's audio
generation logic, set this to `false`.
If disabled, vapi-provided audio control tokens like
These are the punctuation marks that are considered valid boundaries at which a chunk can be created. Chunks are sent
to the voice provider for voice generation as the model tokens stream in. Defaults are chosen differently
for each provider.
Constraining the delimiters might add latency because the model must output a full chunk before it is sent to
the voice provider. On the other hand, constraining them can be a good idea if you want to give the voice provider longer
chunks, so speech sounds less disjointed across chunks, e.g. `['.']`.
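For example, to force chunks to end only at sentence-final periods, the boundary list could be constrained to a single delimiter. The field name `inputPunctuationBoundaries` is an assumption; only the behavior is described above.

```typescript
// Sketch: constrain chunk boundaries to sentence-final periods only (field name assumed).
// Longer, sentence-aligned chunks sound less disjointed but arrive later.
const voice = {
  provider: "playht",
  voiceId: "your-custom-playht-voice-id",
  inputPunctuationBoundaries: ["."],
};
```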
This determines whether each chunk is reformatted before being sent to the voice provider. Many things are reformatted,
including phone numbers, emails, and addresses, to improve their enunciation.
Default `true` because voice generation sounds better with reformatting.
To disable chunk reformatting, set this to `false`.
To disable chunking completely, set `inputPreprocessingEnabled` to `false`.
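Putting the two toggles together, a sketch of the relevant flags follows. `inputPreprocessingEnabled` is named above; the reformatting flag name (`inputReformattingEnabled`) is an assumption.

```typescript
// Sketch of the chunking/reformatting switches (reformatting flag name assumed).
const voice = {
  provider: "playht",
  voiceId: "your-custom-playht-voice-id",
  inputPreprocessingEnabled: true, // false = stream every token straight through, no chunking
  inputReformattingEnabled: true,  // false = keep chunking but skip reformatting of numbers, emails, addresses
};
```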
A number between 1 and 30. Use lower numbers to reduce how strong your chosen emotion will be. Higher numbers will create a very emotional performance.
A floating point number between 0, exclusive, and 2, inclusive. If null or not provided, the model's default temperature will be used. The temperature parameter controls variance: lower temperatures produce more predictable results, while higher temperatures allow each run to vary more, so the voice may sound less like the baseline voice.
A number between 1 and 2. This number influences how closely the generated speech adheres to the input text. Use lower values to create more fluid speech, but with a higher chance of deviating from the input text. Higher numbers will make the generated speech more accurate to the input text, ensuring that the words spoken align closely with the provided text.
A number between 1 and 6. Use lower numbers to reduce how unique your chosen voice will be compared to other voices.
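The numeric tuning parameters above might be combined as in the sketch below. The field names (`styleGuidance`, `temperature`, `textGuidance`, `voiceGuidance`) are assumptions; the value ranges come from the descriptions above.

```typescript
// Sketch of the numeric tuning parameters (field names assumed; ranges from the descriptions above).
const voice = {
  provider: "playht",
  voiceId: "your-custom-playht-voice-id",
  styleGuidance: 15, // 1-30: higher = more emotional performance
  temperature: 1.0,  // (0, 2]: higher = more run-to-run variation
  textGuidance: 1.5, // 1-2: higher = adheres more closely to the input text
  voiceGuidance: 3,  // 1-6: higher = more distinct from other voices
};
```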
This is the provider-specific ID that will be used.