AssistantOverrides

These are overrides for the settings and template variables of the assistant, or of the assistant referenced by assistantId.
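To illustrate, a minimal sketch of applying these overrides in Kotlin, assuming the SDK's types are in scope and that it hands you an AssistantOverrides instance somewhere; the extension function below is hypothetical, and only properties documented on this page are used:

// Hypothetical extension: the SDK supplies the AssistantOverrides receiver.
fun AssistantOverrides.applySupportDefaults() {
    name = "support-agent"
    firstMessage = "Hi! You've reached support. How can I help?"
    endCallMessage = "Thanks for calling. Goodbye!"
    // Template variables referenced by the assistant's prompt templates.
    variableValues["customerName"] = "Ada"
    variableValues["plan"] = "pro"
}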

Properties

abstract var backchannelingEnabled: Boolean?

This determines whether the model says 'mhmm', 'ahem', etc. while the user is speaking. Defaults to false while in beta.

abstract var backgroundDenoisingEnabled: Boolean?

This enables filtering of noise and background speech while the user is talking. Defaults to false while in beta.

This is the background sound in the call. The default for phone calls is 'office' and the default for web calls is 'off'.

abstract val clientMessages: MutableSet<AssistantClientMessageType>

These are the messages that will be sent to your Client SDKs. Default is CONVERSATION_UPDATE, FUNCTION_CALL, HANG, MODEL_OUTPUT, SPEECH_UPDATE, STATUS_UPDATE, TRANSCRIPT, TOOL_CALLS, USER_INTERRUPTED, and VOICE_INPUT. You can check the shape of the messages in the ClientMessage schema.
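As a sketch, trimming client-bound traffic to just transcripts and status updates (hypothetical extension; both enum constants appear in the default set above):

// Replace the default client message set with a minimal one.
fun AssistantOverrides.minimalClientMessages() {
    clientMessages.clear()
    clientMessages += AssistantClientMessageType.TRANSCRIPT
    clientMessages += AssistantClientMessageType.STATUS_UPDATE
}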

abstract var dialKeypadFunctionEnabled: Boolean?

abstract var endCallFunctionEnabled: Boolean?

abstract var endCallMessage: String

This is the message that the assistant will say if it ends the call.
If unspecified, it will hang up without saying anything.

abstract val endCallPhrases: MutableSet<String>

abstract var firstMessage: String

This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, the assistant will wait for the user to speak and use the model to respond once they do.

This is the mode for the first message. Default is 'assistant-speaks-first'. Use:

  • 'assistant-speaks-first' to have the assistant speak first.
  • 'assistant-waits-for-user' to have the assistant wait for the user to speak first.
  • 'assistant-speaks-first-with-model-generated-message' to have the assistant speak first with a message generated by the model based on the conversation state (assistant.model.messages at call start, call.messages at squad transfer points).
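A sketch of the default mode in practice (hypothetical extension; only the documented firstMessage property is set):

fun AssistantOverrides.speakFirst() {
    // Under the default 'assistant-speaks-first' mode, this message is
    // spoken as soon as the call starts.
    firstMessage = "Hello! Thanks for calling."
}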

abstract var forwardingPhoneNumber: String

abstract var hipaaEnabled: Boolean?

When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.

abstract var llmRequestDelaySeconds: Double

The minimum number of seconds to wait after transcription (with punctuation) before sending a request to the model. Defaults to 0.1.

abstract var llmRequestNonPunctuatedDelaySeconds: Double

The minimum number of seconds to wait after transcription (without punctuation) before sending a request to the model. Defaults to 1.5.

abstract var maxDurationSeconds: Int

This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. Defaults to 1800 (30 minutes).

abstract val metadata: MutableMap<String, String>

abstract var modelOutputInMessagesEnabled: Boolean?

This determines whether the model's output is used in the conversation history rather than the transcription of the assistant's speech. Defaults to false while in beta.

abstract var name: String

This is the name of the assistant. It is required when you want to transfer between assistants in a call.

abstract var numWordsToInterruptAssistant: Int

The number of words to wait for before interrupting the assistant. Words like "stop", "actually", and "no" will always interrupt immediately regardless of this value, while words like "okay", "yeah", and "right" will never interrupt. When set to 0, interruption relies solely on the VAD (Voice Activity Detector) and does not wait for any transcription. Defaults to 0.

abstract var recordingEnabled: Boolean?

This sets whether the assistant's calls are recorded. Defaults to true.

abstract var responseDelaySeconds: Double

The minimum number of seconds after user speech to wait before the assistant starts speaking. Defaults to 0.4.
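The latency knobs above can be tuned together; a sketch with illustrative values (hypothetical extension; silenceTimeoutSeconds is documented further below):

// Make the assistant slightly more patient in turn-taking.
fun AssistantOverrides.tuneTurnTaking() {
    llmRequestDelaySeconds = 0.2      // wait 200 ms after punctuated transcription
    responseDelaySeconds = 0.6        // 600 ms pause after user speech
    numWordsToInterruptAssistant = 2  // require two words before barge-in
    silenceTimeoutSeconds = 45        // end the call after 45 s of silence
}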

abstract val serverMessages: MutableSet<AssistantServerMessageType>

These are the messages that will be sent to your Server URL. Default is CONVERSATION_UPDATE, END_OF_CALL_REPORT, FUNCTION_CALL, HANG, SPEECH_UPDATE, STATUS_UPDATE, TOOL_CALLS, TRANSFER_DESTINATION_REQUEST, and USER_INTERRUPTED. You can check the shape of the messages in the ServerMessage schema.

abstract var serverUrl: String

This is the URL Vapi will communicate with via HTTP GET and POST requests. It is used for retrieving context, function calling, and end-of-call reports. All requests will be sent with the call object, among other things relevant to that message. You can find more details in the Server URL documentation. This overrides the serverUrl set on the org and the phoneNumber. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl.

abstract var serverUrlSecret: String

This is a secret you can set that Vapi will send with every request to your server, as a header called x-vapi-secret. Same precedence logic as serverUrl.
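A sketch wiring the two together (hypothetical extension; the URL and environment variable are placeholders):

// Route server messages to your webhook; Vapi sends the secret with every
// request as the x-vapi-secret header.
fun AssistantOverrides.routeToWebhook() {
    serverUrl = "https://api.example.com/vapi/webhook"
    serverUrlSecret = System.getenv("VAPI_WEBHOOK_SECRET") ?: error("secret not set")
    // END_OF_CALL_REPORT is already in the default server message set; adding
    // it again is harmless and makes the dependency explicit.
    serverMessages += AssistantServerMessageType.END_OF_CALL_REPORT
}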

abstract var silenceTimeoutSeconds: Int

How many seconds of silence to wait before ending the call. Defaults to 30.

abstract val transportConfigurations: MutableList<TransportConfigurationDto>

abstract val variableValues: MutableMap<String, String>

abstract var videoRecordingEnabled: Boolean

This determines whether video is recorded during the call. Defaults to false. Only relevant for the webCall type.

abstract var voicemailMessage: String

This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.

Functions

abstract fun analysisPlan(block: AnalysisPlan.() -> Unit): AnalysisPlanImpl

This is the plan for analysis of the assistant's calls. Stored in call.analysis.

abstract fun anthropicModel(block: AnthropicModel.() -> Unit): AnthropicModel

Builder for the Anthropic model.

abstract fun anyscaleModel(block: AnyscaleModel.() -> Unit): AnyscaleModel

Builder for the Anyscale model.

abstract fun artifactPlan(block: ArtifactPlan.() -> Unit): ArtifactPlanImpl

This is the plan for artifacts generated during the assistant's calls. Stored in call.artifact. Note: recordingEnabled is currently at the root level. It will be moved to artifactPlan in the future, but will remain backwards compatible.

abstract fun azureVoice(block: AzureVoice.() -> Unit): AzureVoice

Builder for the Azure voice.

abstract fun cartesiaVoice(block: CartesiaVoice.() -> Unit): CartesiaVoice

Builder for the Cartesia voice.

abstract fun customLLMModel(block: CustomLLMModel.() -> Unit): CustomLLMModel

Builder for the CustomLLM model.

abstract fun deepgramTranscriber(block: DeepgramTranscriber.() -> Unit): DeepgramTranscriber

Builder for the Deepgram transcriber.
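For instance, selecting Deepgram as the transcriber (sketch; the builder's own options are documented on the DeepgramTranscriber page, so the block is left empty here):

fun AssistantOverrides.useDeepgram() {
    deepgramTranscriber {
        // Transcriber-specific options are configured via this builder.
    }
}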

abstract fun deepgramVoice(block: DeepgramVoice.() -> Unit): DeepgramVoice

Builder for the Deepgram voice.

abstract fun deepInfraModel(block: DeepInfraModel.() -> Unit): DeepInfraModel

Builder for the DeepInfra model.

abstract fun elevenLabsVoice(block: ElevenLabsVoice.() -> Unit): ElevenLabsVoice

Builder for the ElevenLabs voice.

abstract fun gladiaTranscriber(block: GladiaTranscriber.() -> Unit): GladiaTranscriber

Builder for the Gladia transcriber.

abstract fun groqModel(block: GroqModel.() -> Unit): GroqModel

Builder for the Groq model.

abstract fun lmntVoice(block: LMNTVoice.() -> Unit): LMNTVoice

Builder for the LMNT voice.

abstract fun neetsVoice(block: NeetsVoice.() -> Unit): NeetsVoice

Builder for the Neets voice.

abstract fun openAIModel(block: OpenAIModel.() -> Unit): OpenAIModel

Builder for the OpenAI model.
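The model and voice builders all follow the same receiver-block pattern; a sketch pairing the OpenAI model with the ElevenLabs voice (hypothetical extension; their specific options live on the respective builder pages):

fun AssistantOverrides.useOpenAIWithElevenLabs() {
    openAIModel {
        // Model-specific options are configured via the OpenAIModel builder.
    }
    elevenLabsVoice {
        // Voice-specific options are configured via the ElevenLabsVoice builder.
    }
}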

abstract fun openAIVoice(block: OpenAIVoice.() -> Unit): OpenAIVoice

Builder for the OpenAI voice.

abstract fun openRouterModel(block: OpenRouterModel.() -> Unit): OpenRouterModel

Builder for the OpenRouter model.

abstract fun perplexityAIModel(block: PerplexityAIModel.() -> Unit): PerplexityAIModel

Builder for the PerplexityAI model.

abstract fun playHTVoice(block: PlayHTVoice.() -> Unit): PlayHTVoice

Builder for the PlayHT voice.

abstract fun rimeAIVoice(block: RimeAIVoice.() -> Unit): RimeAIVoice

Builder for the RimeAI voice.

abstract fun talkscriberTranscriber(block: TalkscriberTranscriber.() -> Unit): TalkscriberTranscriber

Builder for the Talkscriber transcriber.

abstract fun togetherAIModel(block: TogetherAIModel.() -> Unit): TogetherAIModel

Builder for the TogetherAI model.

abstract fun vapiModel(block: VapiModel.() -> Unit): VapiModel

Builder for the Vapi model.

These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using model.tools=VoicemailTool. This uses Twilio's built-in detection, while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio's built-in detection is enabled while the VoicemailTool is not.