CustomLLMModel

interface CustomLLMModel : CustomLLMModelProperties, CommonModelProperties

Properties

abstract var assistantMessage: String

This is the starting state for the conversation for the assistant role.

abstract var emotionRecognitionEnabled: Boolean?

This determines whether we detect the user's emotion while they speak and send it to the model as additional information. Default is `false`, because the model is usually good at understanding the user's emotion from text.

abstract var functionMessage: String

This is the starting state for the conversation for the function role.

abstract var maxTokens: Int

This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

This determines whether metadata is sent in requests to the custom provider.

  • off will not send any metadata. Payload will look like { messages }.
  • variable will send assistant.metadata as a variable on the payload. Payload will look like { messages, metadata }.
  • destructured will send assistant.metadata fields directly on the payload. Payload will look like { messages, ...metadata }.

Additionally, variable and destructured will send call, phoneNumber, and customer objects in the payload. Default is variable.

abstract var model: String

This is the name of the model.

abstract var numFastTurns: Int

This sets how many turns at the start of the conversation will use a smaller, faster model from the same provider before switching to the primary model (for example, gpt-3.5-turbo when the provider is openai). Default is 0.

abstract var systemMessage: String

This is the starting state for the conversation for the system role.

abstract var temperature: Double

This is the temperature that will be used for calls.

abstract val toolIds: MutableSet<String>

These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.

abstract var toolMessage: String

This is the starting state for the conversation for the tool role.

abstract var url: String

This is the URL we'll use for the OpenAI client's baseURL.

abstract var userMessage: String

This is the starting state for the conversation for the user role.
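
Taken together, these properties describe one model configuration. Below is a minimal sketch; the surrounding `customLLMModel { }` entry point and all concrete values are hypothetical, but the properties are the ones documented above:

```kotlin
// Hypothetical builder entry point; property names are those documented
// above, the values are illustrative only.
customLLMModel {
    // Base URL for the OpenAI client pointed at the custom provider.
    url = "https://llm.example.com/v1"   // hypothetical endpoint
    model = "my-custom-model"            // hypothetical model name

    // Starting state of the conversation, per role.
    systemMessage = "You are a helpful voice assistant."

    // Generation settings.
    temperature = 0.7
    maxTokens = 250   // the default per-turn generation cap

    // Serve the first two turns with a smaller, faster model from the
    // same provider before switching to the primary model (default 0).
    numFastTurns = 2

    // Off by default; the model usually infers emotion from the text.
    emotionRecognitionEnabled = false
}
```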

Functions

abstract fun functions(block: Functions.() -> Unit): Functions

This is the function definition of the tool. For endCall, transferCall, and dtmf tools, this is auto-filled based on tool-specific fields like tool.destinations. Even in those cases, though, you can provide a custom function definition for advanced use cases. For example, to customize the message that's spoken for the endCall tool, you can specify a function that returns a "reason" argument. Then, in the messages array, you can define several "request-complete" messages; the one whose messages[].conditions matches the "reason" argument will be triggered.
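
As a hedged illustration of that advanced use case: the sketch below assumes a `function { }` builder and a `parameter(...)` helper inside the Functions block, which are hypothetical stand-ins for whatever the actual Functions DSL exposes. Only the overall pattern (a custom endCall definition returning a "reason" argument for messages[].conditions to match on) comes from the description above.

```kotlin
functions {
    // Hypothetical DSL shape: override the auto-filled endCall definition
    // so the model supplies a "reason" argument when it ends the call.
    function {
        name = "endCall"
        description = "End the call and report why it ended."
        // "request-complete" messages elsewhere can then match on this
        // argument via messages[].conditions to pick what is spoken.
        parameter(name = "reason", type = "string")
    }
}
```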

abstract fun knowledgeBase(block: KnowledgeBase.() -> Unit): KnowledgeBase

These are the options for the knowledge base.

abstract fun tools(block: Tools.() -> Unit): Tools

These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.
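
Since both can be used together, a configuration might reference saved tools by ID while defining a transient tool inline. The `tool { }` builder inside the Tools block is an assumed shape for illustration; toolIds is the MutableSet<String> documented above:

```kotlin
// Reference existing, saved tools by ID (hypothetical ID).
toolIds += "tool_abc123"

// ...and define a transient tool inline. The tool { } shape inside
// Tools is an assumption; the actual builder API may differ.
tools {
    tool {
        name = "lookupOrder"   // hypothetical transient tool
        description = "Look up an order by its number."
    }
}
```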