CustomLLMModelProperties

Properties

This determines whether we detect the user's emotion while they speak and send it as additional information to the model.
Default is `false` because the model is usually good at understanding the user's emotion from the text alone.

abstract var maxTokens: Int

This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

This determines whether metadata is sent in requests to the custom provider.

  • off will not send any metadata. The payload will look like { messages }.
  • variable will send assistant.metadata as a variable on the payload. The payload will look like { messages, metadata }.
  • destructured will send the assistant.metadata fields directly on the payload. The payload will look like { messages, ...metadata }.

In both the variable and destructured modes, the call, phoneNumber, and customer objects are also sent in the payload. Default is variable. See the sketch after this list.
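The following is a hypothetical reconstruction of how each mode shapes the request body sent to the custom provider; it only mirrors the bullets above and is not SDK source. The function name, parameter names, and the use of plain maps are assumptions for illustration.

```kotlin
// Hypothetical reconstruction of the payload shapes described above; not SDK source.
fun buildPayload(
    mode: String,                         // "off", "variable", or "destructured"
    messages: List<Map<String, String>>,  // chat messages, e.g. role/content pairs
    metadata: Map<String, Any>,           // assistant.metadata
    callObjects: Map<String, Any>,        // call, phoneNumber, and customer objects
): Map<String, Any> = when (mode) {
    // off: { messages }
    "off" -> mapOf("messages" to messages)
    // variable: { messages, metadata } plus call, phoneNumber, customer
    "variable" -> mapOf("messages" to messages, "metadata" to metadata) + callObjects
    // destructured: { messages, ...metadata } plus call, phoneNumber, customer
    "destructured" -> mapOf("messages" to messages) + metadata + callObjects
    else -> error("Unknown metadata send mode: $mode")
}
```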

abstract var model: String

This is the name of the model.

abstract var numFastTurns: Int

This sets how many turns at the start of the conversation will use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0.
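As a rough illustration of that behavior, the sketch below picks which model serves a given turn based on numFastTurns; the function and the fast-model name are assumptions for an openai provider, not SDK code.

```kotlin
// Illustration only: which model serves a given turn under numFastTurns.
fun modelForTurn(turnIndex: Int, numFastTurns: Int, primaryModel: String): String =
    if (turnIndex < numFastTurns) "gpt-3.5-turbo"  // smaller, faster model for the opening turns
    else primaryModel                              // switch to the primary model afterwards
```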

abstract var temperature: Double

This is the temperature that will be used for calls.

abstract val toolIds: MutableSet<String>

These are the IDs of the tools that the assistant can use during the call. To define transient tools inline, use tools. Both tools and toolIds can be used together.

abstract var url: String

This is the URL we'll use for the OpenAI client's baseURL.
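Putting the scalar properties together, here is a minimal, self-contained sketch. The interface is mirrored locally only so the example compiles on its own; the concrete class, endpoint URL, model name, and tool ID are all hypothetical.

```kotlin
// Local mirror of the documented properties so this sketch is self-contained;
// in real code these come from the SDK's CustomLLMModelProperties.
interface CustomLLMModelProps {
    var model: String
    var url: String
    var maxTokens: Int
    var temperature: Double
    var numFastTurns: Int
    val toolIds: MutableSet<String>
}

// Hypothetical implementation pointing the assistant at a self-hosted,
// OpenAI-compatible endpoint.
class ExampleCustomLLMModel : CustomLLMModelProps {
    override var model: String = "my-finetuned-model"        // name of the model to request
    override var url: String = "https://llm.example.com/v1"  // used as the OpenAI client's baseURL
    override var maxTokens: Int = 250                        // per-turn generation cap (documented default)
    override var temperature: Double = 0.7                   // sampling temperature used for calls
    override var numFastTurns: Int = 0                       // 0 = always use the primary model
    override val toolIds: MutableSet<String> = mutableSetOf()
}

fun main() {
    val model = ExampleCustomLLMModel().apply {
        maxTokens = 150
        toolIds += "tool_abc123"  // hypothetical ID of a pre-registered tool
    }
    println("${model.model} @ ${model.url}, maxTokens=${model.maxTokens}")
}
```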