Custom models

StartKit.AI can be configured to use your own self-hosted LLM, or any other custom model that’s not directly supported.

Custom models file

Simply add your model to the config/models/custom.yml file like this:

config/models/custom.yml
custom/llama3.1:
  maxTokens: 8192
  maxInputTokens: 8192
  maxOutputTokens: 4096
  inputCostPerToken: 0
  outputCostPerToken: 0
  mode: chat
  provider: ollama
  host: http://localhost:11434
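The inputCostPerToken and outputCostPerToken fields let StartKit.AI track usage costs; for a self-hosted model they can stay at 0. As a rough sketch of how such per-token pricing combines into a request cost (the function name is illustrative, not part of StartKit.AI):

```python
# Illustrative only: how per-token pricing fields combine into a request cost.
def request_cost(input_tokens, output_tokens,
                 input_cost_per_token=0.0, output_cost_per_token=0.0):
    """Return the total cost of one request given per-token prices."""
    return (input_tokens * input_cost_per_token
            + output_tokens * output_cost_per_token)

# With both costs set to 0 (as for a local llama3.1), every request is free:
print(request_cost(1000, 200, 0.0, 0.0))  # 0.0
```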

As long as the model is in Portkey.ai Gateway's supported list, it should work, since this is what StartKit.AI uses behind the scenes.

Then reference your model in the config file for the feature you want to use it with. For example:

config/chat.yml
models:
  - custom/llama3.1
  - gpt-4o
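Assuming the models list is read in order as shown above (an assumption; PyYAML would normally parse the file, and the hand-rolled parser here is for illustration only), extracting the configured models looks roughly like this:

```python
# Minimal sketch: read the `models` list from a chat.yml-style config string.
def configured_models(config_text):
    models, in_models = [], False
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped == "models:":
            in_models = True
        elif in_models and stripped.startswith("- "):
            models.append(stripped[2:])
        elif in_models and stripped:
            break  # a new top-level key ends the list
    return models

config = """\
models:
  - custom/llama3.1
  - gpt-4o
"""
print(configured_models(config))  # ['custom/llama3.1', 'gpt-4o']
```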

API Key

If your custom model requires an API key, then StartKit.AI will attempt to use one from the .env file that matches the provider name. For example, if your provider is named anthropic, we will try to use the ANTHROPIC_KEY env value to authenticate.
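The naming convention described above (the provider name, upper-cased, with a _KEY suffix) can be sketched as follows; the helper name and the placeholder key value are illustrative, not part of StartKit.AI:

```python
import os

# Sketch of the env-var naming convention: provider name, upper-cased,
# with a "_KEY" suffix. The helper name is illustrative.
def api_key_env_var(provider):
    return f"{provider.upper()}_KEY"

os.environ["ANTHROPIC_KEY"] = "sk-example"  # placeholder value for illustration
print(os.environ.get(api_key_env_var("anthropic")))  # sk-example
```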