Configs + Prompts

Each endpoint that you implement can take its configuration from a .yaml file. These files are stored in the /config/prompts/ directory.

For example, a basic config file for a chat endpoint might look like this:

simple-chat.yml
prompts:
  # the main system prompt
  system: |
    You're a chatbot that can reply to messages about a variety of topics.
    This is how you should respond:
    - Be casual unless otherwise specified
    - Don't apologise
    - Be terse
    - Suggest solutions that I didn't think about (anticipate my needs)
    - Be accurate and thorough
    - No need to disclose you're an AI

# options to pass directly to OpenAI
options:
  max_tokens: 4096
  temperature: 0.7
  frequency_penalty: 0
  presence_penalty: 0

# models to use for this request
# if a model is not available or fails
# then the next model is tried until the
# end of the list is reached
models:
  - gpt-4-turbo
  - gpt-4
  - gpt-3.5-turbo
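The fallback behaviour described in the comments on the models list could be implemented with a simple loop over the array. This is only a sketch of the idea, not the project's actual endpoint code; completeWithFallback and callModel are hypothetical names:

```javascript
// Sketch of the fallback described above: try each model in order
// until one succeeds, rethrowing the last error if all of them fail.
// `callModel` is a hypothetical stand-in for the real OpenAI request.
async function completeWithFallback(models, callModel) {
  let lastError;
  for (const model of models) {
    try {
      return await callModel(model);
    } catch (err) {
      lastError = err; // model unavailable or failed — try the next one
    }
  }
  throw lastError;
}
```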

Each of the different API endpoints uses one of these config files, but you are free to add more or edit the existing ones as you like.

If you want to import a new config file then there is a utility function for that:

const chatSettings = getConfigFile('prompts/simple-chat.yml');

The getConfigFile function loads the YAML file and parses it into a plain JavaScript object. For the example above, chatSettings would contain:

{
  "prompts": {
    "system": "You're a chatbot that can reply to messages about a variety of topics.\nThis is how you should respond:\n- Be casual unless otherwise specified\n- Don't apologise\n- Be terse\n- Suggest solutions that I didn't think about (anticipate my needs)\n- Be accurate and thorough\n- No need to disclose you're an AI"
  },
  "options": {
    "max_tokens": 4096,
    "temperature": 0.7,
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "models": ["gpt-4-turbo", "gpt-4", "gpt-3.5-turbo"]
}
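Once parsed, the object maps directly onto a chat completion request. The sketch below shows one way to assemble a payload from it; buildRequest is a hypothetical helper (not part of the project), but the field names match the config shown above:

```javascript
// Sketch: combine the parsed config with a user message to build a
// chat completion payload. `buildRequest` is a hypothetical helper;
// the settings fields follow the simple-chat.yml structure above.
function buildRequest(settings, userMessage) {
  return {
    model: settings.models[0], // first-choice model from the list
    messages: [
      { role: 'system', content: settings.prompts.system },
      { role: 'user', content: userMessage },
    ],
    ...settings.options, // max_tokens, temperature, penalties, etc.
  };
}
```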