BYOK (Bring Your Own Key)

Bring Your Own Key lets you connect any LLM provider to Creor -- whether it's a supported provider with your own API key, a self-hosted model, or any service that exposes an OpenAI-compatible API.

Overview

Creor supports two approaches for using your own keys:

  • Direct API keys -- use your own credentials for any of the 19+ built-in providers (Anthropic, OpenAI, Google, etc.).
  • Custom endpoints -- connect any service that implements the OpenAI-compatible chat/completions API format.

Both methods give you full control over billing, rate limits, and data routing. Your API keys are stored securely in the OS keychain and never transmitted to Creor servers.

Adding Provider API Keys

For any supported provider, you can use your own API key instead of the Creor Gateway.

Via Settings UI

  • Open Creor and go to Settings.
  • Navigate to the Providers section.
  • Find the provider you want to configure.
  • Enter your API key in the key field.
  • The key is encrypted and stored in the OS keychain immediately.

Via Environment Variables

Each provider has a standard environment variable for its API key:

Provider             Environment Variable
Anthropic            ANTHROPIC_API_KEY
OpenAI               OPENAI_API_KEY
Google AI Studio     GOOGLE_GENERATIVE_AI_API_KEY
Google Vertex        GOOGLE_CLOUD_PROJECT (+ gcloud ADC)
AWS Bedrock          AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY
Azure OpenAI         AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT
OpenRouter           OPENROUTER_API_KEY
Groq                 GROQ_API_KEY
Together AI          TOGETHER_AI_API_KEY
DeepInfra            DEEPINFRA_API_KEY
Cerebras             CEREBRAS_API_KEY
Mistral              MISTRAL_API_KEY
Cohere               COHERE_API_KEY
Perplexity           PERPLEXITY_API_KEY
xAI                  XAI_API_KEY
Vercel               VERCEL_API_KEY
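
As a sketch, a key can be exported in the shell that launches Creor so it is inherited at startup. The key values below are placeholders, not real credentials:

```shell
# Placeholder keys -- replace with your real credentials.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"

# Launch Creor from this same shell so it inherits the variables:
# creor

# Confirm the variable is visible without printing its value:
echo "Anthropic key set: ${ANTHROPIC_API_KEY:+yes}"
```

Adding the export lines to your shell profile (e.g. ~/.zshrc or ~/.bashrc) makes them persistent across sessions, at the cost of keeping the key on disk in plaintext.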

Via creor.json

Warning

Do not store API keys in creor.json. This file is typically checked into version control. Use the Settings UI (OS keychain) or environment variables for credentials.

Use creor.json only for non-secret provider configuration like base URLs, timeouts, and model filtering:

{
  "provider": {
    "anthropic": {
      "whitelist": ["claude-sonnet-4-20250514"],
      "timeout": 600000
    }
  }
}

Custom OpenAI-Compatible Endpoints

Any service that implements the OpenAI chat/completions API format can be used as a provider in Creor. This includes LLM proxies, corporate API gateways, and OpenAI-compatible inference servers.
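
For orientation, such a service must accept the standard chat/completions request payload. The sketch below builds a minimal body and validates it locally; the proxy URL, model ID, and $MY_API_KEY variable are placeholders, not values Creor defines:

```shell
# Minimal OpenAI-compatible chat/completions body (model ID is a placeholder).
BODY='{"model": "my-model-v1", "messages": [{"role": "user", "content": "Hello"}]}'

# Sanity-check the JSON locally before pointing anything at it:
echo "$BODY" | python3 -m json.tool

# Sending it to the endpoint would look like (URL and key are assumptions):
# curl -s https://my-llm-proxy.company.com/v1/chat/completions \
#   -H "Authorization: Bearer $MY_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

If this request shape works against your gateway or proxy, Creor can drive it through the configuration described below.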

Defining a Custom Provider

Add a custom provider in creor.json using the @ai-sdk/openai-compatible npm adapter:

{
  "provider": {
    "my-custom-llm": {
      "name": "My Custom LLM",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://my-llm-proxy.company.com/v1"
      },
      "models": {
        "my-model-v1": {
          "name": "My Model v1",
          "capabilities": {
            "toolcall": true,
            "reasoning": false,
            "temperature": true
          },
          "limit": {
            "context": 128000,
            "output": 8192
          }
        }
      }
    }
  }
}

Then reference it like any other model:

{
  "model": "my-custom-llm/my-model-v1"
}

Authentication for Custom Endpoints

Set the API key for your custom provider via the Settings UI (it will be stored under the provider ID you chose) or via a provider-specific environment variable. You can also add headers directly in the provider options for token-based auth:

{
  "provider": {
    "my-custom-llm": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://my-llm-proxy.company.com/v1",
        "headers": {
          "X-Custom-Auth": "Bearer token-here"
        }
      }
    }
  }
}

Self-Hosted Models

If you run your own LLM inference server (vLLM, Ollama, llama.cpp, TGI, etc.), you can connect it to Creor as long as it exposes an OpenAI-compatible API.

Example: Ollama

{
  "provider": {
    "ollama": {
      "name": "Ollama (Local)",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama3.2:latest": {
          "name": "Llama 3.2 (Local)",
          "limit": {
            "context": 128000,
            "output": 4096
          }
        }
      }
    }
  }
}

Example: vLLM

{
  "provider": {
    "vllm": {
      "name": "vLLM Server",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://gpu-server.local:8000/v1"
      },
      "models": {
        "meta-llama/Llama-3.3-70B-Instruct": {
          "name": "Llama 3.3 70B (Self-Hosted)",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        }
      }
    }
  }
}

Tip

When using self-hosted models, you typically do not need an API key. If your server requires one, set it through the Settings UI under the custom provider name.
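
Before wiring a self-hosted server into creor.json, a quick smoke test from the shell confirms the OpenAI-compatible surface is reachable. This sketch assumes Ollama's default port; adjust the base URL for vLLM or other servers:

```shell
# Smoke-test a local OpenAI-compatible server (Ollama's default port assumed).
BASE_URL="http://localhost:11434/v1"

if curl -fs "$BASE_URL/models" > /dev/null 2>&1; then
  # The server is up; list the model IDs it exposes.
  curl -s "$BASE_URL/models"
else
  echo "No OpenAI-compatible server reachable at $BASE_URL"
fi
```

The model IDs returned by /models are what belongs under the "models" key in your provider configuration.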

Credential Storage

Creor stores API keys securely using the operating system's native credential manager:

OS        Storage Backend
macOS     Keychain (via SecretStorage API)
Linux     libsecret / GNOME Keyring / KWallet
Windows   Windows Credential Manager

Keys entered through the Settings UI are encrypted at rest and never written to disk in plaintext. They are not included in creor.json or any other configuration file.

Provider Configuration Reference

The full provider configuration schema in creor.json:

{
  "provider": {
    "<provider-id>": {
      // Display name in the UI
      "name": "My Provider",

      // AI SDK adapter package
      "npm": "@ai-sdk/openai-compatible",

      // API base URL
      "api": "https://api.example.com/v1",

      // Options passed to the SDK adapter
      "options": {
        "baseURL": "https://api.example.com/v1",
        "headers": { "X-Custom": "value" }
      },

      // Only show these models (model IDs)
      "whitelist": ["model-a", "model-b"],

      // Hide these models from the selector
      "blacklist": ["model-c"],

      // Define or override model metadata
      "models": {
        "model-a": {
          "name": "Model A",
          "capabilities": { "toolcall": true },
          "limit": { "context": 128000, "output": 8192 }
        }
      },

      // Request timeout in ms (default: 300000)
      // Set to false to disable timeout
      "timeout": 600000
    }
  }
}

Troubleshooting

Provider not showing up

  • Make sure the API key is set (via Settings UI or environment variable).
  • Check that the provider is not in your disabled_providers list.
  • If using enabled_providers, make sure your provider is included.
  • Restart Creor after changing environment variables.
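
A quick way to check the environment-variable path from the shell that launches Creor (variable name taken from the table above; the disabled_providers/enabled_providers checks are done in your settings instead):

```shell
# Report whether the key is visible in this shell, without printing its value.
VAR_NAME="ANTHROPIC_API_KEY"

if [ -n "$(printenv "$VAR_NAME")" ]; then
  echo "$VAR_NAME is set"
else
  echo "$VAR_NAME is not set in this shell"
fi
```

If the variable is set here but the provider still does not appear, the likely culprit is that Creor was launched from a different environment (e.g. a desktop launcher that does not read your shell profile).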

Authentication errors

  • Verify your API key is valid and has not expired.
  • Check that your account has billing/credits set up with the provider.
  • For custom endpoints, verify the base URL is correct and the server is running.
  • Check Creor logs for detailed error messages.

Custom endpoint returning errors

  • Ensure the endpoint implements the OpenAI chat/completions format.
  • Verify the model ID matches what your server expects.
  • Check that the context and output limits in your config match the server's capabilities.
  • Try increasing the timeout if the server is slow to respond.