Open Source & Other Providers

Creor supports a wide range of inference providers that host open-source models, specialized models, and fast-inference hardware. Each provider has different strengths, from raw speed to specialized capabilities.

Provider Comparison

| Provider | Key Models | Auth Env Var | Best For |
| --- | --- | --- | --- |
| Groq | Llama 3.x, Mixtral, Gemma | GROQ_API_KEY | Ultra-fast inference (LPU hardware) |
| Together AI | Llama 3.x, Qwen 2.5, DeepSeek, Mixtral | TOGETHER_AI_API_KEY | Wide open-source model selection |
| DeepInfra | Llama 3.x, Mistral, Qwen | DEEPINFRA_API_KEY | Cost-effective open models |
| Cerebras | Llama 3.x | CEREBRAS_API_KEY | Wafer-scale inference speed |
| Mistral | Mistral Large, Codestral, Ministral | MISTRAL_API_KEY | European AI, code generation |
| Cohere | Command R, Command R+ | COHERE_API_KEY | RAG and enterprise search |
| Perplexity | Sonar Pro, Sonar | PERPLEXITY_API_KEY | Web-grounded, up-to-date answers |
| xAI | Grok 3, Grok 3 Mini | XAI_API_KEY | Reasoning, real-time knowledge |
| Vercel | v0 models | VERCEL_API_KEY | Vercel platform integration |

Tip

All of these providers follow the same setup pattern: get an API key, set it as an environment variable or in the Settings UI, and reference models with the provider/model-id format.
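As a rough illustration of that convention, the sketch below splits a model reference into its provider and model-id parts and looks up the matching environment variable from the table above. The function name and behavior are hypothetical, not Creor's actual API; the one real subtlety it encodes is that the split must happen on the first slash only, because some model IDs (such as Together AI's) contain slashes themselves.

```python
import os

# Env var names come from the provider table above.
ENV_VARS = {
    "groq": "GROQ_API_KEY",
    "togetherai": "TOGETHER_AI_API_KEY",
    "deepinfra": "DEEPINFRA_API_KEY",
    "cerebras": "CEREBRAS_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "cohere": "COHERE_API_KEY",
    "perplexity": "PERPLEXITY_API_KEY",
    "xai": "XAI_API_KEY",
    "vercel": "VERCEL_API_KEY",
}

def resolve_model(model_ref: str):
    """Split a 'provider/model-id' reference and look up its API key.

    Split on the FIRST slash only: some model IDs (e.g. Together AI's
    'meta-llama/Llama-3.3-70B-Instruct-Turbo') contain slashes of their own.
    """
    provider, sep, model_id = model_ref.partition("/")
    if not sep or provider not in ENV_VARS:
        raise ValueError(f"unknown provider in model reference: {model_ref!r}")
    return provider, model_id, os.environ.get(ENV_VARS[provider])

print(resolve_model("togetherai/meta-llama/Llama-3.3-70B-Instruct-Turbo")[:2])
# → ('togetherai', 'meta-llama/Llama-3.3-70B-Instruct-Turbo')
```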

Groq

Groq runs inference on custom LPU (Language Processing Unit) hardware, delivering some of the fastest token generation speeds available. Ideal for rapid iteration and tasks where latency matters more than model size.

Setup

  • Sign up at console.groq.com.
  • Create an API key from the dashboard.
  • Set the environment variable or add the key in Creor Settings.
```bash
export GROQ_API_KEY="gsk_your-key-here"
```

Configuration

```json
{
  "model": "groq/llama-3.3-70b-versatile"
}
```
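Groq also exposes an OpenAI-compatible HTTP endpoint, so it is easy to sanity-check a request outside Creor. The sketch below only assembles the request (headers plus JSON body) without sending it; the helper name is illustrative, and it assumes GROQ_API_KEY is set in the environment.

```python
import json
import os

# Groq's OpenAI-compatible chat-completions endpoint.
GROQ_CHAT_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_request(prompt: str, model: str = "llama-3.3-70b-versatile"):
    """Assemble (but don't send) a raw chat-completion request."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return GROQ_CHAT_URL, headers, body

url, headers, body = build_groq_request("Explain LPUs in one sentence.")
print(json.loads(body)["model"])  # → llama-3.3-70b-versatile
```

From here you can hand the URL, headers, and body to any HTTP client to make the actual call.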

Popular Models

| Model | Model ID | Parameters |
| --- | --- | --- |
| Llama 3.3 70B | llama-3.3-70b-versatile | 70B |
| Llama 3.1 8B | llama-3.1-8b-instant | 8B |
| Mixtral 8x7B | mixtral-8x7b-32768 | 46.7B (MoE) |
| Gemma 2 9B | gemma2-9b-it | 9B |

Together AI

Together AI hosts one of the widest selections of open-source models, from Llama and Qwen to DeepSeek and Mixtral. It offers both serverless and dedicated inference options.

Setup

  • Sign up at api.together.xyz.
  • Create an API key from your account dashboard.
  • Set the environment variable or add the key in Creor Settings.
```bash
export TOGETHER_AI_API_KEY="your-key-here"
```

Configuration

```json
{
  "model": "togetherai/meta-llama/Llama-3.3-70B-Instruct-Turbo"
}
```

Popular Models

| Model | Model ID | Parameters |
| --- | --- | --- |
| Llama 3.3 70B Turbo | meta-llama/Llama-3.3-70B-Instruct-Turbo | 70B |
| Qwen 2.5 72B | Qwen/Qwen2.5-72B-Instruct-Turbo | 72B |
| DeepSeek V3 | deepseek-ai/DeepSeek-V3 | 671B (MoE) |
| Mixtral 8x22B | mistralai/Mixtral-8x22B-Instruct-v0.1 | 141B (MoE) |

DeepInfra

DeepInfra provides cost-effective inference for popular open-source models with competitive pricing and low latency.

Setup

  • Sign up at deepinfra.com.
  • Get your API key from the dashboard.
  • Set the environment variable or add the key in Creor Settings.
```bash
export DEEPINFRA_API_KEY="your-key-here"
```

Configuration

```json
{
  "model": "deepinfra/meta-llama/Llama-3.3-70B-Instruct"
}
```

Cerebras

Cerebras uses wafer-scale engine (WSE) chips to deliver extremely fast inference. Currently supports Llama models with industry-leading tokens-per-second throughput.

Setup

  • Sign up at cloud.cerebras.ai.
  • Create an API key from the console.
  • Set the environment variable or add the key in Creor Settings.
```bash
export CEREBRAS_API_KEY="your-key-here"
```

Configuration

```json
{
  "model": "cerebras/llama-3.3-70b"
}
```
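When comparing fast-inference providers like Cerebras and Groq, the headline metric is tokens per second. A quick back-of-the-envelope check from your own requests, with made-up numbers purely for illustration:

```python
# Compute throughput from a completion's token count and wall-clock time.
def tokens_per_second(completion_tokens: int, elapsed_s: float) -> float:
    return completion_tokens / elapsed_s

# e.g. 1800 completion tokens generated in 1.2 seconds:
print(round(tokens_per_second(1800, 1.2)))  # → 1500
```

Measure elapsed time around the full request and use the completion token count the API reports, so numbers are comparable across providers.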

Mistral

Mistral is a European AI company offering models optimized for code generation, multilingual tasks, and efficient inference. Their Codestral model is purpose-built for coding.

Setup

  • Sign up at console.mistral.ai.
  • Create an API key from the dashboard.
  • Set the environment variable or add the key in Creor Settings.
```bash
export MISTRAL_API_KEY="your-key-here"
```

Configuration

```json
{
  "model": "mistral/mistral-large-latest"
}
```

Popular Models

| Model | Model ID | Best For |
| --- | --- | --- |
| Mistral Large | mistral-large-latest | Complex reasoning, multilingual |
| Codestral | codestral-latest | Code generation and completion |
| Ministral 8B | ministral-8b-latest | Fast, lightweight tasks |
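Since each Mistral model targets a different workload, a simple router can pick the model reference per task. The task categories and function below are assumptions for the sketch, not part of Creor; the model IDs come from the table above.

```python
# Illustrative routing helper: map a task category to a Creor model reference.
def pick_mistral_model(task: str) -> str:
    routes = {
        "code": "mistral/codestral-latest",      # code generation and completion
        "light": "mistral/ministral-8b-latest",  # fast, lightweight tasks
    }
    # Default to Mistral Large for complex reasoning and anything uncategorized.
    return routes.get(task, "mistral/mistral-large-latest")

print(pick_mistral_model("code"))  # → mistral/codestral-latest
```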

Cohere

Cohere specializes in enterprise AI with models optimized for retrieval-augmented generation (RAG) and search. Their Command R models excel at grounded, factual responses.

Setup

  • Sign up at dashboard.cohere.com.
  • Create an API key from the API keys page.
  • Set the environment variable or add the key in Creor Settings.
```bash
export COHERE_API_KEY="your-key-here"
```

Configuration

```json
{
  "model": "cohere/command-r-plus"
}
```
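Command R models are tuned for RAG: you can pass source documents alongside the query and get an answer grounded in them. The payload below follows the broad shape of Cohere's chat API, but treat the field names as assumptions and check Cohere's API reference before relying on them; the request is only assembled, not sent.

```python
import json

# Illustrative sketch of a grounded (RAG) query payload for Command R+.
def build_grounded_query(message: str, docs: list) -> str:
    return json.dumps({
        "model": "command-r-plus",
        "message": message,
        "documents": docs,  # each doc: {"title": ..., "snippet": ...}
    })

payload = build_grounded_query(
    "What does the release note say about breaking changes?",
    [{"title": "v2.0 release notes", "snippet": "Breaking: config key renamed."}],
)
print(json.loads(payload)["model"])  # → command-r-plus
```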

Perplexity

Perplexity's Sonar models are grounded in real-time web search results, making them excellent for questions that require up-to-date information about libraries, APIs, or recent changes.

Setup

  • Sign up at perplexity.ai and access the API section.
  • Create an API key.
  • Set the environment variable or add the key in Creor Settings.
```bash
export PERPLEXITY_API_KEY="pplx-your-key-here"
```

Configuration

```json
{
  "model": "perplexity/sonar-pro"
}
```
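Because Sonar answers are grounded in web search, responses also carry the sources behind the answer. The response shape below is a simplified assumption for illustration; check Perplexity's API reference for the exact schema.

```python
# Pull the list of source URLs out of a (simplified) Sonar response dict.
def extract_citations(response: dict) -> list:
    return list(response.get("citations", []))

sample = {
    "choices": [{"message": {"content": "React 19 was released in ..."}}],
    "citations": ["https://react.dev/blog", "https://github.com/facebook/react"],
}
print(extract_citations(sample))
```

Surfacing these URLs next to the answer makes it easy to verify claims about fast-moving libraries and APIs.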

xAI (Grok)

xAI's Grok models combine strong reasoning with access to real-time knowledge. Grok 3 is competitive with frontier models on coding and reasoning benchmarks.

Setup

  • Sign up at console.x.ai.
  • Create an API key from the dashboard.
  • Set the environment variable or add the key in Creor Settings.
```bash
export XAI_API_KEY="xai-your-key-here"
```

Configuration

```json
{
  "model": "xai/grok-3"
}
```

Available Models

| Model | Model ID | Best For |
| --- | --- | --- |
| Grok 3 | grok-3 | Complex reasoning, full capability |
| Grok 3 Mini | grok-3-mini | Fast reasoning, lower cost |

Vercel

Vercel provides model access through the Vercel AI platform. This is useful for teams already using the Vercel ecosystem.

Setup

  • Go to vercel.com and sign in.
  • Navigate to your account settings and find the API tokens section.
  • Create a token with the appropriate permissions.
```bash
export VERCEL_API_KEY="your-vercel-token"
```

Configuration

```json
{
  "model": "vercel/v0-1.0-md"
}
```