AI Providers

Groq

Groq is one of the fastest AI inference providers available. Its custom LPU (Language Processing Unit) hardware streams tokens at roughly 500–1,000 tokens per second, which makes it ideal for quick iteration during development.

Getting your API key

  1. Visit console.groq.com and create a free account.
  2. Go to API Keys in the sidebar and click Create API Key.
  3. Copy the key (it starts with gsk_).
  4. Paste it into Capsul under Settings → API Keys → Groq.
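
Once the key is saved, a quick way to confirm it works is a minimal request against Groq's OpenAI-compatible chat completions endpoint. The sketch below reads the key from a GROQ_API_KEY environment variable (our convention for the example, not something Capsul requires) and skips the network call when no key is set:

```python
import json
import os
import urllib.request

# Read the key from step 3; adjust to however you store secrets.
api_key = os.environ.get("GROQ_API_KEY")

# Groq exposes an OpenAI-compatible endpoint; this payload asks for a
# tiny reply just to confirm the key is accepted.
payload = {
    "model": "meta-llama/llama-3.1-8b-instant",
    "messages": [{"role": "user", "content": "Say 'ok'."}],
    "max_tokens": 5,
}

request = urllib.request.Request(
    "https://api.groq.com/openai/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)

if api_key:  # skip the network call when no key is configured
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
        print(body["choices"][0]["message"]["content"])
```

A 401 response here means the key was mistyped or revoked; a 200 with a short completion means everything is wired up.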

Available models

| Model | Model ID | Speed | Quality |
|---|---|---|---|
| Llama 4 Maverick | meta-llama/llama-4-maverick-17b-128e-instruct | Fast | Excellent |
| Llama 4 Scout | meta-llama/llama-4-scout-17b-16e-instruct | Very fast | Great |
| Llama 3.3 70B | meta-llama/llama-3.3-70b-versatile | Fast | Very good |
| Qwen3 32B | qwen/qwen3-32b | Very fast | Very good |
| Kimi K1.5 | moonshotai/kimi-k1.5-32k | Fast | Good |
| Llama 3.1 8B | meta-llama/llama-3.1-8b-instant | Extremely fast | Good |
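
If you select models programmatically, the IDs above can live in a small lookup keyed by a short alias (the aliases are our own convention, not Groq's):

```python
# Groq model IDs from the table above, keyed by a short alias.
GROQ_MODELS = {
    "maverick": "meta-llama/llama-4-maverick-17b-128e-instruct",
    "scout": "meta-llama/llama-4-scout-17b-16e-instruct",
    "llama-3.3-70b": "meta-llama/llama-3.3-70b-versatile",
    "qwen3-32b": "qwen/qwen3-32b",
    "kimi-k1.5": "moonshotai/kimi-k1.5-32k",
    "llama-3.1-8b": "meta-llama/llama-3.1-8b-instant",
}

def model_id(alias: str) -> str:
    """Resolve a short alias to the full Groq model ID."""
    return GROQ_MODELS[alias]

print(model_id("scout"))  # meta-llama/llama-4-scout-17b-16e-instruct
```

Keeping the IDs in one place makes it easy to swap models when Groq deprecates or renames one.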

Free tier

Groq has a generous free tier with daily and monthly token limits per model. For most Capsul users the free tier is sufficient, since each app generation is a single API call of roughly 1,000–4,000 output tokens. You can check your usage at console.groq.com/usage.
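
To get a feel for how far a daily limit goes, here is a back-of-envelope calculation. The 500,000-token daily limit below is a placeholder, not Groq's actual number; check the console for the real per-model limits on your account:

```python
# Back-of-envelope estimate of generations per day under a token limit.
DAILY_TOKEN_LIMIT = 500_000       # hypothetical free-tier daily limit
TOKENS_LOW, TOKENS_HIGH = 1_000, 4_000  # typical Capsul app generation

# Worst case assumes every generation uses the high end of the range.
worst_case = DAILY_TOKEN_LIMIT // TOKENS_HIGH
best_case = DAILY_TOKEN_LIMIT // TOKENS_LOW
print(f"{worst_case} to {best_case} generations per day")  # 125 to 500
```

Even under the pessimistic assumption, that is far more app generations than a typical development session needs.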