# Groq
Groq is one of the fastest AI inference providers available. Its custom LPU (Language Processing Unit) hardware streams tokens at roughly 500–1,000 tokens per second, which makes it ideal for quick iteration during development.
## Getting your API key
1. Visit console.groq.com and create a free account.
2. Go to API Keys in the sidebar and click Create API Key.
3. Copy the key (it starts with `gsk_`).
4. Paste it into Capsul under Settings → API Keys → Groq.
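Once the key is set up, a single generation is one HTTP call. A minimal sketch against Groq's OpenAI-compatible chat-completions endpoint, using only the standard library — the `GROQ_API_KEY` environment variable name and the `build_request` helper are illustrative assumptions, not part of Capsul:

```python
import json
import os
import urllib.request

# Groq exposes an OpenAI-compatible chat API at this path.
API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt: str,
                  model: str = "meta-llama/llama-3.1-8b-instant"):
    """Build (but do not send) one chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",  # gsk_... key
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers
    )

# To actually send it:
# resp = urllib.request.urlopen(build_request("Say hello"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```

In practice you would use an OpenAI-style client library instead of raw `urllib`; the request shape is the same either way.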
## Available models
| Model | Speed | Quality |
|---|---|---|
| Llama 4 Maverick (`meta-llama/llama-4-maverick-17b-128e-instruct`) | Fast | Excellent |
| Llama 4 Scout (`meta-llama/llama-4-scout-17b-16e-instruct`) | Very fast | Great |
| Llama 3.3 70B (`meta-llama/llama-3.3-70b-versatile`) | Fast | Very good |
| Qwen3 32B (`qwen/qwen3-32b`) | Very fast | Very good |
| Kimi K1.5 (`moonshotai/kimi-k1.5-32k`) | Fast | Good |
| Llama 3.1 8B (`meta-llama/llama-3.1-8b-instant`) | Extremely fast | Good |
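The streaming speeds mentioned above arrive as OpenAI-style server-sent events: each chunk is a `data: {json}` line and the stream ends with `data: [DONE]`. A small parser sketch under that assumed wire format (the sample chunks are synthetic):

```python
import json

def iter_stream_tokens(lines):
    """Yield content tokens from OpenAI-style SSE chunk lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        body = line[len("data:"):].strip()
        if body == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(body)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Two synthetic chunks followed by the end sentinel:
chunks = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    'data: [DONE]',
]
print("".join(iter_stream_tokens(chunks)))  # → Hello
```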
## Free tier
Groq has a generous free tier with daily and monthly token limits per model. For most Capsul users the free tier is sufficient — each app generation is a single API call of roughly 1,000–4,000 output tokens. You can check your usage at console.groq.com/usage.
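To see why the free tier usually suffices, the arithmetic is simple. The daily budget below is a purely hypothetical figure for illustration — real limits vary by model and are shown on your usage page:

```python
# Hypothetical daily free-tier budget, for illustration only.
DAILY_TOKEN_BUDGET = 100_000
# Upper end of the 1,000–4,000 output-token range per app generation.
TOKENS_PER_GENERATION = 4_000

generations_per_day = DAILY_TOKEN_BUDGET // TOKENS_PER_GENERATION
print(generations_per_day)  # → 25
```

Even at the worst-case 4,000 tokens per generation, a budget of that size covers dozens of app generations a day.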