# Model Configuration
Configure the AI model used by this assistant.

## Model Picker
Select from available models categorized by tier:
| Tier | Use Case |
|---|---|
| Premium | Complex reasoning tasks requiring the best accuracy |
| Standard | General-purpose assistance |
| Budget | Simple, high-volume tasks where cost efficiency matters |

## Parameters
| Setting | Description |
|---|---|
| Temperature | Response randomness (0 = Focused, 1 = Balanced, 2 = Creative) |
| Max Tokens | Maximum response length (1–8192) |
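A minimal sketch of how these two settings might be validated before use, using the ranges from the table above. The function and dictionary key names are illustrative, not part of any real API:

```python
def validate_model_config(config: dict) -> dict:
    """Clamp temperature and max_tokens to the documented ranges."""
    temperature = float(config.get("temperature", 1.0))
    max_tokens = int(config.get("max_tokens", 1024))
    return {
        "temperature": min(max(temperature, 0.0), 2.0),  # 0 = Focused ... 2 = Creative
        "max_tokens": min(max(max_tokens, 1), 8192),     # documented 1-8192 limit
    }
```

For example, `validate_model_config({"temperature": 3.5, "max_tokens": 0})` clamps both values back into range.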

## Model Details Panel
When a model is selected, a details panel shows:
| Field | Description |
|---|---|
| Provider | The LLM provider (e.g., Anthropic, OpenAI) |
| Context Window | Maximum input context size |
| Input Cost | Cost per 1M input tokens |
| Output Cost | Cost per 1M output tokens |
| Description | Brief overview of the model's capabilities |
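The panel's fields can be thought of as a simple record. This is an illustrative sketch, not the actual data model; the example provider and cost figures are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ModelDetails:
    provider: str          # e.g. "Anthropic", "OpenAI"
    context_window: int    # maximum input context size, in tokens
    input_cost: float      # USD per 1M input tokens
    output_cost: float     # USD per 1M output tokens
    description: str       # brief capability overview

# Hypothetical example entry
example = ModelDetails(
    provider="Anthropic",
    context_window=200_000,
    input_cost=3.00,
    output_cost=15.00,
    description="General-purpose model for complex tasks",
)
```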

## Cost Estimation
An estimated monthly cost is calculated based on the selected model's pricing. Use this to understand the financial impact of your model choice before committing.
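The estimate presumably follows standard per-token pricing arithmetic. A sketch under assumed usage figures (the function name and the example volumes are illustrative):

```python
def estimate_monthly_cost(
    input_cost_per_m: float,       # USD per 1M input tokens, from the details panel
    output_cost_per_m: float,      # USD per 1M output tokens
    input_tokens_per_month: int,   # assumed usage volume
    output_tokens_per_month: int,
) -> float:
    """Estimated monthly spend for an assumed token volume."""
    return (
        input_tokens_per_month / 1_000_000 * input_cost_per_m
        + output_tokens_per_month / 1_000_000 * output_cost_per_m
    )

# e.g. 10M input + 2M output tokens per month at $3 / $15 per 1M tokens:
# 10 * 3 + 2 * 15 = $60.00
```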

## Model Comparison
A side-by-side comparison table shows all available models with their key characteristics, making it easy to evaluate trade-offs between cost, speed, and capability.
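One way to reason about such a comparison programmatically is to rank models by a blended cost. The model names, prices, and the 3:1 input-to-output ratio below are all assumptions for the sketch:

```python
# Hypothetical comparison data mirroring the table's cost columns.
models = [
    {"name": "premium-model", "tier": "Premium", "input_cost": 15.0, "output_cost": 75.0},
    {"name": "standard-model", "tier": "Standard", "input_cost": 3.0, "output_cost": 15.0},
    {"name": "budget-model", "tier": "Budget", "input_cost": 0.25, "output_cost": 1.25},
]

def blended_cost(m: dict) -> float:
    """Blended USD per 1M tokens, assuming a 3:1 input:output ratio."""
    return 0.75 * m["input_cost"] + 0.25 * m["output_cost"]

cheapest_first = sorted(models, key=blended_cost)
```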

## Best Practices
- Use Budget models for simple, high-volume tasks
- Use Standard models for general-purpose assistance
- Use Premium models for complex reasoning tasks
- Check the cost estimation to understand pricing impact
- Lower temperature for factual/deterministic tasks, higher for creative tasks
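The practices above could be captured as presets mapping task types to settings. The task names and exact temperature values here are illustrative choices, not prescribed by the product:

```python
# Tier names come from the tiers table; temperatures follow the guidance
# (lower for factual/deterministic work, higher for creative work).
PRESETS = {
    "extraction": {"tier": "Budget", "temperature": 0.0},    # simple, high-volume, deterministic
    "assistant": {"tier": "Standard", "temperature": 1.0},   # general-purpose
    "analysis": {"tier": "Premium", "temperature": 0.2},     # complex reasoning, factual
    "brainstorm": {"tier": "Standard", "temperature": 1.5},  # creative
}

def settings_for(task: str) -> dict:
    """Return preset settings, falling back to the general-purpose default."""
    return PRESETS.get(task, PRESETS["assistant"])
```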