Model Configuration

Configure the AI model used by this assistant.

LLM Settings

Model Picker

Select from available models categorized by tier:

| Tier | Use Case |
| --- | --- |
| Premium | Complex reasoning tasks requiring the best accuracy |
| Standard | General-purpose assistance |
| Budget | Simple, high-volume tasks where cost efficiency matters |

Parameters

| Setting | Description |
| --- | --- |
| Temperature | Response randomness (0 = Focused, 1 = Balanced, 2 = Creative) |
| Max Tokens | Maximum response length (1–8192 tokens) |
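
A minimal sketch of checking these two parameters against the documented ranges. The function name is illustrative, not the product's API; only the ranges (0–2 for temperature, 1–8192 for max tokens) come from this page:

```python
def validate_parameters(temperature: float, max_tokens: int) -> dict:
    """Return a validated settings dict, raising on out-of-range values."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if not 1 <= max_tokens <= 8192:
        raise ValueError("max_tokens must be between 1 and 8192")
    return {"temperature": temperature, "max_tokens": max_tokens}

# 0 = focused/deterministic, 2 = most creative
settings = validate_parameters(temperature=0.2, max_tokens=1024)
```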

Model Details Panel

When a model is selected, a details panel shows:

| Field | Description |
| --- | --- |
| Provider | The LLM provider (e.g., Anthropic, OpenAI) |
| Context Window | Maximum input context size |
| Input Cost | Cost per 1M input tokens |
| Output Cost | Cost per 1M output tokens |
| Description | Brief overview of the model's capabilities |
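
The panel's fields could be modeled as a simple record. This is an illustrative sketch, assuming USD pricing; the class name and the example figures are made up, not real model pricing:

```python
from dataclasses import dataclass

@dataclass
class ModelDetails:
    provider: str        # e.g. "Anthropic" or "OpenAI"
    context_window: int  # maximum input context size, in tokens
    input_cost: float    # cost per 1M input tokens (USD)
    output_cost: float   # cost per 1M output tokens (USD)
    description: str     # brief overview of the model's capabilities

# Example entry with made-up numbers, for illustration only:
example = ModelDetails(
    provider="Anthropic",
    context_window=200_000,
    input_cost=3.00,
    output_cost=15.00,
    description="General-purpose model for standard tasks",
)
```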

Cost Estimation

An estimated monthly cost is calculated based on the selected model's pricing. Use this to understand the financial impact of your model choice before committing.
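
As a back-of-the-envelope sketch of how such an estimate can be computed from per-1M-token prices. The formula and the usage figures are assumptions for illustration; the product's actual calculation may differ:

```python
def estimate_monthly_cost(input_tokens: int, output_tokens: int,
                          input_cost_per_m: float, output_cost_per_m: float) -> float:
    """Estimated monthly cost in USD, given prices per 1M tokens."""
    return (input_tokens / 1_000_000) * input_cost_per_m \
         + (output_tokens / 1_000_000) * output_cost_per_m

# Hypothetical usage: 10M input + 2M output tokens per month
# at $3 / $15 per 1M tokens → 10*3 + 2*15 = $60
monthly = estimate_monthly_cost(10_000_000, 2_000_000, 3.00, 15.00)
```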

Model Comparison

A side-by-side comparison table shows all available models with their key characteristics, making it easy to evaluate trade-offs between cost, speed, and capability.

Best Practices

  • Use Budget models for simple, high-volume tasks
  • Use Standard models for general-purpose assistance
  • Use Premium models for complex reasoning tasks
  • Check the cost estimation to understand pricing impact
  • Lower temperature for factual/deterministic tasks, higher for creative tasks
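
The guidance above can be encoded as a small lookup. This is purely illustrative: the task category names and temperature values are assumptions, not product behavior:

```python
# Map task category to the recommended tier from the best-practices list.
TIER_FOR_TASK = {
    "simple-high-volume": "Budget",
    "general-purpose": "Standard",
    "complex-reasoning": "Premium",
}

def suggest_settings(task: str, deterministic: bool = False) -> dict:
    """Suggest a tier and a starting temperature for a task category."""
    return {
        "tier": TIER_FOR_TASK[task],
        # Lower temperature for factual/deterministic work, higher for creative.
        "temperature": 0.0 if deterministic else 1.0,
    }
```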