Smart LLM Routing: Using OpenRouter with Mithril for Cost-Optimized AI
How to build agents that automatically pick the cheapest or fastest LLM for each task, paid per-call through x402.
Not every agent task needs the most expensive LLM. A simple classification might use Haiku, while a complex analysis needs Opus. OpenRouter + Mithril lets your agent make this choice dynamically — and pay per-call for only what it uses.
Why LLM Routing Matters
LLM costs vary by roughly 100x between the cheapest and most capable models. An agent that uses Opus for everything can cost 50x more than one that routes each task intelligently.
OpenRouter: The Universal LLM API
OpenRouter provides a single API endpoint that routes to any LLM. Instead of integrating with Anthropic, OpenAI, Google, and Meta separately, you call OpenRouter and specify the model.
With x402, your agent doesn't even need an OpenRouter API key. It just pays per-call.
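The examples below assume an already-initialized Mithril client `m`. The setup sketch here is illustrative only — the import path and option names are placeholders, not Mithril's documented API, so check the Mithril docs for the real initialization:

```typescript
// Illustrative setup only: the package name and option names below are
// placeholders, not Mithril's actual API. The later examples rely solely on
// m.pay({ url, method, body }) making the request and settling the x402 payment.
import { Mithril } from "mithril-sdk" // placeholder package name
const m = new Mithril({
  walletKey: process.env.AGENT_WALLET_KEY, // placeholder: the wallet the agent pays from
})
```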
Routing Strategy
```typescript
function selectModel(task: string): string {
  // Simple classification/extraction
  if (task === "classify" || task === "extract")
    return "anthropic/claude-haiku-4-5"
  // Standard analysis/writing
  if (task === "analyze" || task === "write")
    return "anthropic/claude-sonnet-4-6"
  // Complex reasoning/planning
  if (task === "plan" || task === "reason")
    return "anthropic/claude-opus-4-6"
  // Default to cost-effective
  return "anthropic/claude-sonnet-4-6"
}
```

Implementation
```typescript
async function callLLM(task: string, prompt: string, model?: string) {
  // Route by task type unless the caller pins a specific model
  const selectedModel = model ?? selectModel(task)
  // m.pay() makes the request and settles the x402 payment per call
  const result = await m.pay({
    url: "https://openrouter.ai/api/v1/chat/completions",
    method: "POST",
    body: JSON.stringify({
      model: selectedModel,
      messages: [{ role: "user", content: prompt }],
    }),
  })
  return result.data.choices[0].message.content
}
```
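With the helper in place, routing is just a matter of naming the task; the prompts below are made up for illustration:

```typescript
// Haiku handles the cheap classification, Sonnet the heavier analysis,
// per the selectModel mapping above.
const label = await callLLM("classify", "Label this ticket as bug, feature, or question: ...")
const summary = await callLLM("analyze", "Summarize the key risks in this contract: ...")
```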
Cost Impact
A research agent that makes 100 LLM calls/day:
| Strategy | Daily Cost |
|----------|------------|
| Always Opus | $5.00 |
| Smart routing | $0.40 |
Smart routing saves 60-92% on LLM costs by matching model capability to task complexity.
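As a rough sanity check on those numbers — the per-call prices and task mix below are assumptions for illustration, not measured figures:

```typescript
// Assumed per-call costs (USD) and daily task mix; illustrative only.
const price = { haiku: 0.001, sonnet: 0.005, opus: 0.05 }
const mix = { haiku: 75, sonnet: 20, opus: 5 } // 100 calls/day total

const alwaysOpus = 100 * price.opus // $5.00
const routed =
  mix.haiku * price.haiku + mix.sonnet * price.sonnet + mix.opus * price.opus
// 0.075 + 0.10 + 0.25 ≈ $0.43/day, in line with the ~$0.40 in the table above
```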
Advanced: Dynamic Model Selection
Let the agent itself decide which model to use:
```typescript
async function smartCall(prompt: string) {
  // Use a cheap model to classify the task first
  const classification = await callLLM(
    "classify",
    `Classify this task as simple/medium/complex: ${prompt}`
  )
  // Pick the model that matches the classified complexity
  const model = classification.includes("complex")
    ? "anthropic/claude-opus-4-6"
    : classification.includes("medium")
      ? "anthropic/claude-sonnet-4-6"
      : "anthropic/claude-haiku-4-5"
  // Pass the chosen model through explicitly instead of routing by task type
  return callLLM("analyze", prompt, model)
}
```

The classification call costs ~$0.001 and can save $0.05+ on every subsequent call.
Monitoring
Track model usage in the Mithril dashboard: