Now in public beta

Observability for
your LLM APIs

Track costs, latency, and quality drift across every OpenAI, Groq, and Anthropic call — integrated in three lines of code.

See how it works →

No credit card required · 10,000 requests free

[Live dashboard preview — llm-drift-monitor-pi.vercel.app: Total Requests 1,284 (+12% this week) · Total Cost $0.0247 (−8% from last week) · Avg Latency 342 ms (stable) · Error Rate 0.02% (all good)]
3 lines to integrate · $0 to get started · <3 ms SDK overhead · 100% open source

Features

Everything you need,
nothing you don't.

Built for developers who ship AI products and need real visibility without extra complexity.

Cost tracking

See the exact cost of every API call, broken down by model, endpoint, and time period. No more guessing your monthly bill.
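Per-call cost tracking of this kind boils down to multiplying token counts from the API response by per-token prices. A minimal sketch (the prices and table below are illustrative placeholders, not current provider rates or llm-monitor internals):

```python
# Sketch: compute the cost of one API call from its token usage.
# Prices are USD per 1M tokens as (input, output) — example values only.
PRICES_PER_1M = {
    "gpt-4o": (2.50, 10.00),
}

def call_cost(model, prompt_tokens, completion_tokens):
    """Dollar cost of a single call, given its token counts."""
    price_in, price_out = PRICES_PER_1M[model]
    return (prompt_tokens * price_in + completion_tokens * price_out) / 1_000_000

# A call with 1,000 prompt tokens and 500 completion tokens:
print(call_cost("gpt-4o", 1000, 500))  # 0.0075
```

Summing these per call, per model, and per endpoint yields the breakdowns shown on the dashboard.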

Latency monitoring

Track response times across all your models. Catch slowdowns instantly and compare performance across providers.

Drift detection

Automated quality scoring with golden prompts. Know when your model outputs start degrading before users notice.
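The golden-prompt idea can be sketched in a few lines: keep a fixed set of prompts with known-good reference answers, score each new batch of model responses against them, and watch for the average score falling. The string-similarity scorer below is a stand-in for illustration; this page doesn't specify how llm-monitor actually scores outputs.

```python
# Sketch: drift scoring against a "golden" prompt set.
from difflib import SequenceMatcher

GOLDEN_SET = [
    {"prompt": "What is 2 + 2?", "reference": "4"},
    {"prompt": "What is the capital of France?", "reference": "Paris"},
]

def drift_score(responses):
    """Average similarity (0-1) between responses and reference answers."""
    scores = [
        SequenceMatcher(None, resp.strip().lower(), case["reference"].lower()).ratio()
        for resp, case in zip(responses, GOLDEN_SET)
    ]
    return sum(scores) / len(scores)

print(drift_score(["4", "Paris"]))   # 1.0 — matches the references
print(drift_score(["four", "Lyon"]))  # lower — outputs have drifted
```

Running the golden set on a schedule and alerting when the score dips is what lets you catch degradation before users do.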

Simple integration

Three lines of code. Works as a drop-in wrapper around your existing OpenAI, Groq, or Anthropic client.

Multi-project

Manage all your AI projects in one place. Separate metrics, API keys, and settings per project.

Smart alerts

Set cost caps, latency thresholds, and error rate alerts. Get notified via webhook when something needs attention.
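On the receiving end, a webhook alert is just an HTTP POST with a JSON body. A minimal standard-library receiver, assuming a hypothetical payload shape (`alert`, `metric`, `value` are illustrative field names, not the documented schema):

```python
# Sketch: a minimal webhook receiver for alert notifications.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_alert(payload):
    """Format an alert payload into a log line."""
    return f"Alert: {payload['alert']} on {payload['metric']} = {payload['value']}"

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        print(handle_alert(payload))
        self.send_response(200)
        self.end_headers()

# To listen locally, e.g. behind a tunnel for testing:
# HTTPServer(("", 8000), AlertHandler).serve_forever()
```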

How it works

Set up in minutes,
not days.

1
Create your project

Sign up free, create a project, and get your API key. The whole process takes under two minutes.

2
Install the package

pip install llm-monitor. Works with Python 3.8+ and any OpenAI-compatible client.

3
Wrap your client

Add three lines to your existing code. Your API calls keep working exactly the same way.

4
Watch your dashboard

Costs, latency, and drift scores appear in real time. Set alerts, compare models, ship confidently.

from llm_monitor import monitor
import openai

monitor.configure(
    api_key="lmd_your_key",
    project_id="your_project_id"
)

client = openai.OpenAI()
tracked = monitor.wrap_openai(client)

# Identical to your existing code
response = tracked.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
$ pip install llm-monitor

Pricing

Simple pricing

Start free. Scale as you grow. No hidden fees.

Starter
$0 · Free forever

Perfect for side projects and experiments.

  • 10,000 requests / month
  • 1 project
  • 7-day data history
  • Basic dashboard
  • Community support
MOST POPULAR
Pro
$29 per month

For teams shipping production AI features.

  • 500K requests / month
  • Up to 10 projects
  • 90-day history
  • Drift detection
  • Webhook alerts
  • Email support
Enterprise
Custom

For large teams with custom requirements.

  • Unlimited requests
  • Unlimited projects
  • 1-year history
  • Self-hosted option
  • SLA guarantee
  • Dedicated support

Start monitoring your
LLMs today.

Free to start. No credit card needed. Takes five minutes to set up.

View on GitHub