# RunForge Documentation
Monitor and optimize your AI usage. Private by default. Built-in dashboards, alerts, and experiments to keep costs down and quality up.
*Status: current · Last reviewed: 2025-08-17*
## Start here
- 👉 End users (non‑technical): see the User Guide
- ⚡ Developers: see the Quickstart
## Track LLM calls in 2 lines
Instrument your app without changing your code paths.
```ts
import OpenAI from 'openai'
import { withRunForge } from '../sdk-ts/index'

const openai = withRunForge(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }))

const res = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
```
```python
import os

from openai import OpenAI
from runforge import with_runforge

client = with_runforge(OpenAI(api_key=os.environ["OPENAI_API_KEY"]))
res = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
The wrapper auto‑extracts tokens, latency, and cost, and sends only usage metadata to /api/ingest (never prompts/outputs).
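To make the privacy guarantee concrete, here is a minimal sketch of how such a wrapper could intercept a `create` call and capture only usage metadata. This is an illustration, not the runforge SDK's actual internals: the class name, the `sink` callable (standing in for the POST to `/api/ingest`), and the exact payload fields are assumptions.

```python
import time


class TrackedCompletions:
    """Hypothetical sketch: wraps a completions object and records only
    usage metadata (model, token counts, latency) -- never the messages
    or the response text."""

    def __init__(self, completions, sink):
        self._completions = completions
        self._sink = sink  # callable standing in for a POST to /api/ingest

    def create(self, **kwargs):
        start = time.monotonic()
        res = self._completions.create(**kwargs)
        latency_ms = (time.monotonic() - start) * 1000
        usage = getattr(res, "usage", None)
        # Forward metadata only; kwargs["messages"] is deliberately ignored.
        self._sink({
            "model": kwargs.get("model"),
            "prompt_tokens": getattr(usage, "prompt_tokens", None),
            "completion_tokens": getattr(usage, "completion_tokens", None),
            "latency_ms": round(latency_ms, 1),
        })
        return res
```

The real SDKs wrap the whole client object (so your call sites are unchanged), but the core idea is the same: measure latency around the call, read the token counts the provider already returns in `response.usage`, and ship nothing else.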
## Who is this for?
- 👥 End users: Track costs, latency, and reliability in the User Guide
- 👨‍💻 Developers: Set up in minutes with the Quickstart and SDKs (TypeScript, Python)
## Highlights
- Real‑time runs via Convex telemetry
- Durable analytics in PostgreSQL (Prisma)
- Experiments, alerts, and usage dashboards
- Privacy by design: no prompts or outputs ingested
See also: Overview, Data model, API reference
## License
- Core: Apache-2.0 (see `/LICENSE`)
- SDKs: MIT (see `/sdk-ts/LICENSE` and `/sdk-py/LICENSE`)