Quickstart
1) Create a Project and an Ingest Key
- Create a project in the UI (Settings → Projects) and note its projectId.
- For the OSS template, set a single server ingest key: INGEST_API_KEY.
- TODO: Wire per-project API keys in /app/api/ingest/route.ts using convex/apiKeys.ts (the sketch below shows the current single-key behavior).
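Until that TODO lands, the route behaves as described in the notes at the end of this page: one shared key plus a required projectId. A minimal sketch of such a handler, assuming standard Next.js App Router conventions (not the repo's actual implementation):

```ts
// app/api/ingest/route.ts: minimal sketch of the current single-key check.
import { NextRequest, NextResponse } from 'next/server'

export async function POST(req: NextRequest) {
  // OSS template: one shared server key, sent as "Authorization: Bearer <INGEST_API_KEY>".
  const key = req.headers.get('authorization')?.replace(/^Bearer\s+/i, '')
  if (!key || key !== process.env.INGEST_API_KEY) {
    return NextResponse.json({ error: 'unauthorized' }, { status: 401 })
  }

  const body = await req.json()
  // projectId is required; per-project key validation via convex/apiKeys.ts is the TODO.
  if (!body.projectId) {
    return NextResponse.json({ error: 'projectId required' }, { status: 400 })
  }

  // ...persist the run event...
  return NextResponse.json({ ok: true })
}
```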
2) Install dependencies
pnpm i
pnpm dev
# Optional: Postgres
docker compose up -d postgres
pnpm prisma migrate dev --name init && pnpm prisma generate
# Convex (realtime)
npx convex dev
3) Set environment
- Server: INGEST_API_KEY, NEXT_PUBLIC_CONVEX_URL, optional RUNFORGE_SYNC_URL / RUNFORGE_SYNC_SIGN.
- SDK: RUNFORGE_API_KEY (same as INGEST_API_KEY for local dev), RUNFORGE_ENDPOINT (defaults to http://localhost:3000/api/ingest), RUNFORGE_PROJECT_ID (see the sketch below).
- Provider keys for examples: OPENAI_API_KEY, OPENROUTER_API_KEY.
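In code, the SDK-side variables resolve roughly like this (a minimal sketch; the actual defaulting lives in the TS SDK and mirrors Option B below):

```ts
// Sketch: how the SDK env vars above fit together for local dev.
const config = {
  apiKey: process.env.RUNFORGE_API_KEY!, // same value as INGEST_API_KEY locally
  endpoint: process.env.RUNFORGE_ENDPOINT ?? 'http://localhost:3000/api/ingest',
  projectId: process.env.RUNFORGE_PROJECT_ID,
}
```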
4) Track a run — TypeScript (auto-extraction)
Option A — 2‑line wrapper
import OpenAI from 'openai'
import { withRunForge } from '../sdk-ts/index'
const openai = withRunForge(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }))
const res = await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'hi' }],
})
Option B — explicit tracker
import OpenAI from 'openai'
import { RunForge } from '../sdk-ts/index'
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! })
const rf = new RunForge({
apiKey: process.env.RUNFORGE_API_KEY!,
endpoint: process.env.RUNFORGE_ENDPOINT || 'http://localhost:3000/api/ingest',
projectId: process.env.RUNFORGE_PROJECT_ID,
})
await rf.track({ model: 'gpt-4o-mini', experiment: 'quickstart' }, () =>
openai.chat.completions.create({ model: 'gpt-4o-mini', messages: [{ role: 'user', content: 'hi' }] })
)
// Tokens, cost, latency auto‑extracted; no prompts/outputs sent to RunForge
5) Track a run — Python (auto-extraction)
Option A — 2‑line wrapper
from openai import OpenAI
from runforge import with_runforge
import os
client = with_runforge(OpenAI(api_key=os.environ['OPENAI_API_KEY']))
res = client.chat.completions.create(model='gpt-4o-mini', messages=[{"role":"user","content":"hi"}])
Option B — explicit tracker
from runforge import RunForge
from openai import OpenAI
import os
rf = RunForge(
    api_key=os.environ['RUNFORGE_API_KEY'],
    endpoint=os.environ.get('RUNFORGE_ENDPOINT', 'http://localhost:3000/api/ingest'),
    project_id=os.environ.get('RUNFORGE_PROJECT_ID'),
)
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
rf.track(
    {"model": "gpt-4o-mini", "experiment": "quickstart"},
    lambda: client.chat.completions.create(
        model='gpt-4o-mini',
        messages=[{"role": "user", "content": "hi"}],
    ),
)
6) Streaming tip (OpenAI)
- Use stream: true and stream_options: { include_usage: true } to receive final usage in the last chunk.
- For streamed text, you can compute output tokens via countStreamingOutputTokens(model, fullText) and POST once with the final metrics (see the sketch below).
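A minimal sketch of that flow, assuming countStreamingOutputTokens is exported from the TS SDK at the same path as the earlier examples:

```ts
import OpenAI from 'openai'
import { countStreamingOutputTokens } from '../sdk-ts/index' // assumed export location

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'hi' }],
  stream: true,
  stream_options: { include_usage: true }, // usage arrives in the final chunk
})

let fullText = ''
let usage: { completion_tokens: number } | undefined
for await (const chunk of stream) {
  fullText += chunk.choices[0]?.delta?.content ?? ''
  if (chunk.usage) usage = chunk.usage // null on every chunk except the last
}

// Prefer provider-reported usage; fall back to local counting.
const outputTokens = usage?.completion_tokens ?? countStreamingOutputTokens('gpt-4o-mini', fullText)
```

Either way, POST the final metrics once at the end of the stream rather than per chunk.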
7) See it live
- Open /runs and /dashboard — updates arrive within seconds via Convex (see the sketch below).
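The live updates come from Convex subscriptions: the pages re-render whenever new runs land, with no polling. A hypothetical sketch of what the /runs page does; the api.runs.list query name and row fields are assumed, not taken from the repo:

```tsx
'use client'
// Hypothetical sketch: a Convex subscription keeps the runs list live.
import { useQuery } from 'convex/react'
import { api } from '../convex/_generated/api'

export function RunsList() {
  // api.runs.list is an assumed query name; useQuery re-runs it reactively.
  const runs = useQuery(api.runs.list, { limit: 50 })
  if (runs === undefined) return <p>Loading…</p>
  return (
    <ul>
      {runs.map((r) => (
        <li key={r._id}>{r.model}</li>
      ))}
    </ul>
  )
}
```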
8) Sample data
- Run the sample generator to populate the dashboards.
Notes
- Do not send prompts or completions to /api/ingest; optionally, send a promptHash only.
- In the OSS template, an Authorization: Bearer <INGEST_API_KEY> header is required and a projectId must be provided (see the sketch below).
- TODO: Add per‑project API key validation and idempotency keys to route handler.
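For reference, a raw call to the ingest route might look like the sketch below. The header and projectId requirements come from the notes above; every other field name is illustrative, and the authoritative schema is whatever /app/api/ingest/route.ts validates:

```ts
// Hedged sketch of a direct ingest call. Only the auth header and projectId
// are documented requirements; the remaining field names are assumptions.
const res = await fetch(process.env.RUNFORGE_ENDPOINT ?? 'http://localhost:3000/api/ingest', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.INGEST_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    projectId: process.env.RUNFORGE_PROJECT_ID,
    model: 'gpt-4o-mini',
    experiment: 'quickstart',
    // optional: promptHash '<sha256-of-prompt>'; never the prompt itself
  }),
})
if (!res.ok) throw new Error(`ingest failed: ${res.status}`)
```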
See also
- Examples:
  - examples/openrouter-node/index.ts
  - examples/openai-node/index.ts
  - examples/openai-python/main.py
- Integrations guides: docs/integrations/*