Python

Install

pip install runforge openai
from runforge import with_runforge
from openai import OpenAI
import os

# Wrap the client; calls made through it are tracked automatically.
client = with_runforge(OpenAI(api_key=os.environ['OPENAI_API_KEY']))
res = client.chat.completions.create(
    model='gpt-4o-mini',
    messages=[{"role": "user", "content": "hi"}],
)

Explicit tracker API

from typing import Optional

class RunForge:
  def __init__(self, api_key: str, endpoint: str, project_id: Optional[str] = None): ...
  def track(self, metadata_or_fn, maybe_fn=None): ...
  • Wraps sync or async LLM calls.
  • Auto‑extracts usage from responses; trusts OpenRouter usage.total_cost and estimates costs for other providers. The server verifies and computes the authoritative cost.
  • Sends only usage metadata to /api/ingest. Do not send prompts or outputs.
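The auto‑extraction step above can be sketched as a small pure function. This is illustrative only, not the SDK's actual implementation; the field names follow the chat-completions `usage` object, and the OpenRouter `total_cost` handling mirrors the rule stated above:

```python
def extract_usage(resp: dict) -> dict:
    """Pull only usage metadata from a chat-completion-style response.

    Sketch of the auto-extraction described above; prompts and outputs
    are deliberately never read.
    """
    usage = resp.get("usage") or {}
    record = {
        "model": resp.get("model"),
        "prompt_tokens": usage.get("prompt_tokens"),
        "completion_tokens": usage.get("completion_tokens"),
        "total_tokens": usage.get("total_tokens"),
    }
    # OpenRouter responses include usage.total_cost; trust it when present.
    # Other providers' costs are estimated client-side and verified server-side.
    if "total_cost" in usage:
        record["cost"] = usage["total_cost"]
    return record
```

A record like this is what gets sent to /api/ingest; the server recomputes the authoritative cost from it.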

Example (explicit tracker)

from runforge import RunForge
from openai import OpenAI
import os

# Construct the tracker explicitly; endpoint is your RunForge ingest host.
forge = RunForge(
    api_key=os.environ['RUNFORGE_API_KEY'],
    endpoint='https://your-runforge-host',
    project_id='my-project',  # optional
)

client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])

# The two-argument form track(metadata, fn) is assumed here to invoke fn
# and return its result; check the SDK reference for the exact semantics.
res = forge.track(
    {"tags": ["example"]},
    lambda: client.chat.completions.create(
        model='gpt-4o-mini',
        messages=[{"role": "user", "content": "hi"}],
    ),
)

Streaming

  • For OpenAI, use stream=True and stream_options={"include_usage": True} (when available) to receive final usage in the last chunk.
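With include_usage enabled, OpenAI delivers one final chunk whose choices list is empty and whose usage field is set; every earlier chunk has usage unset. A minimal sketch of consuming such a stream, using plain dicts in place of SDK chunk objects:

```python
def consume_stream(chunks):
    """Accumulate streamed text and capture the final usage chunk.

    With stream_options={"include_usage": True}, the last chunk carries
    `usage` and has an empty `choices` list, so both branches below are
    exercised exactly once per stream.
    """
    text_parts, usage = [], None
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                text_parts.append(delta["content"])
        if chunk.get("usage"):  # only the final chunk sets this
            usage = chunk["usage"]
    return "".join(text_parts), usage
```

The captured usage dict is what the tracker forwards; if the final usage chunk is unavailable (older models or providers), token counts fall back to estimation.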

Error & retries

  • Exceptions are tracked with status='error' and error code; your application error is re‑raised.
  • Use runId for idempotency when retrying application calls.
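A sketch of the retry pattern above: generate one runId before the first attempt and reuse it on every retry so the tracker can deduplicate. How the runId is actually attached to a tracked call is an assumption here (the `fn(run_id)` parameter below is hypothetical); consult the SDK reference for the real mechanism:

```python
import uuid

def call_with_retry(fn, attempts=3):
    """Retry an application call with a runId that is stable across attempts.

    Failed attempts are tracked with status='error' and the exception is
    re-raised, so only the final error escapes if all attempts fail.
    """
    run_id = str(uuid.uuid4())  # generated once, reused on every retry
    last_err = None
    for _ in range(attempts):
        try:
            return fn(run_id)  # hypothetical: pass runId through to the tracked call
        except Exception as err:
            last_err = err
    raise last_err
```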

Privacy

  • Do not send prompts/outputs; the SDK and route are designed to handle usage metadata only.

See also: 05-apis.md, sdk-auto-extraction-guide.md