OpenAI (Node)
Behavior
OpenAI responses report token counts in usage.prompt_tokens and usage.completion_tokens, but no cost.
The client can optionally estimate cost locally with tokencost; the server verifies the token counts and computes the authoritative cost from its pricing registry (costSource="catalog").
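The server-side catalog lookup described above can be sketched as follows. The registry contents and per-million-token prices here are illustrative placeholders, not the real pricing registry:

```typescript
// Hypothetical pricing registry: USD per 1M tokens, keyed by model.
// Figures are illustrative only; the real registry is server-maintained.
const CATALOG: Record<string, { inputPerM: number; outputPerM: number }> = {
  'gpt-4o-mini': { inputPerM: 0.15, outputPerM: 0.6 },
}

// Compute the authoritative cost from verified token counts.
export function catalogCost(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number }
): { costUsd: number; costSource: 'catalog' } | undefined {
  const price = CATALOG[model]
  if (!price) return undefined // unknown model: no authoritative cost
  const costUsd =
    (usage.prompt_tokens / 1_000_000) * price.inputPerM +
    (usage.completion_tokens / 1_000_000) * price.outputPerM
  return { costUsd, costSource: 'catalog' }
}
```

Any client-side tokencost estimate is advisory only; the server's catalog value wins.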
Install
npm install openai
The RunForge client in this example is imported from the local ../../sdk-ts directory, so it needs no separate install.
Minimal example
import OpenAI from 'openai'
import { RunForge } from '../../sdk-ts/index'

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! })
const rf = new RunForge({
  apiKey: process.env.RUNFORGE_API_KEY!,
  endpoint: process.env.RUNFORGE_ENDPOINT,
  projectId: process.env.RUNFORGE_PROJECT_ID
})

await rf.track({ model: 'gpt-4o-mini', experiment: 'openai-demo' }, () =>
  client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'hi' }]
  })
)
Streaming (final usage)
const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'hi' }],
  stream: true,
  stream_options: { include_usage: true }
})
// On the last chunk, read usage and then POST a single ingest event
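The consume-then-report loop hinted at by the comment above can be sketched like this. To keep it self-contained, the stream is mocked with an async generator whose chunk shape mirrors Chat Completions streaming output; in real code, iterate the stream returned by client.chat.completions.create:

```typescript
// Minimal chunk shape for Chat Completions streaming with include_usage.
type Chunk = {
  choices: { delta: { content?: string } }[]
  usage?: { prompt_tokens: number; completion_tokens: number } | null
}

// Mock standing in for the real stream; with stream_options.include_usage,
// the final chunk has empty choices and a populated usage object.
async function* mockStream(): AsyncGenerator<Chunk> {
  yield { choices: [{ delta: { content: 'hel' } }], usage: null }
  yield { choices: [{ delta: { content: 'lo' } }], usage: null }
  yield { choices: [], usage: { prompt_tokens: 8, completion_tokens: 2 } }
}

// Accumulate text and capture usage from the final chunk.
export async function collect(stream: AsyncIterable<Chunk>) {
  let text = ''
  let usage: Chunk['usage']
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? ''
    if (chunk.usage) usage = chunk.usage // only the last chunk carries usage
  }
  return { text, usage } // POST usage as a single ingest event after this
}
```

After collect resolves, send one ingest event containing the captured usage rather than one per chunk.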
Troubleshooting
Keys: OPENAI_API_KEY, RUNFORGE_API_KEY, RUNFORGE_PROJECT_ID.
Missing usage: upgrade to a recent OpenAI SDK and pass stream_options: { include_usage: true } on streaming calls.
CORS: make calls from server-side contexts; browser calls are blocked by CORS and would expose your API keys.
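A quick startup guard for the keys listed above can fail fast with a clear message instead of surfacing opaque 401s later; requireEnv is a hypothetical helper, not part of either SDK:

```typescript
// Throw at startup if any required environment variable is unset.
export function requireEnv(names: string[]): Record<string, string> {
  const missing = names.filter((n) => !process.env[n])
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`)
  }
  return Object.fromEntries(names.map((n) => [n, process.env[n] as string]))
}
```

Call it once at boot, e.g. requireEnv(['OPENAI_API_KEY', 'RUNFORGE_API_KEY', 'RUNFORGE_PROJECT_ID']).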