LangChain
Where to wrap
- Wrap the final LLM call used by your chain/tool with `RunForge.track`.
- Example: in a tool's `call()` or wherever you invoke `llm.invoke()` (see the sketch below).
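For a concrete picture, here is a minimal sketch of wrapping inside a tool. It assumes an already-configured `rf` client (constructed as in the example below); the `DynamicTool`, its name, and the prompt are illustrative, not part of RunForge.

```ts
import { DynamicTool } from '@langchain/core/tools'
import { ChatOpenAI } from '@langchain/openai'

const llm = new ChatOpenAI({ model: 'gpt-4o-mini' })

// Hypothetical tool; `rf` is a configured RunForge client (see below).
const summarize = new DynamicTool({
  name: 'summarize',
  description: 'Summarize the given text.',
  func: async (input: string) => {
    // Wrap only the final LLM call; the rest of the tool runs untracked.
    const res = await rf.track(
      { model: 'gpt-4o-mini', experiment: 'lc' },
      () => llm.invoke(`Summarize: ${input}`)
    )
    return String(res.content) // tool output stays local; RunForge gets metadata only
  },
})
```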
Minimal example (TypeScript)
```ts
// Assumes `llm` is an already-constructed LangChain chat model (e.g. ChatOpenAI).
const rf = new RunForge({
  apiKey: process.env.RUNFORGE_API_KEY!,
  endpoint: process.env.RUNFORGE_ENDPOINT,
  projectId: process.env.RUNFORGE_PROJECT_ID,
})

// track() resolves to the callback's result, so `out` is the model response.
const out = await rf.track({ model: 'gpt-4o-mini', experiment: 'lc' }, () => llm.invoke('hi'))
```
Notes

- Do not send prompts or outputs to RunForge.
- For streaming chains, accumulate the output and post once, or rely on the provider's final usage if available (see the sketch below).
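A minimal sketch of the accumulate-and-post-once pattern, assuming `rf.track` records a single run around the whole callback (its exact streaming semantics are not specified here):

```ts
// Assumes `llm` is a LangChain chat model and `rf` a configured RunForge client.
const text = await rf.track({ model: 'gpt-4o-mini', experiment: 'lc' }, async () => {
  let acc = ''
  const stream = await llm.stream('hi')
  for await (const chunk of stream) {
    acc += String(chunk.content) // accumulate locally; nothing is posted per token
  }
  return acc // resolves once, so the run is recorded a single time
})
```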