# Ingest

## Purpose
Ingest single or batch LLM runs into Convex `runs_live` with optional metadata.
## Auth
- `Authorization: Bearer <INGEST_API_KEY>` or header `x-api-key` (same value in OSS).
- Body must include `projectId`.
- TODO: Switch to per-project API keys derived via `convex/apiKeys.ts`.
## Request
- Single: body must include `projectId` and telemetry fields.
- Batch: `{ projectId?: string; runs: LLMRunInput[] }`.
```ts
// excerpt of LLMRunInputSchema (see app/api/ingest/route.ts)
{
  id?: string,
  provider: string,
  model: string,
  // privacy: do not send prompts or completions
  prompt?: string,
  completion?: string,
  tokens?: { input: number, output: number },
  cost?: number,
  latency?: number,
  timestamp?: string,
  metadata?: Record<string, any>,
  inputTokens?: number,
  outputTokens?: number,
  costUSD?: number,
  latencyMs?: number,
  status?: 'success' | 'error' | 'timeout',
  errorCode?: string,
  promptHash?: string,
  promptPreview?: string, // omit in privacy-first mode
  experimentId?: string,
  traceId?: string,
  idempotencyKey?: string,
  runId?: string,
  costSource?: 'provider' | 'catalog' | 'estimated',
  costEstimated?: boolean,
}
```
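For illustration, a minimal validation sketch for the two required fields (`provider` and `model`). The `validateRunInput` helper and the trimmed `LLMRunInput` interface below are hypothetical; the real route validates the full schema in `app/api/ingest/route.ts`.

```typescript
// Hypothetical, trimmed-down shape; the real schema includes many more
// optional fields (see the excerpt above).
interface LLMRunInput {
  provider: string;
  model: string;
  inputTokens?: number;
  outputTokens?: number;
  status?: 'success' | 'error' | 'timeout';
  runId?: string;
  [key: string]: unknown;
}

// Hypothetical helper: reject bodies missing the required strings.
function validateRunInput(body: unknown): LLMRunInput {
  const run = body as Partial<LLMRunInput>;
  if (typeof run?.provider !== 'string' || typeof run?.model !== 'string') {
    throw new Error('provider and model are required strings');
  }
  return run as LLMRunInput;
}

const ok = validateRunInput({ provider: 'openai', model: 'gpt-4o', inputTokens: 100 });
console.log(ok.model); // → gpt-4o
```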
## Cost recompute
- OpenRouter: trust `usage.total_cost` if provided (sets `costSource='provider'`, `costEstimated=false`).
- Other providers: the server computes cost via the pricing registry (sets `costSource='catalog'`; `costEstimated` reflects the pricing entry).
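The recompute rule above can be sketched as follows. The `PricingEntry` shape, the `recomputeCost` helper, and the per-million-token rates are assumptions for illustration, not the real registry:

```typescript
// Hypothetical registry entry shape; rates are per million tokens, in USD.
interface PricingEntry { inputPerMTok: number; outputPerMTok: number; estimated: boolean }

interface CostResult { costUSD: number; costSource: 'provider' | 'catalog'; costEstimated: boolean }

function recomputeCost(
  provider: string,
  model: string,
  inputTokens: number,
  outputTokens: number,
  providerTotalCost: number | undefined, // e.g. OpenRouter usage.total_cost
  registry: Record<string, PricingEntry>,
): CostResult {
  // OpenRouter path: trust the provider-reported total when present.
  if (provider === 'openrouter' && providerTotalCost !== undefined) {
    return { costUSD: providerTotalCost, costSource: 'provider', costEstimated: false };
  }
  // Catalog path: compute from token counts and the pricing entry.
  const entry = registry[model];
  const costUSD = entry
    ? (inputTokens / 1e6) * entry.inputPerMTok + (outputTokens / 1e6) * entry.outputPerMTok
    : 0;
  return { costUSD, costSource: 'catalog', costEstimated: entry ? entry.estimated : true };
}

// Made-up rates purely for the example.
const registry = { 'gpt-4o': { inputPerMTok: 2.5, outputPerMTok: 10, estimated: false } };
const fromCatalog = recomputeCost('openai', 'gpt-4o', 1_000_000, 0, undefined, registry);
console.log(fromCatalog.costSource); // → catalog
```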
## Response
- Single: `{ ok: true, id: string }`
- Batch: `{ ok: true, processed: number, ids: string[] }`
## Examples
```bash
curl -X POST $BASE/api/ingest \
  -H "Authorization: Bearer $INGEST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "<projectId>",
    "provider": "openai", "model": "gpt-4o",
    "inputTokens": 100, "outputTokens": 50,
    "costUSD": 0, "latencyMs": 1200, "status": "success"
  }'
```
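A batch request follows the `{ projectId?, runs }` shape described under Request. Below is a hedged TypeScript sketch that only builds the batch body; `buildBatchPayload` is a hypothetical helper, and the commented-out `fetch` shows how it would be sent to the same endpoint:

```typescript
interface LLMRunInput {
  provider: string;
  model: string;
  inputTokens?: number;
  outputTokens?: number;
  status?: 'success' | 'error' | 'timeout';
}

// Hypothetical helper: wraps runs in the batch shape { projectId, runs }.
function buildBatchPayload(projectId: string, runs: LLMRunInput[]) {
  return { projectId, runs };
}

const payload = buildBatchPayload('<projectId>', [
  { provider: 'openai', model: 'gpt-4o', inputTokens: 100, outputTokens: 50, status: 'success' },
  { provider: 'openai', model: 'gpt-4o-mini', inputTokens: 200, outputTokens: 80, status: 'success' },
]);

// Sending it (not executed here) mirrors the single-run curl above:
//   await fetch(`${BASE}/api/ingest`, {
//     method: 'POST',
//     headers: { Authorization: `Bearer ${INGEST_API_KEY}`, 'Content-Type': 'application/json' },
//     body: JSON.stringify(payload),
//   });
console.log(payload.runs.length); // → 2
```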
## Notes
- Do not send prompts or completions; only usage metadata.
- Idempotency: `runId` is honored (Convex checks by run id). `idempotencyKey` is accepted but not enforced yet. TODO: enforce `idempotencyKey` in Convex/route.
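The `runId` check amounts to a lookup-before-insert. This is a hedged sketch with an in-memory set standing in for the stored runs; the real check happens inside Convex, not in the route:

```typescript
// Hypothetical dedupe: skip any run whose runId has already been ingested.
// `seen` stands in for a lookup against runs already stored in Convex.
function dedupeByRunId<T extends { runId?: string }>(runs: T[], seen: Set<string>): T[] {
  const accepted: T[] = [];
  for (const run of runs) {
    if (run.runId && seen.has(run.runId)) continue; // duplicate: skip
    if (run.runId) seen.add(run.runId);
    accepted.push(run); // runs without a runId are always accepted
  }
  return accepted;
}

const seen = new Set<string>();
const out = dedupeByRunId([{ runId: 'a' }, { runId: 'a' }, { runId: 'b' }, {}], seen);
console.log(out.length); // → 3 (second 'a' was skipped)
```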
See also: ../convex/runs.md