Understanding Your RunForge Dashboard

What You'll Learn

How to read and interpret your RunForge dashboard to make smart decisions about your AI usage and costs.

Dashboard Overview

Your dashboard is your mission control for AI monitoring. It shows you the most important information about your AI applications at a glance.

Key Metrics Explained

📊 Total Cost

What it shows: How much you've spent on AI calls in the selected time period
Why it matters: Track your AI budget and identify cost trends
What to look for:
  • Sudden spikes (might indicate a bug or unexpected usage)
  • Gradual increases (normal growth or a need for optimization)
  • Zero costs (check that tracking is working properly)

⚡ Average Response Time

What it shows: How long your AI calls take to complete (in milliseconds)
Why it matters: Slow responses hurt user experience
What to look for:
  • Under 1000ms: Excellent for most applications
  • 1000-3000ms: Good for most use cases
  • Over 3000ms: May need optimization or a different model

✅ Success Rate

What it shows: Percentage of AI calls that completed successfully
Why it matters: Reliability is crucial for user experience
What to look for:
  • Above 99%: Excellent reliability
  • 95-99%: Good, but monitor for patterns
  • Below 95%: Investigate errors immediately

🔄 Total Requests

What it shows: Number of AI calls made in the time period
Why it matters: Understand usage patterns and scaling needs
What to look for:
  • Growth trends over time
  • Usage spikes during certain hours/days
  • Unexpected drops (might indicate issues)
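The four headline metrics above can all be derived from raw call records. As a minimal sketch of how they relate (the record fields `cost_usd`, `latency_ms`, and `success` are illustrative, not RunForge's actual export schema):

```python
from statistics import mean

def summarize(calls):
    """Compute the four headline dashboard metrics from raw call records.

    Each record is a dict with illustrative fields: cost_usd, latency_ms, success.
    """
    total_requests = len(calls)
    total_cost = sum(c["cost_usd"] for c in calls)
    avg_latency_ms = mean(c["latency_ms"] for c in calls) if calls else 0.0
    successes = sum(1 for c in calls if c["success"])
    success_rate = 100.0 * successes / total_requests if total_requests else 0.0
    return {
        "total_cost": round(total_cost, 4),
        "avg_latency_ms": round(avg_latency_ms, 1),
        "success_rate": round(success_rate, 2),
        "total_requests": total_requests,
    }

calls = [
    {"cost_usd": 0.002, "latency_ms": 850, "success": True},
    {"cost_usd": 0.010, "latency_ms": 2400, "success": True},
    {"cost_usd": 0.004, "latency_ms": 1200, "success": False},
]
print(summarize(calls))
```

Note that average latency can hide outliers; if your dashboard also offers percentile latency (p95/p99), prefer it for spotting slow tails.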

Time Period Controls

Changing Time Ranges

  1. Look for the time selector (usually in the top-right)
  2. Choose a range from the common options:
     • Last 24 hours: for real-time monitoring
     • Last 7 days: for weekly patterns and trends
     • Last 30 days: for monthly budgeting and planning
     • Custom range: for specific analysis periods

Best Practices

  • Daily monitoring: Check last 24 hours for immediate issues
  • Weekly reviews: Look at 7-day trends for optimization opportunities
  • Monthly planning: Use 30-day data for budgeting and capacity planning

Model and Provider Breakdown

Understanding Model Performance

Your dashboard shows performance by AI model:

GPT-4:
  • Higher cost, better quality
  • Good for complex tasks and creative writing
  • Typical cost: $15-30 per million output tokens

GPT-3.5-Turbo/GPT-4o-mini:
  • Lower cost, good performance
  • Great for simple tasks and high-volume applications
  • Typical cost: $0.15-2 per million tokens

Claude Models:
  • Competitive performance and pricing
  • Good for analysis and reasoning tasks

Cost Optimization Tips

  • Use cheaper models for simple tasks (summaries, basic Q&A)
  • Use expensive models for complex tasks (creative writing, analysis)
  • Monitor token usage - shorter prompts = lower costs
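Because pricing is per token, you can estimate a call's cost before making it. A sketch using illustrative per-million-token rates (check your provider's current price list; these numbers are examples, not authoritative):

```python
# Illustrative per-million-token rates in USD; real rates vary by provider and date.
PRICES = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "gpt-4": {"input": 30.00, "output": 60.00},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate one call's cost from token counts and per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A 500-token prompt with a 200-token reply is far cheaper on the smaller model.
cheap = estimate_cost("gpt-4o-mini", 500, 200)
pricey = estimate_cost("gpt-4", 500, 200)
print(f"gpt-4o-mini: ${cheap:.6f}  gpt-4: ${pricey:.6f}")
```

Running this kind of estimate against your dashboard's token counts is a quick way to sanity-check that reported costs match expectations.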

Project Comparison

Multiple Projects View

If you have multiple projects, you can:
  1. Compare costs across different applications
  2. Identify high-usage projects needing attention
  3. Track performance differences between projects

Switching Between Projects

  1. Look for the project selector (usually in top navigation)
  2. Click to see dropdown of all your projects
  3. Select a project to view its specific data

Real-Time Monitoring

Live Updates

Your dashboard updates automatically when new AI calls happen:
  • Costs update within seconds
  • Metrics refresh automatically
  • Charts show the latest data points

What to Monitor

  • Sudden cost spikes: Might indicate runaway processes
  • Error rate increases: Could signal API issues or code problems
  • Latency spikes: May indicate performance problems

Alert Indicators

Visual Alerts

Look for:
  • 🔴 Red indicators: Immediate attention needed
  • 🟡 Yellow warnings: Monitor closely
  • 🟢 Green status: Everything normal

When to Take Action

Immediate action needed:
  • Success rate below 95%
  • Sudden 5x cost increase
  • Response times over 10 seconds

Monitor closely:
  • Gradual cost increases
  • Slowly increasing response times
  • Minor increases in error rates
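These action thresholds are simple to encode. A sketch that maps current metrics to the same red/yellow/green statuses (the "red" thresholds mirror this guide; the "yellow" cutoffs, such as a 20% cost increase, are assumptions you should tune for your workload):

```python
def triage(success_rate, cost_ratio, avg_latency_ms):
    """Map metrics to red/yellow/green using the thresholds in this guide.

    cost_ratio is current-period cost divided by the previous period's cost.
    """
    if success_rate < 95 or cost_ratio >= 5 or avg_latency_ms > 10_000:
        return "red"      # immediate action needed
    if success_rate < 99 or cost_ratio > 1.2 or avg_latency_ms > 3_000:
        return "yellow"   # monitor closely
    return "green"        # everything normal

print(triage(success_rate=99.5, cost_ratio=1.05, avg_latency_ms=900))
print(triage(success_rate=97.0, cost_ratio=1.50, avg_latency_ms=2500))
print(triage(success_rate=92.0, cost_ratio=6.00, avg_latency_ms=1200))
```

A check like this is useful in CI or a cron job when you want notifications beyond what the dashboard's visual indicators provide.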

Export and Reporting

Data Export Options

Most dashboards allow you to:
  1. Download CSV: Raw data for spreadsheet analysis
  2. Generate reports: Formatted summaries for stakeholders
  3. Share snapshots: Screenshots or links for team communication
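An exported CSV can also be analyzed without a spreadsheet. A sketch using only Python's standard library (the column names here are illustrative, not RunForge's actual export format):

```python
import csv
import io
from collections import defaultdict

# Illustrative export; real column names depend on your dashboard's CSV format.
raw = """model,cost_usd,latency_ms
gpt-4,0.0300,2100
gpt-4o-mini,0.0002,700
gpt-4o-mini,0.0003,850
"""

# Sum cost per model, then print the biggest spenders first.
cost_by_model = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    cost_by_model[row["model"]] += float(row["cost_usd"])

for model, cost in sorted(cost_by_model.items(), key=lambda kv: -kv[1]):
    print(f"{model}: ${cost:.4f}")
```

For a real file, replace `io.StringIO(raw)` with `open("export.csv", newline="")`.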

Regular Reporting

Daily: Quick health check of key metrics
Weekly: Detailed performance and cost analysis
Monthly: Budget reporting and optimization planning

Troubleshooting Dashboard Issues

No Data Showing

Possible causes:
  • No AI calls made yet
  • Wrong time period selected
  • Tracking not properly configured

Solutions:
  • Check whether your applications are making tracked calls
  • Verify your time range includes when you made calls
  • Review your SDK integration setup

Inconsistent Metrics

Possible causes:
  • Multiple projects mixed together
  • Time zone differences
  • Caching delays

Solutions:
  • Ensure you're viewing the correct project
  • Check your time zone settings
  • Refresh the page or wait a few minutes

Advanced Features

Custom Experiments

Track different versions or configurations:
  • A/B test different prompts
  • Compare model performance
  • Monitor feature rollouts

Filtering and Drilling Down

  • Filter by time periods
  • Group by model or provider
  • View individual request details

Quick Reference

Healthy Metrics Ranges

  • Cost growth: Under 20% month-over-month (unless planned)
  • Response time: Under 2 seconds for most applications
  • Success rate: Above 99%
  • Token efficiency: Stable or improving over time

Red Flags to Watch For

  • Sudden 2x+ cost increases
  • Success rates dropping below 95%
  • Response times over 5 seconds
  • Unexplained traffic spikes
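These red flags can also be checked programmatically by comparing two consecutive reporting periods. A sketch (the field names are illustrative, and "unexplained traffic spike" is approximated here as requests doubling):

```python
def red_flags(current, previous):
    """Return the red flags listed above, comparing two periods of summary metrics.

    Each period is a dict with illustrative fields:
    cost, success_rate, avg_latency_ms, requests.
    """
    flags = []
    if previous["cost"] > 0 and current["cost"] >= 2 * previous["cost"]:
        flags.append("cost more than doubled")
    if current["success_rate"] < 95:
        flags.append("success rate below 95%")
    if current["avg_latency_ms"] > 5_000:
        flags.append("response times over 5 seconds")
    if previous["requests"] > 0 and current["requests"] >= 2 * previous["requests"]:
        flags.append("traffic spike (requests doubled)")
    return flags

prev = {"cost": 10.0, "success_rate": 99.3, "avg_latency_ms": 1200, "requests": 5000}
curr = {"cost": 25.0, "success_rate": 94.1, "avg_latency_ms": 1400, "requests": 5200}
print(red_flags(curr, prev))
```

An empty list means none of the red flags fired; anything returned is worth investigating before the next review cycle.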