KPI Reference Guide
This reference guide provides detailed information about every Key Performance Indicator (KPI) available in the Analytics dashboard.
Understanding KPIs
What is a KPI?
A Key Performance Indicator (KPI) is a measurable value that demonstrates how effectively your organization is using AI. Each KPI card in the dashboard shows:
Current value — The metric for the selected period
Trend badge — Comparison to the previous equivalent period
Visual indicator — Color-coded to show positive/negative trends
How Trends Are Calculated
Trends compare the current period to the previous equivalent period:
Trend formula: ((Current - Previous) / Previous) × 100
Overview Dashboard KPIs
Active Users
Definition: The number of unique organization members who sent at least one message during the selected period.
Formula:
COUNT(DISTINCT user_id WHERE messages_sent >= 1)
What it measures:
Platform adoption
User engagement
Active user base size
Good trend: 🟢 Increasing (more users adopting AI)
Example:
Period: Last 30 days
Active Users: 45
Trend: +12.5% (5 more users than previous 30 days)
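A minimal sketch of the distinct-user count over a raw message log, assuming hypothetical `(user_id, sent_on)` records (the record shape is illustrative, not the platform's schema):

```python
from datetime import date

# Hypothetical message records: (user_id, sent_on)
messages = [
    ("u1", date(2024, 5, 1)),
    ("u1", date(2024, 5, 2)),
    ("u2", date(2024, 5, 3)),
    ("u3", date(2024, 5, 3)),
]

def active_users(messages, start, end):
    """Count unique senders with at least one message in [start, end]."""
    return len({user for user, sent_on in messages if start <= sent_on <= end})

print(active_users(messages, date(2024, 5, 1), date(2024, 5, 31)))  # 3
```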
Usage Intensity
Definition: The average number of messages sent per active user.
Formula:
Total messages / Active users
What it measures:
How deeply users engage with AI
Average workload per user
Platform stickiness
Good trend: 🟢 Increasing (users are engaging more deeply)
Example:
Total messages: 1,350
Active users: 45
Usage Intensity: 30 messages/user
Interpretation:
<10: Light usage
10-30: Moderate usage
30-50: Heavy usage
>50: Power user behavior
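The formula and interpretation bands above can be sketched together in Python; how the exact boundary values (10, 30, 50) are bucketed is an assumption:

```python
def usage_intensity(total_messages: int, active_users: int) -> float:
    """Average messages per active user; 0.0 when there are no users."""
    return total_messages / active_users if active_users else 0.0

def interpret(intensity: float) -> str:
    """Map intensity to the guide's interpretation bands.
    Boundary handling (inclusive upper edges) is an assumption."""
    if intensity < 10:
        return "Light usage"
    if intensity <= 30:
        return "Moderate usage"
    if intensity <= 50:
        return "Heavy usage"
    return "Power user behavior"

print(usage_intensity(1350, 45))  # 30.0
```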
Mates per User
Definition: The average number of different Mates used per active user.
Formula:
SUM(DISTINCT mate_id per user) / COUNT(DISTINCT user_id)
What it measures:
Diversity of AI usage
Mate discovery
Platform exploration
Good trend: 🟢 Increasing (users exploring more Mates)
Example:
Distinct Mates used: 135
Active users: 45
Mates per User: 3.0
Interpretation:
1-2: Users stick to familiar Mates
3-5: Good exploration
>5: High diversity, users leveraging specialized Mates
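A sketch of the per-user distinct-Mate average over hypothetical `(user_id, mate_id)` interaction pairs (the pair format and Mate names are illustrative):

```python
# Hypothetical (user_id, mate_id) interaction pairs
interactions = [
    ("u1", "writer"), ("u1", "coder"), ("u1", "writer"),
    ("u2", "coder"),
    ("u3", "writer"), ("u3", "researcher"), ("u3", "coder"),
]

def mates_per_user(interactions) -> float:
    """Average count of distinct Mates per active user."""
    per_user: dict[str, set[str]] = {}
    for user, mate in interactions:
        per_user.setdefault(user, set()).add(mate)
    if not per_user:
        return 0.0
    return sum(len(mates) for mates in per_user.values()) / len(per_user)

# u1 used 2 distinct Mates, u2 used 1, u3 used 3: (2+1+3)/3 = 2.0
print(mates_per_user(interactions))  # 2.0
```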
Growth
Definition: The percentage change in active users compared to the previous period.
Formula:
((Current active users - Previous active users) / Previous active users) × 100
What it measures:
Adoption velocity
Platform momentum
User acquisition success
Good trend: 🟢 Positive growth
Example:
Current period: 45 active users
Previous period: 40 active users
Growth: +12.5%
Engagement Dashboard KPIs
Conversations / User
Definition: The average number of conversation sessions per active user.
Formula:
Total conversations / Active users
What it measures:
Session frequency
How often users return to the platform
Engagement depth
Good trend: 🟢 Increasing (users having more sessions)
Example:
Total conversations: 225
Active users: 45
Conversations / User: 5.0
Interpretation:
1-3: Occasional use
4-7: Regular use
>7: Frequent, habitual use
Messages / User
Definition: The average number of messages sent per active user.
Formula:
Total messages / Active users
What it measures:
Overall engagement level
Platform usage intensity
User activity
Good trend: 🟢 Increasing (users engaging more)
Example:
Total messages: 1,350
Active users: 45
Messages / User: 30
Mates Explored / User
Definition: The average number of distinct Mates used per active user.
Formula:
COUNT(DISTINCT mate_id per user) / COUNT(DISTINCT user_id)
What it measures:
Mate discovery
Usage diversity
Platform exploration
Good trend: 🟢 Increasing (users trying more Mates)
Example:
User A used 3 Mates
User B used 5 Mates
User C used 2 Mates
Average: (3+5+2)/3 = 3.3 Mates/User
Activation Rate
Definition: The percentage of organization members who are actively using the platform.
Formula:
(Active users / Total organization members) × 100
What it measures:
Platform adoption
Onboarding success
User activation
Good trend: 🟢 Increasing (more members becoming active)
Example:
Active users: 45
Total members: 60
Activation Rate: 75%
Interpretation:
<30%: Low adoption — needs attention
30-60%: Moderate adoption
60-80%: Good adoption
>80%: Excellent adoption
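The rate and its adoption bands can be sketched as follows; the band boundary handling is an assumption:

```python
def activation_rate(active: int, total_members: int) -> float:
    """Percentage of organization members who are active; 0.0 for empty orgs."""
    return active / total_members * 100 if total_members else 0.0

def adoption_band(rate: float) -> str:
    # Band boundaries follow the guide; edge handling is an assumption.
    if rate < 30:
        return "Low adoption"
    if rate < 60:
        return "Moderate adoption"
    if rate <= 80:
        return "Good adoption"
    return "Excellent adoption"

print(activation_rate(45, 60))  # 75.0
```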
Mates Dashboard KPIs
Active Mates
Definition: The number of Mates that received at least one request during the period.
Formula:
COUNT(DISTINCT mate_id WHERE requests >= 1)
What it measures:
Mate utilization
Platform diversity
Mate portfolio health
Good trend: 🟢 Increasing (more Mates being used)
Example:
Total Mates in organization: 20
Active Mates: 12
Utilization: 60%
Requests
Definition: The total number of messages sent to Mates (Mate invocations).
Formula:
COUNT(messages WHERE recipient_type = 'mate')
What it measures:
Total AI workload
Mate demand
Platform usage volume
Good trend: 🟢 Increasing (more AI usage)
Example:
Requests: 1,125
This represents all messages directed to Mates
Users (Mates Dashboard)
Definition: The number of unique users who interacted with at least one Mate.
Formula:
COUNT(DISTINCT user_id WHERE mate_requests >= 1)
What it measures:
Mate adoption
User reach
Platform penetration
Good trend: 🟢 Increasing (more users using Mates)
Tokens / Response
Definition: The average number of tokens consumed per AI response.
Formula:
Total tokens / Total AI responses
What it measures:
Response efficiency
Token optimization
Cost per response
Good trend: 🟢 Decreasing (more efficient responses) or stable
Example:
Total tokens: 2,250,000
AI responses: 1,125
Tokens / Response: 2,000
Interpretation:
<1,000: Very concise responses
1,000-3,000: Normal responses
3,000-5,000: Detailed responses
>5,000: Very verbose responses (may need optimization)
Usage Dashboard KPIs
Total Tokens
Definition: The sum of all input and output tokens consumed during the period.
Formula:
SUM(input_tokens + output_tokens)
What it measures:
Total AI consumption
Platform usage volume
Cost driver
Good trend: Depends on context
🟢 Increasing = more usage (good for adoption)
🔴 Increasing = higher costs (may need optimization)
Example:
Input tokens: 900,000
Output tokens: 1,350,000
Total Tokens: 2,250,000
Estimated Cost
Definition: The approximate cost in USD based on public pricing from LLM providers.
Formula:
SUM(tokens × model_price_per_token)
What it measures:
AI spending
Budget consumption
Cost trends
Good trend: 🟢 Stable or decreasing per user
Example:
Total tokens: 2,250,000
Average price: $0.002 per 1K tokens
Estimated Cost: $4.50
Important notes:
Based on public pricing (actual costs may vary)
Does not include volume discounts
Does not include custom pricing agreements
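A minimal sketch of the cost estimate over per-model token usage. The model names and rates here are hypothetical placeholders; real provider pricing differs by model and often by input vs. output tokens:

```python
# Hypothetical per-model rates in USD per 1K tokens (illustrative only)
PRICE_PER_1K = {"model-a": 0.002, "model-b": 0.01}

def estimated_cost(usage: dict[str, int]) -> float:
    """Sum token counts per model times the public per-1K-token price."""
    return sum(
        tokens / 1000 * PRICE_PER_1K[model]
        for model, tokens in usage.items()
    )

# 2,250,000 tokens at $0.002 per 1K tokens
print(estimated_cost({"model-a": 2_250_000}))  # 4.5
```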
Tokens / Message
Definition: The average number of tokens consumed per message (human + AI).
Formula:
Total tokens / Total messages
What it measures:
Message efficiency
Context size
Optimization opportunity
Good trend: 🟢 Stable or decreasing (more efficient)
Example:
Total tokens: 2,250,000
Total messages: 1,350
Tokens / Message: 1,667
Interpretation:
<1,000: Very efficient
1,000-2,500: Normal
2,500-5,000: High context (long conversations or large prompts)
>5,000: Very high (may need optimization)
Cost / User
Definition: The average estimated cost per active user.
Formula:
Estimated total cost / Active users
What it measures:
Per-user spending
Cost efficiency
Budget planning
Good trend: 🟢 Stable or decreasing
Example:
Estimated cost: $4.50
Active users: 45
Cost / User: $0.10
Interpretation:
<$0.50/user: Very cost-efficient
$0.50-$2/user: Normal
$2-$5/user: High usage or expensive models
>$5/user: Very high (review usage patterns)
Efficiency Score
Definition: A score out of 100 that evaluates token efficiency based on output-to-input ratio and response length variance.
Formula:
100 - ((output_ratio - 2.0) × 20) - (variance_penalty)
Factors:
Output-to-input ratio (ideal: 1.5-2.5)
Response length consistency
Token waste indicators
What it measures:
Overall token optimization
Response quality vs. verbosity
Cost efficiency
Score interpretation:
80-100: Excellent efficiency
60-79: Verbose responses (optimization recommended)
<60: High output ratio (review Mate instructions)
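A sketch of the score calculation under stated assumptions: the guide does not specify how the variance penalty is computed, so it is taken as an input here, and clamping the result to 0-100 is also an assumption:

```python
def efficiency_score(input_tokens: int, output_tokens: int,
                     variance_penalty: float = 0.0) -> float:
    """Score out of 100 based on the output-to-input token ratio.

    variance_penalty is a pass-through assumption; the guide does not
    define how it is derived. Clamping to [0, 100] is also assumed.
    """
    output_ratio = output_tokens / input_tokens
    score = 100 - (output_ratio - 2.0) * 20 - variance_penalty
    return max(0.0, min(100.0, score))

# Output ratio 1,350,000 / 900,000 = 1.5 (inside the ideal 1.5-2.5 band)
print(efficiency_score(900_000, 1_350_000))  # 100.0
```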
Tools Dashboard KPIs
Tool Calls
Definition: The total number of tool invocations during the period.
Formula:
COUNT(tool_calls)
What it measures:
Tool usage volume
External integration activity
Mate capabilities utilization
Good trend: 🟢 Increasing (more tool usage = more advanced workflows)
Success Rate
Definition: The percentage of tool calls that completed successfully.
Formula:
(Successful calls / Total calls) × 100
What it measures:
Tool reliability
Integration health
User experience quality
Good trend: 🟢 High and stable (>95%)
Example:
Total calls: 500
Successful calls: 475
Success Rate: 95%
Interpretation:
>95%: Excellent reliability
90-95%: Good (monitor for issues)
80-90%: Moderate (investigate failures)
<80%: Poor (immediate action needed)
Average Duration
Definition: The average response time for tool calls in seconds.
Formula:
SUM(tool_call_duration) / COUNT(tool_calls)
What it measures:
Tool performance
User experience
API responsiveness
Good trend: 🟢 Low and stable (<2s)
Example:
Total duration: 1,250 seconds
Total calls: 500
Average Duration: 2.5s
Interpretation:
<0.5s: Instant (excellent UX)
0.5-2s: Fast (good UX)
2-5s: Medium (acceptable)
5-10s: Slow (optimization recommended)
>10s: Very slow (poor UX, needs attention)
Total Cost (Tools)
Definition: The total cost of tool calls in USD.
Formula:
SUM(tool_call_cost)
What it measures:
Tool spending
External API costs
Budget consumption
Good trend: 🟢 Stable or decreasing per call
Tools / Message
Definition: The average number of tool calls per agent message.
Formula:
Total tool calls / Total agent messages
What it measures:
Tool dependency
Workflow complexity
Automation level
Example:
Tool calls: 500
Agent messages: 1,125
Tools / Message: 0.44
Interpretation:
<0.3: Low tool usage (mostly conversational)
0.3-0.7: Moderate tool usage (balanced)
0.7-1.5: High tool usage (tool-heavy workflows)
>1.5: Very high (multiple tools per response)
Errors Dashboard KPIs
Message Errors
Definition: The number of messages that encountered an error during processing.
Formula:
COUNT(messages WHERE status = 'error')
What it measures:
LLM reliability
Message processing health
User experience issues
Good trend: 🟢 Low and decreasing
Tool Errors
Definition: The number of tool calls that failed.
Formula:
COUNT(tool_calls WHERE status = 'error')
What it measures:
Tool reliability
Integration health
External API issues
Good trend: 🟢 Low and decreasing
Global Error Rate
Definition: The percentage of all operations (messages + tool calls) that resulted in an error.
Formula:
((Message errors + Tool errors) / (Total messages + Total tool calls)) × 100
What it measures:
Overall system reliability
User experience quality
Platform health
Good trend: 🟢 Low (<5%)
Example:
Message errors: 15
Tool errors: 25
Total messages: 1,350
Total tool calls: 500
Global Error Rate: (40 / 1,850) × 100 = 2.16%
Interpretation:
<5%: Excellent reliability
5-10%: Moderate (monitor closely)
>10%: Critical (immediate action needed)
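The worked example above can be reproduced with a small sketch (function name and zero-operation handling are assumptions):

```python
def global_error_rate(message_errors: int, tool_errors: int,
                      total_messages: int, total_tool_calls: int) -> float:
    """Percentage of all operations (messages + tool calls) that errored."""
    operations = total_messages + total_tool_calls
    if operations == 0:
        return 0.0  # no operations means no measurable error rate (assumption)
    return (message_errors + tool_errors) / operations * 100

# (15 + 25) / (1,350 + 500) × 100
print(round(global_error_rate(15, 25, 1350, 500), 2))  # 2.16
```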
Impacted Users
Definition: The number of unique users who encountered at least one error.
Formula:
COUNT(DISTINCT user_id WHERE errors >= 1)
What it measures:
Error reach
User experience impact
Support workload
Good trend: 🟢 Low and decreasing
Credits Dashboard KPIs
Credits Consumed
Definition: The total Polar units consumed across all meters.
Formula:
SUM(consumed_credits per meter)
What it measures:
Quota consumption
Billing usage
Resource utilization
Good trend: 🟢 Within allocated limits
Credits Remaining
Definition: The total Polar units still available across all meters.
Formula:
SUM(allocated_credits - consumed_credits per meter)
What it measures:
Available quota
Buffer before limit
Planning headroom
Good trend: 🟢 Sufficient buffer (>20%)
LLM Credit Cost
Definition: The cost of LLM tokens consumed via native connections (allmates.ai-managed).
Formula:
SUM(tokens × credit_rate for native connections)
What it measures:
Native LLM spending
Token cost via platform
Managed connection usage
Tool Credit Cost
Definition: The cost of tool calls via native connections (allmates.ai-managed).
Formula:
SUM(tool_calls × credit_rate for native connections)
What it measures:
Native tool spending
Tool cost via platform
Managed tool usage
Attachments Dashboard KPIs
Files Uploaded
Definition: The total number of files uploaded during the period.
Formula:
COUNT(file_uploads)
What it measures:
File usage volume
Content-based workflows
Storage demand
Good trend: 🟢 Increasing (more file-based work)
Total Storage
Definition: The total storage consumed by uploaded files (in GB or MB).
Formula:
SUM(file_size)
What it measures:
Storage consumption
Infrastructure cost
Data volume
Good trend: 🟢 Stable or growing predictably
Tokens Extracted
Definition: The total number of tokens parsed from files via RAG (Retrieval-Augmented Generation).
Formula:
SUM(tokens_extracted_from_files)
What it measures:
Content extraction volume
RAG usage
File processing workload
Good trend: 🟢 Increasing (more content being processed)
Error Rate (Attachments)
Definition: The percentage of files that encountered processing errors.
Formula:
(Files with errors / Total files) × 100
What it measures:
File processing reliability
Format compatibility
Processing pipeline health
Good trend: 🟢 Low (<5%)
Interpretation:
<5%: Excellent processing
5-10%: Moderate (check unsupported formats)
>10%: High (investigate processing issues)