Dashboard Calculations Breakdown

Learn how IQS is calculated


The dashboard card Scores by Category focuses on the trend across categories for each agent. It looks specifically at the performance per category, not the performance per conversation. Differences between performance per category and performance per conversation can be caused by fail categories or differences in the number of ratings per category. 

The dashboard card "Scores by Reviews" allows you to look at individual reviews per agent.

Setup

Team Meow uses the following Rating Categories, with different weights attached to each. The Critical Categories have a weight of only 0.05 because they are pass/fail.

What this means for the calculation:

The highest possible rating for an agent is always 100%, as we calculate the score based only on the rated categories (and not the skipped N/A ones).

Where does the data on the dashboard come from? 

Scores by Category

This card compares scores by category per agent. It does not compare conversation averages for agents, but category averages. 

Each column shows the average of the category ratings given to a specific agent. The "average" column averages those averages.

Note that differences can arise when not all reviews have the same number of ratings. 

In the example below, the average of "Grammar" is 100%.

Relation of individual conversation scores to dashboard conversation scores

If you want to find out the exact conversation scores for your agent, use the last dashboard card, Scores by Reviews, filtered for that agent.


If you want to find out the category scores for your agents, independent of fail categories, use the Scores by Category card.

The main difference is that a failed fail category will not fail the entire conversation here. To make trends visible even when a fail category is triggered every single time, the calculation in these general cards treats each individual category as its own entity.

This allows you to analyze those categories across agents, independent of the errors that they might have made in fail categories.

A practical example for Individual Review Scores

Let’s imagine the following scenario, with these 5 categories.

Agent A has received the following ratings:

The review score is calculated as:

review_score = (cat1_score * cat1_weight + cat2_score * cat2_weight + ...) / (cat1_weight + cat2_weight + ...)

In this case, this means:

review_score = (request_score * 0.05 + clarification_score * 2 + explanation_score * 2 + writing_score * 0.5 + internal_data_score * 1) / (0.05 + 2 + 2 + 0.5 + 1)

If request_score < 50%, the review score becomes 0% instead, because the fail category is triggered.

In the Dashboard, these numbers are rounded. You can switch to displaying two decimal places from the View settings at the top of the page.

The review score of conversation 5 is 0, because the FAIL category automatically sets it to zero.
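As a rough sketch in Python of how a single review score comes together (the category names and weights mirror the example above; the concrete ratings are hypothetical, chosen to reproduce the 9.91% of review 2):

def review_score(ratings, weights, fail_category="request", fail_threshold=0.5):
    # ratings and weights map category names to values; skipped (N/A)
    # categories are simply left out of `ratings`
    if ratings.get(fail_category, 1.0) < fail_threshold:
        return 0.0  # a triggered fail category zeroes the whole review
    weighted_sum = sum(ratings[c] * weights[c] for c in ratings)
    total_weight = sum(weights[c] for c in ratings)
    return weighted_sum / total_weight

weights = {"request": 0.05, "clarification": 2, "explanation": 2,
           "writing": 0.5, "internal_data": 1}
ratings = {"request": 1.0, "clarification": 0.0, "explanation": 0.0,
           "writing": 1.0, "internal_data": 0.0}
print(f"{review_score(ratings, weights):.2%}")  # 9.91%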

Taking these same reviews together as category scores gives the following data: 

Agent A

The Score column shows Agent A's average score across all categories that were rated. It is calculated as the average of the row, i.e. the average of the per-category averages for that agent.
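To illustrate the difference between the two kinds of averages, here is a small sketch in Python with made-up ratings. Note how a single skipped (N/A) category makes the two results diverge:

from statistics import mean

# each review maps category names to ratings; N/A categories are omitted
reviews = [
    {"grammar": 1.0, "solution": 1.0},
    {"grammar": 1.0},                    # "solution" was skipped (N/A)
    {"grammar": 1.0, "solution": 0.0},
]

# average per conversation, then averaged across conversations
per_conversation = mean(mean(r.values()) for r in reviews)

# average per category, then averaged across categories (Scores by Category)
categories = {c for r in reviews for c in r}
per_category = mean(mean(r[c] for r in reviews if c in r) for c in categories)

print(f"{per_conversation:.2%}")  # 83.33%
print(f"{per_category:.2%}")      # 75.00%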

Why would we want that differentiation?

The average per category highlights opportunities for improvement far more explicitly than the average per conversation, before diving into the specifics (in the last dashboard card). It’s like a 10,000-meter view of the performance instead of a granular view.

Both calculations have their place and are used by different types of users looking at the Dashboard. 

How is IQS calculated?

The average of all reviews

IQS = (review1_score + review2_score + ...) / number_of_reviews

Review    Review Score
1         100.00%
2         9.91%
3         63.96%
4         90.99%
5         0.00%

IQS       52.97%

IQS = (100% + 9.91% + 63.96% + 90.99% + 0%) / 5 = 52.97%
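The same calculation as a short Python snippet, using the review scores from the table above:

review_scores = [1.0, 0.0991, 0.6396, 0.9099, 0.0]
iqs = sum(review_scores) / len(review_scores)
print(f"IQS = {iqs:.2%}")  # IQS = 52.97%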

How is the Category score calculated?

A category score is the average of all ratings given in that category. Unlike the review score, a triggered fail category does not zero out the other categories here.

How is CSAT calculated?


Normalise scores:

  • Binary scale (e.g. Good, Bad): 100, 0

  • 3-point scale: 100, 50, 0

  • 4-point scale: 100, 66, 33, 0

  • 5-point scale: 100, 75, 50, 25, 0

Then divide the sum of all normalised responses by the sum of the total possible maximum normalised scores (see the sketch below).
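As a sketch in Python, assuming responses come in as 1 (lowest) to n (highest) on an n-point scale; the rounded 4-point values above are what this formula produces:

def normalise(response, scale_points):
    # map a 1..n response onto the 0..100 range
    return (response - 1) / (scale_points - 1) * 100

# e.g. three responses on a 5-point scale
responses = [5, 4, 2]
normalised = [normalise(r, 5) for r in responses]  # [100.0, 75.0, 25.0]

# sum of normalised responses divided by the sum of maximum possible scores
csat = sum(normalised) / (100 * len(responses))
print(f"CSAT = {csat:.2%}")  # CSAT = 66.67%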
