This card shows the ratings (given or received) per individual category for each agent, for the time period selected in the filter. When reading the underlying values, keep in mind that the number of individual ratings per category may differ from the number of reviews. This is usually the case if you allow n/a in your rating practice.

For example, in the above screenshot, the third agent has three reviews; however, the reviewer(s) might not have rated every category. The average for each category is always calculated based on the number of ratings given for that specific category.

### Formula 1: Average score for this specific category

This cell shows the average score for this specific category and this user. It is calculated by adding up all ratings for this category and dividing the sum by the number of ratings that were given. If a user got three ratings for category one, the average of these three ratings constitutes the category one score.

We can represent this as: AVG(r(cat1)u1), where r = rating, cat = category and u = user.
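As a minimal sketch, this average can be computed as below. The ratings list is hypothetical, with `None` standing in for an n/a rating, which is excluded from the calculation entirely:

```python
# Hypothetical ratings for one user in one category; None marks "n/a",
# which does not count toward the average at all.
ratings_cat1_u1 = [80, 100, None]

# Keep only the ratings that were actually given.
given = [r for r in ratings_cat1_u1 if r is not None]

# AVG(r(cat1)u1): sum of the given ratings divided by their count.
avg_cat1_u1 = sum(given) / len(given)
print(avg_cat1_u1)  # 90.0
```

Note that the divisor is `len(given)`, not the number of reviews: an n/a rating reduces the count, which is why categories with many n/a answers can swing on just a few ratings.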

### Formula 2: Average score for this category, across all users

This cell shows the average score for this specific category across all users. This number isn’t necessarily the average of the column above, since the number of ratings received by each agent may vary. If agent A has five ratings and agent B has only one, this difference in the number of ratings is taken into account. Hence, it’s the average of all ratings given in that category, not the average of the individual agents’ averages.

We can represent this as: AVG(r(cat1)), where r = rating, cat = category
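The distinction matters whenever agents have different rating counts. A small sketch with hypothetical numbers shows how the pooled average and the average-of-averages diverge:

```python
# Hypothetical category-1 ratings per agent. Agent A has five ratings,
# agent B has one, so the two ways of averaging give different results.
ratings = {
    "agent_a": [100, 100, 100, 100, 100],
    "agent_b": [40],
}

all_ratings = [r for rs in ratings.values() for r in rs]

# AVG(r(cat1)): every rating counts once, regardless of which agent got it.
pooled_avg = sum(all_ratings) / len(all_ratings)
print(pooled_avg)  # 90.0

# Averaging the agents' averages would give agent B's single rating
# as much influence as agent A's five ratings combined.
avg_of_avgs = sum(sum(rs) / len(rs) for rs in ratings.values()) / len(ratings)
print(avg_of_avgs)  # 70.0
```

Formula 2 uses the pooled average, so agents with more ratings pull the category score harder.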

### Formula 3: Weighted quality score for this individual agent

This is the quality score for this individual agent for the selected time period, across all categories that were rated. “Weighted” means that we take into account the weight of each category. If all of your categories have a weight of `1` this won’t affect you much, but if you use weight to give different importance to categories, this part is important.

We can represent this as: (AVG(r(cat1)u1)\*w1\*N(r(cat1)) + AVG(r(cat2)u1)\*w2\*N(r(cat2)) + AVG(r(cat3)u1)\*w3\*N(r(cat3)) + … + AVG(r(catn)u1)\*wn\*N(r(catn))) / (w1\*N(r(cat1)) + w2\*N(r(cat2)) + w3\*N(r(cat3)) + … + wn\*N(r(catn))), where r = rating, cat = category, u = user, w = weight, N = number of ratings
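Under the assumption that each rating is weighted by its category's weight and counted once, the weighted score can be sketched as follows. The category averages, weights, and counts here are hypothetical:

```python
# Hypothetical per-category data for one agent:
# (average rating, category weight, number of ratings given).
categories = [
    (90.0, 1, 3),   # category 1: three ratings, weight 1
    (60.0, 2, 1),   # category 2: one rating, weight 2
    (100.0, 1, 2),  # category 3: two ratings, weight 1
]

# Numerator: each category's average scaled by its weight and rating count.
numerator = sum(avg * w * n for avg, w, n in categories)

# Denominator: total weight-adjusted number of ratings.
denominator = sum(w * n for _, w, n in categories)

weighted_score = numerator / denominator
print(round(weighted_score, 2))  # 84.29
```

With equal weights of `1` this collapses to a plain pooled average of all ratings; raising a category's weight makes each of its ratings count proportionally more.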

### Formula 4: Quality score of all selected agents, without including "fails"

This number is equal to the quality score of all selected agents in the selected time frame. This means that, for each category, it takes into account the number of ratings that were given and the category’s weight before adding everything up.

Note that the average does not take into account that an agent may have failed some tickets entirely due to critical categories. That’s why the score here is usually higher than the agents’ individual scores.

The formula can be represented as: AVG(scores)
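As a rough sketch, assuming the ratings of all selected agents are pooled into one weighted average (and that fail verdicts have already been excluded, per the note above), the calculation looks like this. The agents, ratings, and weights are hypothetical:

```python
# Hypothetical ratings across all selected agents, flattened to
# (rating, category_weight) pairs. Fails are assumed to be excluded
# before this step, which is why this pooled score can come out
# higher than the agents' individual scores.
ratings = [
    (90, 1), (100, 1),   # agent A's ratings
    (60, 2), (80, 1),    # agent B's ratings
]

# Same weighted average as the per-agent formula, just pooled over everyone.
score = sum(r * w for r, w in ratings) / sum(w for _, w in ratings)
print(score)  # 78.0
```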

## Example calculation

This Google Sheet includes example calculations. Click into the cells to see the exact formula.