The dashboard is divided into overall quality indicators and detailed quality indicators. Keep in mind that all data reflects the filters currently applied in the filter area.

You can see more about filtering in the dashboard here.

Be mindful of whether you are looking at “given feedback” or “received feedback”, and thus whether the data describes reviewers or reviewees!

Overall Quality Indicators

These indicators represent the main KPIs:

  • Internal quality score: the average of all scores received OR given over the chosen period
  • Period-over-period change: the improvement (or lack thereof) compared to the previous period
  • Number of tickets rated
  • Number of comments received OR given
  • Data points related to the current period: e.g. daily data points for the weekly quality score
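As a rough sketch of how the first two KPIs relate, assuming the internal quality score is a simple average of the period's ratings (the dashboard's actual formula may weight or normalize scores differently, and the variable names below are illustrative only):

```python
# Hypothetical example data; not taken from the product itself.
current_scores = [80, 90, 100, 70]    # scores received in the current period
previous_scores = [75, 85, 90, 70]    # scores received in the previous period

# Internal quality score: average of all scores in the period (assumed formula).
quality_score = sum(current_scores) / len(current_scores)
previous_quality_score = sum(previous_scores) / len(previous_scores)

# Period-over-period change: current score minus the previous period's score.
period_over_period = quality_score - previous_quality_score

print(quality_score)        # 85.0
print(period_over_period)   # 5.0
```

A positive period-over-period value indicates improvement; a negative one indicates a decline relative to the previous period.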

Which questions do these KPIs answer?

  • What is our current quality score? 
  • How is the score changing over time? 
  • How active are our reviewers? 
  • How many comments are being left compared to the number of tickets rated?

Detailed Quality Indicators: Ratings by Category

The Ratings by Category dashboard card shows the number of ratings either given OR received, along with the corresponding scores. This means it gives you an overview of both the review work reviewers are doing and the ratings that reviewees are receiving.

Which questions does this help to answer?

When “reviews received” is selected: 

  • Who is receiving how many reviews? 
  • Who is being rated much higher/much lower than their peers?
  • Do we need training for specific people on the team? 

When “reviews given” is selected: 

  • Who is giving how many reviews? 
  • Who is rating much higher/much lower than their peers?
  • Do we need to conduct another calibration session?

In general:

  • How many ratings are being given per category? 
  • If you use n/a ratings, are any categories rarely rated or often skipped?
  • How many comments are given? How do comments compare to the number of ratings given? 
  • Which category receives the lowest quality score? Which category receives the highest score? Do we need training material/sessions for specific categories?

Detailed Quality Indicators: Personal Quality Scores Given over Time

This card shows the evolution of quality ratings over time, either by the reviewer giving the reviews or by the agent receiving them. You can view the evolution either as a table or as a graph.

This card is ONLY available to Workspace Reviewers and above. Agents cannot see it.

Which questions does this help to answer?

When “reviews received” is selected: 

  • How are individual agents performing over time? 
  • Are there agents that require specific training? 

When “reviews given” is selected: 

  • How strict are reviewers with their ratings? 
  • Are there any outliers among the reviewers? Do we need to run calibration sessions to make sure reviewers are comparable? 

Detailed Quality Indicators: Scores by Category

This card shows the evolution of quality ratings over time, by categories. You can see the evolution either as a table or as a graph. 

Which questions does this help to answer?

  • How are my categories developing over time? 
  • Have there been any specific dates where a certain category dipped or improved? 
  • Tip: apply a custom filter or filter for a single agent if you want to see individual data per person. 

Detailed Quality Indicators: Scores and Comments by Ticket

This table shows the review ID, a link to the ticket in the helpdesk, the comments given, and the scores per category. Note that this table, too, can be filtered by reviewer or by reviewee. 

Which questions does this help to answer?

When filtering for a specific agent, you can dive into which tickets they excelled on and where they could improve. Note that you can sort the table by average score or by individual categories to see the highest or lowest scores at the top. 

When “reviews received” is selected: 

  • Which tickets specifically has this agent (or this selection of people) been rated on? 
  • Are there any outliers in these tickets? 

When “reviews given” is selected: 

  • How are reviewers rating specific tickets?