The dashboard lets you review both the scores your agents received and how your reviewers are scoring. This makes it much easier to run calibration sessions and to quickly spot-check whether your reviewers are aligned on your rating categories.

While rating a ticket, the reviewer doesn't see previous ratings given to that specific ticket. This ensures a completely fresh perspective from each reviewer, without previous ratings influencing the current decision, consciously or unconsciously. It also means you can ask several reviewers to rate the same tickets and compare the results.

You can see the results on the dashboard (the place to analyze your data) by switching the view to “given feedback” in the filter row. The dashboard card “rating by reviewers” lets you compare reviewers based on how severely or how leniently they rate. If someone is giving significantly higher or lower ratings than everyone else, it’s time to organize a calibration session to make sure reviews are fair and comparable across the board.
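
If you export your given feedback and want to double-check reviewer severity yourself, a minimal sketch like the one below can compute each reviewer's average rating and flag outliers. This is only an illustration, not part of the product: the CSV column names ("reviewer", "rating") and the 0.5-point outlier threshold are assumptions.

```python
# Illustrative sketch only: compute each reviewer's average rating from an
# exported CSV of given feedback. The column names "reviewer" and "rating"
# are assumed, not the product's actual export schema.
import csv
from collections import defaultdict

def reviewer_averages(path):
    totals = defaultdict(lambda: [0.0, 0])  # reviewer -> [sum of ratings, count]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            entry = totals[row["reviewer"]]
            entry[0] += float(row["rating"])
            entry[1] += 1
    return {name: total / count for name, (total, count) in totals.items()}

averages = reviewer_averages("given_feedback.csv")
overall = sum(averages.values()) / len(averages)
for name, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    # 0.5 is an arbitrary cutoff for "significantly higher or lower" here.
    flag = "  <- discuss in a calibration session" if abs(avg - overall) > 0.5 else ""
    print(f"{name}: {avg:.2f}{flag}")
```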

How to run a calibration session?

Until we roll out the ticket assignment feature, the easiest way is to tag about 5-10 tickets in your helpdesk with a specific tag, e.g. “calibration-dec2019”. Create a filter with just these tickets and ask all of your reviewers to review the tickets in that filter. Since they can’t see what others rated while reviewing, every reviewer can approach each ticket with a fresh perspective.

Once everybody has rated the tickets, compare the results and discuss where you see the biggest discrepancies.
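
To surface those discrepancies quickly, you could line up every reviewer's rating per ticket and sort by the spread between the highest and lowest score, as in the sketch below. The data layout (ticket id mapped to reviewer ratings) and the sample names are assumptions for illustration, not an export format of the product.

```python
# Illustrative sketch: compare how reviewers rated the same calibration
# tickets and surface the biggest discrepancies. The nested-dict layout
# (ticket id -> reviewer -> rating) is an assumed input format.
ratings = {
    "ticket-101": {"Alice": 4, "Bob": 2, "Carol": 4},
    "ticket-102": {"Alice": 5, "Bob": 5, "Carol": 4},
    "ticket-103": {"Alice": 3, "Bob": 1, "Carol": 5},
}

# Sort tickets by the spread between the highest and lowest rating given,
# so the ones most worth discussing come first.
by_spread = sorted(
    ratings.items(),
    key=lambda item: max(item[1].values()) - min(item[1].values()),
    reverse=True,
)

for ticket, scores in by_spread:
    spread = max(scores.values()) - min(scores.values())
    print(f"{ticket} (spread {spread}): {scores}")
```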
