What is calibration?


Calibration is the practice of all of your reviewers grading the same batch of tickets and then comparing the scores and comments they left. It should be done regularly to make sure your reviewers are on the same page when giving reviews, so that your agents receive consistent reviews each week regardless of who is reviewing them.

How do Klaus users do it?

Klaus users who are already doing calibration approach it in quite different ways.

For example, one team does monthly calibrations with their support managers, who are responsible for leaving reviews.

Another team, on the other hand, does peer reviews and includes all agents in the calibration process on a quarterly basis.


How can you set up Klaus for calibration?

The best way to set up calibration in Klaus is to create a separate account and call it something like 'Calibration'. This ensures that your agents aren't able to see these reviews.

Note: This second account will be free of charge. Get in touch with our support for help with setting this up.

Once you have created the new account, you should:

  1. Connect your help desk to this new account
  2. Invite your reviewers and set their role to Workspace Reviewer
  3. Set up your scorecard to match the one in your main account
  4. Create filters for your team to use to find the right tickets to calibrate. Until we roll out the ticket assignment feature, the easiest way is to tag about 5-10 tickets in your help desk with a specific tag, e.g. “calibration-dec2019” (see the sketch after this list).

    Create a filter with just these tickets and ask all of your reviewers to review the tickets in this filter. Since they can’t see what others rated (while reviewing), every reviewer can approach each ticket with a fresh perspective.
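
If you'd rather script the sampling step than pick tickets by hand, a minimal sketch in Python is shown below. It assumes you can export a list of recent ticket IDs from your help desk; the ticket IDs, sample size, and tag name are purely illustrative.

    import random

    def pick_calibration_sample(ticket_ids, sample_size=8, seed=None):
        """Pick a random sample of ticket IDs to tag for calibration."""
        rng = random.Random(seed)
        # Never ask for more tickets than exist in the export.
        return rng.sample(ticket_ids, min(sample_size, len(ticket_ids)))

    # Illustrative ticket IDs exported from your help desk (made up).
    recent_ticket_ids = [10234, 10241, 10250, 10263, 10271, 10288, 10290, 10301, 10315, 10322]

    # Tag the printed IDs in your help desk with e.g. "calibration-dec2019".
    print(pick_calibration_sample(recent_ticket_ids, sample_size=5, seed=42))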


Once you have completed these steps, you can calibrate as a team. The scores and comments of each reviewer will remain private and won't be visible to users with the role Workspace Reviewer or Agent.

The dashboard allows you to both review the scores your agents received and see how your reviewers are scoring. This makes it much easier to run calibration sessions and to quickly spot-check whether your reviewers are aligned on your rating categories.

You can see the results on the dashboard (the place to analyze data) by switching all data to “given feedback” in the filter row. The dashboard card “rating by reviewers” then lets you compare reviewers based on how severely or how leniently they rate. If someone is giving significantly higher or lower ratings than everyone else, it’s time to organize a calibration session to make sure reviews are fair and comparable across the board.
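
The comparison behind that dashboard card boils down to each reviewer's average score versus the team overall. If you want to sanity-check the same idea on an export of given reviews, a rough sketch could look like this (the reviewer names and scores below are made up for illustration):

    from statistics import mean

    # Each entry: (reviewer, score given), e.g. from a CSV export of reviews.
    # These values are made up purely for illustration.
    reviews = [
        ("Alice", 90), ("Alice", 85), ("Alice", 95),
        ("Bob", 60), ("Bob", 65), ("Bob", 70),
        ("Carol", 88), ("Carol", 92),
    ]

    overall = mean(score for _, score in reviews)

    by_reviewer = {}
    for reviewer, score in reviews:
        by_reviewer.setdefault(reviewer, []).append(score)

    for reviewer, scores in by_reviewer.items():
        gap = mean(scores) - overall
        # A large gap suggests this reviewer rates noticeably more severely
        # or more leniently than the rest of the team.
        print(f"{reviewer}: avg {mean(scores):.1f} (gap vs team {gap:+.1f})")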
