Analyze the results - Dashboard Indicator Cards

Understanding the Dashboard Indicator Cards

Written by Daniel Figueiredo

The Overview Dashboard is divided into overall quality indicators and detailed quality indicators. Keep in mind that all data reflects the filters applied in the filter area; see the separate article on filtering the dashboard for more details.

Be mindful of whether you are looking at “given feedback” or “received feedback”: the former reflects the performance of reviewers, the latter the performance of reviewees!


Overall Quality Indicators

These indicators represent the main KPIs:

  • Internal Quality Score (IQS): the average of all review scores received OR given over the chosen period (see the sketch below this list)

  • Period-over-period change: the improvement (or lack thereof) compared to the previous period

  • CSAT in the selected time period

  • Number of conversations that received a rating

  • Number of comments received OR given on all conversations within this period/filter (replies in a thread are not taken into account)

  • Data points related to the current period: e.g. daily data points for the weekly quality score
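
As a rough illustration of how these headline numbers fit together, here is a minimal Python sketch. It assumes review scores are plain percentages; the variable names, sample numbers, and data layout are illustrative and not part of Klaus itself.

```python
# Minimal sketch: how an Internal Quality Score (IQS) and the
# period-over-period change could be derived from raw review scores.
# The sample numbers below are made up for illustration.

reviews_current = [85, 90, 100, 70, 95]   # review scores (%) in the selected period
reviews_previous = [80, 85, 90, 75]       # review scores (%) in the previous period

def iqs(scores):
    """Internal Quality Score: the plain average of all review scores."""
    return sum(scores) / len(scores)

current_iqs = iqs(reviews_current)        # 88.0
previous_iqs = iqs(reviews_previous)      # 82.5
change = current_iqs - previous_iqs       # +5.5 percentage points

print(f"IQS: {current_iqs:.1f}%  (period-over-period: {change:+.1f} pp)")
```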

Which questions do these KPIs answer? 

  • What is our current quality score? 

  • How is the score changing over time? 

  • How active are our reviewers? 

  • How many comments are being left compared to the number of conversations rated?

Detailed Quality Indicators: Users

The Users card gives a full overview of all your Klaus users and their individual KPIs.

Which questions do these KPIs answer?

Reviews Received

  • Who is receiving how many reviews?

  • Who is getting higher/lower scores than their peers?

Reviews Given

  • Who is giving how many reviews?

  • Who is giving higher/lower scores than their peers?

In general:

  • What's the IQS for each user?

  • How is the score changing over time per agent?

  • How active are our reviewers?

  • How many conversations have been reviewed?

  • How many reviews were done compared to the number of conversations? A single conversation can be reviewed by multiple reviewers; see the sketch after this list.

  • How many comments are given? How do comments compare to the number of reviews given?
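
To make the reviews-versus-conversations distinction concrete, here is a small sketch; the record layout (conversation_id, reviewer) is an assumption made for illustration only.

```python
# Sketch: review count can exceed conversation count, because several
# reviewers may review the same conversation.

reviews = [
    {"conversation_id": "c1", "reviewer": "Alice"},
    {"conversation_id": "c1", "reviewer": "Bob"},   # second review of c1
    {"conversation_id": "c2", "reviewer": "Alice"},
]

total_reviews = len(reviews)                                           # 3
conversations_reviewed = len({r["conversation_id"] for r in reviews})  # 2

print(f"{total_reviews} reviews across {conversations_reviewed} conversations")
```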


Detailed Quality Indicators: Scores by Category

The Scores by Category dashboard card shows the combined average of all category scores (given or received) per individual category for each agent. This means you can get an overview of either the ratings reviewees are receiving or the ratings reviewers are giving.
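
As a loose mental model of this aggregation, the sketch below averages each category's ratings per agent. The record format and sample values are assumptions for illustration, not Klaus's actual data model.

```python
# Sketch: "Scores by Category" as an average of ratings per (agent, category).
from collections import defaultdict

ratings = [
    {"agent": "Alice", "category": "Tone", "score": 100},
    {"agent": "Alice", "category": "Tone", "score": 80},
    {"agent": "Alice", "category": "Accuracy", "score": 60},
    {"agent": "Bob",   "category": "Tone", "score": 90},
]

by_agent_category = defaultdict(list)
for r in ratings:
    by_agent_category[(r["agent"], r["category"])].append(r["score"])

for (agent, category), scores in sorted(by_agent_category.items()):
    print(f"{agent} / {category}: {sum(scores) / len(scores):.1f}%")
```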

Which questions does this help to answer?

Reviews Received

  • Who is being rated much higher/much lower than their peers?

  • Do we need training for specific people on the team?

Reviews Given

  • Who is rating much higher/much lower than their peers?

  • Do we need to conduct another calibration session?

In general:

  • Which category receives the lowest quality score?

  • Which category receives the highest score?

  • Do we need training material/sessions for specific categories?


Detailed Quality Indicators: Scores over time

This card shows the evolution of quality scores over time, either by reviewer or by the agent receiving the reviews. You can view the evolution either as a table or as a graph.

This card is ONLY available to Workspace Reviewers and above; Agents cannot see it.

Which questions does this help to answer?

Reviews Received

  • How are individual agents performing over time?

  • Are there any agents that require specific training?

Reviews Given

  • How strict are reviewers with their ratings?

  • Are there any outliers in terms of general reviews? Do we need to do calibration sessions to make sure reviewers are comparable?

Detailed Quality Indicators: Category Scores over time

This card shows the evolution of quality ratings over time, by categories. You can see the evolution either as a table or as a graph. 

Which questions does this help to answer?

  • How are my categories developing over time?

  • How many ratings are being given per category?

  • If using n/a, are any of these categories rarely rated or often skipped? (See the sketch below this list.)

  • Have there been any specific dates where a certain category dipped or improved? 

You can also apply a custom filter or select a single agent if you want to see individual data per person.
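
For intuition, here is a rough sketch of per-category bucketing over time, including how skipped (n/a) ratings could be left out of the averages. The dates, categories, and the use of None for n/a are assumptions made for this example.

```python
# Sketch: average category scores per day, excluding n/a (None) ratings.
from collections import defaultdict

ratings = [
    {"date": "2024-05-01", "category": "Tone", "score": 100},
    {"date": "2024-05-01", "category": "Grammar", "score": None},  # n/a, skipped
    {"date": "2024-05-02", "category": "Tone", "score": 80},
]

buckets = defaultdict(list)
skipped = defaultdict(int)
for r in ratings:
    if r["score"] is None:          # n/a ratings are excluded from the average
        skipped[r["category"]] += 1
        continue
    buckets[(r["date"], r["category"])].append(r["score"])

for (date, category), scores in sorted(buckets.items()):
    avg = sum(scores) / len(scores)
    print(f"{date} {category}: {avg:.0f}% ({len(scores)} rating(s))")

print("Skipped as n/a:", dict(skipped))
```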

Detailed Quality Indicators: Scores and Comments by conversations

This table shows the review ID, a link to the conversation on the help desk, comments given and the scores per category. Note that these, too, can be filtered by reviewer or by reviewee. 

Which questions does this help to answer?

When filtering for a specific agent, you can dive into which conversations they excelled at and where they could improve. Note that you can sort the table by review score or by individual categories to see the highest or lowest scores at the top.

Reviews Received

  • Which conversations specifically has this agent (or this selection of people) been rated on?

Reviews Given

  • How are reviewers rating specific conversations?

  • Are there any outliers in these conversations?
