SimCapture Enterprise with Exam System: How to generate and interpret the Inter-rater consistency report
Discover how to generate and interpret a report on Inter-rater consistency using SimCapture Enterprise with Exam System.
In the Skill area column of this report, "n=" indicates the number of evaluations the user has completed that contribute to their consistency score. An n of 2 is far less reliable than an n of 200.
You can find the Inter-rater Consistency report by clicking on Reports in the global navigation bar. This report visually displays the agreement among subjective ratings from multiple raters.
Users can select a date range (Last 30 days, Last 90 days, Year to Date, or All Time) and choose from multiple organizations, courses, scenarios, or evaluations. The report will aggregate the data, which can then be exported to an Excel spreadsheet by clicking Export Data > Inter-rater Consistency.
Note: The downloaded Excel sheet will format the Username as Last Name, First Name, and Middle Name (if applicable).
IMPORTANT
When filtering by Organization, Course, or Scenario only, all Patient Evaluations, Monitor Evaluations, and Scoring Evaluations that meet these criteria will be included. However, Participant Evaluations and Course Evaluations are excluded from this calculation, because they assess the same individual and therefore cannot measure agreement between raters.
Note: If your selections yield no useful data, a message will appear stating, "Based on your filters, there are no evaluations or meaningful analysis".
Once the report is generated, the Excel document will contain:
- Question categories included in the filter
- Standardized Patient graders
- Z Score for each grader by Question Category
- Mean and Standard Deviation for each Question Category
- Overall Z Score by Grader
- Mean score for each grader
The export is color-coded in red or green to flag Z-scores that fall 1 or 2 standard deviations below or above the mean; the same values appear in light and dark orange in the UI.
This information will be presented over two pages, with the second page detailing the filters applied during the export.
Z-Scores
A Z-score, color-coded in the export, indicates a score's position on a normal distribution curve, expressing how far it lies from the mean of a group of values. Z-scores are significant because they allow comparison between two scores that do not belong to the same distribution.
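To make the comparison concrete, here is a minimal illustration (hypothetical numbers, not SimCapture data): two graders whose raw scores come from different distributions can still be compared once each score is converted to a Z-score.

```python
def z_score(value, mean, std_dev):
    """Position of a value relative to its distribution, in standard deviations."""
    return (value - mean) / std_dev

# Grader A scored 85 in a cohort with mean 80 and standard deviation 5.
# Grader B scored 70 in a cohort with mean 60 and standard deviation 10.
a = z_score(85, 80, 5)
b = z_score(70, 60, 10)
print(a, b)  # both 1.0: the scores are comparable despite different raw scales
```

Although the raw scores (85 vs. 70) differ, both graders sit exactly 1 standard deviation above their respective means.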
Z-Score color codes and their meanings
The following color codes highlight significant differences between the expected value and the actual value:
- Dark Red: Indicates a Z-score of -2, meaning it is 2 standard deviations below the mean.
- Light Red: Indicates a Z-score of -1, meaning it is 1 standard deviation below the mean.
- Dark Green: Indicates a Z-score of 2, meaning it is 2 standard deviations above the mean.
- Light Green: Indicates a Z-score of 1, meaning it is 1 standard deviation above the mean.
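The color bands above can be sketched as a simple threshold function. Note this is an illustrative assumption about how the bands are applied, not SimCapture's actual export logic; the exact boundary handling may differ.

```python
def color_for(z):
    """Map a Z-score to the export highlight color described above (assumed thresholds)."""
    if z <= -2:
        return "dark red"     # 2 or more standard deviations below the mean
    if z <= -1:
        return "light red"    # 1 standard deviation below the mean
    if z >= 2:
        return "dark green"   # 2 or more standard deviations above the mean
    if z >= 1:
        return "light green"  # 1 standard deviation above the mean
    return None               # within 1 standard deviation: no highlight
```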
Z-Score calculations
The Z-score is calculated in two ways when using this report:
- Evaluator and Question Categories:
  Z-score = (evaluator's mean for the question category − mean of the question category) / (standard deviation of the question category)
- Evaluator and Overall Categories:
  Z-score = (evaluator's mean across all their evaluations − mean of all evaluations) / (standard deviation of all evaluations)
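The two calculations can be sketched as follows. The data structure and rater names are hypothetical (this is not the SimCapture API), and the population standard deviation is an assumption; it illustrates only the arithmetic described above.

```python
from statistics import mean, pstdev

# Hypothetical data: scores[evaluator][question_category] -> list of scores given
scores = {
    "Rater A": {"History Taking": [4, 5, 4], "Physical Exam": [3, 3]},
    "Rater B": {"History Taking": [2, 3, 3], "Physical Exam": [5, 4]},
}

def category_z_scores(scores, category):
    """Z-score per evaluator for one question category (QC):
    (evaluator mean per QC - mean of the QC) / standard deviation of the QC."""
    all_scores = [s for ev in scores.values() for s in ev.get(category, [])]
    mu, sigma = mean(all_scores), pstdev(all_scores)
    return {rater: (mean(ev[category]) - mu) / sigma
            for rater, ev in scores.items() if category in ev}

def overall_z_scores(scores):
    """Overall Z-score per evaluator:
    (evaluator mean across all their evals - mean of ALL evals) / SD of ALL evals."""
    all_scores = [s for ev in scores.values() for cat in ev.values() for s in cat]
    mu, sigma = mean(all_scores), pstdev(all_scores)
    return {rater: (mean([s for cat in ev.values() for s in cat]) - mu) / sigma
            for rater, ev in scores.items()}
```

With the sample data, Rater A grades above the group mean for History Taking and Rater B below it, so their Z-scores have opposite signs.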