SimCapture Enterprise with Exam System: Inter-Rater Consistency Report
Discover how to generate and interpret the Inter-Rater Consistency Report using SimCapture Enterprise with Exam System.
The Inter-Rater Consistency Report helps you evaluate how consistently multiple raters score the same learners. This is essential for ensuring fairness and reliability in assessments.
What the Report Measures
At the core of the report is the Skill Area column, where each entry includes an "n=" value. This number represents how many evaluations that rater has completed. A higher "n" (e.g., 200) yields a more statistically reliable consistency score than a lower one (e.g., 2), because a rater's average settles down as more evaluations accumulate; the sketch below illustrates this.
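A minimal illustration of why "n" matters, using the standard error of the mean (the expected sampling noise in a rater's average). The raters and scores here are hypothetical:

```python
import statistics

def standard_error(scores):
    """Standard error of the mean: how much a rater's average score is
    expected to wobble from sampling noise alone. Shrinks as n grows."""
    return statistics.stdev(scores) / (len(scores) ** 0.5)

# Two hypothetical raters with similar score spread but very different n:
rater_n2 = [72, 88]                                    # n = 2
rater_n200 = [75 + (i % 21) - 10 for i in range(200)]  # n = 200

print(round(standard_error(rater_n2), 2))    # ~8.0: mean is barely informative
print(round(standard_error(rater_n200), 2))  # ~0.4: mean is far more stable
```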
Accessing and Filtering the Report
To access the report:
- Navigate to Reports in the global navigation bar.
- Select Inter-rater Consistency.
You can filter the report by:
- Date range: Last 30 days, Last 90 days, Year to Date, or All Time
- Organization
- Course
- Scenario
- Evaluation
Once filtered, the report can be exported to Excel via Export Data > Inter-rater Consistency.
Note: Exported usernames are formatted as Last Name, First Name, Middle Name (if applicable).
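If you post-process the export, that name format can be split back into its parts. A minimal sketch, assuming the exported string uses comma-separated parts exactly as described (names that themselves contain commas would need extra care):

```python
def parse_export_name(raw: str):
    """Split an exported username of the form
    'Last Name, First Name, Middle Name (if applicable)'."""
    parts = [p.strip() for p in raw.split(",")]
    last, first = parts[0], parts[1]
    middle = parts[2] if len(parts) > 2 else None
    return last, first, middle

print(parse_export_name("Doe, Jane, Marie"))  # ('Doe', 'Jane', 'Marie')
print(parse_export_name("Smith, Alex"))       # ('Smith', 'Alex', None)
```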
Included vs Excluded Evaluations
When filtering by Organization, Course, or Scenario, the report includes:
- Patient Evaluations
- Monitor Evaluations
- Scoring Evaluations
It excludes:
- Participant Evaluations
- Course Evaluations (as these are self-assessments)
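For illustration only, the inclusion rule in the two lists above could be reproduced on raw records as in the sketch below. The "Evaluation Type" field and the record layout are hypothetical, not SimCapture's actual data model:

```python
# Hypothetical reproduction of the report's inclusion rule.
INCLUDED_TYPES = {"Patient", "Monitor", "Scoring"}
EXCLUDED_TYPES = {"Participant", "Course"}

def is_rateable(record: dict) -> bool:
    """Keep only evaluation types the report counts toward consistency."""
    return record.get("Evaluation Type") in INCLUDED_TYPES

evaluations = [
    {"Evaluation Type": "Patient", "score": 84},
    {"Evaluation Type": "Course", "score": 91},  # self-assessment: excluded
]
print([e for e in evaluations if is_rateable(e)])
```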
What's in the Excel Export?
The exported report includes:
- Question categories used in the filter
- Standardized patient graders
- Z-scores per grader by question category
- Mean and standard deviation per category
- Overall Z-score and average score per grader
Color Coding for Z-scores:
In the SimCapture UI, Z-score highlights appear in light and dark orange.
The Excel file contains two pages:
- Page 1: Report data
- Page 2: Applied filters
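If you analyze the export programmatically, the two pages can be loaded as separate sheets. A minimal sketch, assuming pandas (with openpyxl installed); the file name is a placeholder for your downloaded export:

```python
import pandas as pd

# Placeholder file name; use the path of your downloaded export.
xlsx = pd.ExcelFile("inter_rater_consistency.xlsx")

report = pd.read_excel(xlsx, sheet_name=0)   # Page 1: report data
filters = pd.read_excel(xlsx, sheet_name=1)  # Page 2: applied filters

print(report.head())
print(filters)
```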
Z-Scores
A Z-score represents how far a specific value is from the mean of a dataset, measured in standard deviations. In this report, Z-scores are color-coded to visually indicate how much a score deviates from the expected average, which makes it easier to compare values, even across different distributions.
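For example, if a question category has a mean score of 80 with a standard deviation of 5, a grader whose average is 90 sits two standard deviations above the mean:

\[ \text{Z-score} = \frac{90 - 80}{5} = 2 \]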
Z-Score color codes and their meanings
Each color highlights the degree of deviation from the mean:
- Dark Red: Indicates a Z-score of -2 (2 standard deviations below the mean)
- Light Red: Indicates a Z-score of -1 (1 standard deviation below the mean)
- Dark Green: Indicates a Z-score of 2 (2 standard deviations above the mean)
- Light Green: Indicates a Z-score of 1 (1 standard deviation above the mean)
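A minimal sketch of this legend as code, assuming each color marks scores at or beyond the stated number of standard deviations (the report lists only the threshold values themselves):

```python
def z_score_color(z: float) -> str:
    """Map a Z-score to the report's shading, treating each listed value
    as an 'at or beyond' threshold (an assumption, not stated in the doc)."""
    if z <= -2:
        return "dark red"
    if z <= -1:
        return "light red"
    if z >= 2:
        return "dark green"
    if z >= 1:
        return "light green"
    return "none"  # within one standard deviation of the mean

for z in (-2.3, -1.1, 0.4, 1.5, 2.7):
    print(z, "->", z_score_color(z))
```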
Z-Score calculations
The report calculates the Z-score in two ways, which helps identify how an evaluator's scoring compares to the broader group: within specific question categories, and across all evaluations.
- Evaluator and Question Categories:
\[ \text{Z-score} = \frac{\text{evaluator mean per question category} - \text{mean of the question category}}{\text{standard deviation of the question category}} \]
- Evaluator and Overall Categories:
\[ \text{Z-score} = \frac{\text{evaluator mean across all their evaluations} - \text{mean of all evaluations}}{\text{standard deviation of all evaluations}} \]
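Both calculations can be sketched in a few lines. Everything below is hypothetical (the evaluators, categories, and scores), and the population standard deviation is used because the report does not specify which variant it applies:

```python
import statistics
from collections import defaultdict

# Hypothetical evaluation records: (evaluator, question_category, score).
evals = [
    ("Avery", "Communication", 82), ("Avery", "Communication", 78),
    ("Blake", "Communication", 95), ("Blake", "Communication", 91),
    ("Avery", "Procedure", 70),     ("Blake", "Procedure", 74),
]

def z(value, population):
    """Z-score of `value` against the mean and (population) standard
    deviation of `population`."""
    return (value - statistics.mean(population)) / statistics.pstdev(population)

# Evaluator and Question Categories: compare each evaluator's mean within a
# category against the mean/standard deviation of that category's scores.
by_category = defaultdict(list)
for evaluator, category, score in evals:
    by_category[category].append((evaluator, score))

for category, rows in by_category.items():
    category_scores = [s for _, s in rows]
    for evaluator in sorted({e for e, _ in rows}):
        own_mean = statistics.mean([s for e, s in rows if e == evaluator])
        print(category, evaluator, round(z(own_mean, category_scores), 2))

# Evaluator and Overall Categories: compare each evaluator's mean across all
# their evaluations against the mean/standard deviation of every evaluation.
all_scores = [s for _, _, s in evals]
for evaluator in sorted({e for e, _, _ in evals}):
    own_mean = statistics.mean([s for e, _, s in evals if e == evaluator])
    print("Overall", evaluator, round(z(own_mean, all_scores), 2))
```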