Understanding the Inter-Rater Consistency Report
Helping you interpret and use inter-rater consistency insights effectively
The Inter‑Rater Consistency Report helps administrative users evaluate how consistently multiple evaluators score the same scenarios, evaluations, or courses. It is designed to support quality assurance, fair assessment practices, and targeted evaluator training by highlighting agreement patterns and scoring variability across raters.
Before You Get Started
- This report is available only to administrative-level users.
- It analyzes subjective ratings provided by two or more evaluators.
- Results are aggregated and can be reviewed or exported at the following levels:
  - Organizations
  - Courses
  - Scenarios
  - Evaluations
For detailed interpretation guidance and methodology, see the full Inter‑Rater Consistency Report documentation.
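The full documentation covers the report's exact statistical methodology, but a standard agreement measure such as Cohen's kappa illustrates the underlying idea: the agreement actually observed between two raters is compared against the agreement you would expect by chance alone. The sketch below is illustrative only; the pass/fail ratings are made-up data, and SimCapture's report may use a different statistic.

```python
# Illustrative only: percent agreement and Cohen's kappa for two raters
# scoring the same items on a pass/fail checklist. These ratings are
# made-up data, not a SimCapture export.
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass"]

n = len(rater_a)

# Observed agreement: fraction of items where both raters gave the same score.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal score frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_expected = sum(
    (counts_a[cat] / n) * (counts_b[cat] / n)
    for cat in counts_a.keys() | counts_b.keys()
)

# Kappa corrects observed agreement for agreement expected by chance.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Observed agreement: {p_observed:.2f}")  # 0.83
print(f"Cohen's kappa:      {kappa:.2f}")       # 0.67
```

A kappa near 1.0 indicates strong agreement beyond chance, while values near 0 suggest raters agree no more often than random scoring would produce.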
What the Report Shows
The report provides a visual and data‑driven view of evaluator agreement, helping you:
- Identify inconsistent scoring patterns across raters
- Detect potential rating bias or outliers (see the sketch after this list)
- Compare evaluator alignment across multiple courses or scenarios
- Support calibration and training efforts for evaluators
- Monitor scoring consistency as part of ongoing program quality assurance
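One simple way to think about outlier detection is to compare each rater's average score for a scenario against the group mean. The sketch below uses made-up scores and an arbitrary 1.0-point deviation threshold; the report's own outlier logic is not documented here.

```python
# Illustrative only: flag raters whose average score diverges markedly
# from the group mean for the same scenario. Scores and the 1.0-point
# threshold are made-up, not SimCapture's method.
from statistics import mean

scores_by_rater = {
    "Rater 1": [4, 5, 4, 4, 5],
    "Rater 2": [4, 4, 5, 4, 4],
    "Rater 3": [2, 3, 2, 3, 2],  # consistently lower than peers
}

overall_mean = mean(s for scores in scores_by_rater.values() for s in scores)

for rater, scores in scores_by_rater.items():
    deviation = mean(scores) - overall_mean
    flag = " <-- potential outlier" if abs(deviation) > 1.0 else ""
    print(f"{rater}: mean {mean(scores):.1f}, deviation {deviation:+.2f}{flag}")
```

A rater flagged this way is not necessarily wrong; consistent divergence is a prompt for calibration conversations, not a verdict.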
Exporting Report Data
You can export report data for deeper analysis, sharing, or record‑keeping.
Available export options typically include:
- CSV
- Excel
Exports can be filtered by organization, course, scenario, or evaluation to focus on specific programs or timeframes.
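If you export to CSV, the data is easy to analyze further in a tool such as Python with pandas. The sketch below is a minimal example; the filename and column names ("Course", "Scenario", "Agreement") are hypothetical, so match them to the headers in your actual export.

```python
# Illustrative only: loading an exported report CSV for deeper analysis.
# The filename and column names are hypothetical; adjust them to match
# the headers in your actual export file.
import pandas as pd

df = pd.read_csv("inter_rater_consistency_export.csv")

# Focus on one course, then rank scenarios by average evaluator agreement.
course_df = df[df["Course"] == "Adult Trauma Assessment"]
summary = (
    course_df.groupby("Scenario")["Agreement"]
    .mean()
    .sort_values()
)
print(summary)  # lowest-agreement scenarios first: calibration candidates
```

Sorting scenarios from lowest to highest agreement is a quick way to decide where evaluator calibration or training effort is best spent.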