Inter-Rater Reliability

What is Inter-Rater Reliability?

Inter-rater reliability is the degree of agreement among independent reviewers who rate or assess the same body of work.

Rubrics and other assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise, their results cannot be trusted. A number of statistical methods can be used to calculate inter-rater reliability, and different statistics are appropriate for different types of measurement: for example, Cohen's kappa is commonly used for two raters assigning categorical ratings, Fleiss' kappa for three or more raters, and the intraclass correlation coefficient (ICC) for continuous scores.
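
iRubric computes these statistics for you, but it can help to see how one of them works. Below is a minimal, self-contained Python sketch of Cohen's kappa for two raters; the function name and the sample ratings are illustrative, not part of iRubric.

  from collections import Counter

  def cohens_kappa(rater_a, rater_b):
      """Cohen's kappa for two raters assigning categorical labels to the same items."""
      n = len(rater_a)

      # Observed agreement: the fraction of items where both raters agree.
      p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

      # Expected chance agreement, from each rater's marginal label frequencies.
      freq_a, freq_b = Counter(rater_a), Counter(rater_b)
      p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())

      # Kappa rescales observed agreement to correct for chance agreement.
      return (p_o - p_e) / (1 - p_e)

  # Hypothetical example: two raters scoring six essays on a 3-level rubric scale.
  a = ["high", "high", "medium", "low", "medium", "high"]
  b = ["high", "medium", "medium", "low", "medium", "high"]
  print(round(cohens_kappa(a, b), 3))  # 0.739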


How to generate Inter-Rater Reliability Reports in iRubric

Inter-Rater Reliability reports, available in the RCampus Enterprise Edition, can be generated for a rubric or for a multi-rater assessment.

IRR for a Rubric

  1. Open a rubric in the gallery for viewing
  2. Click on the Inter-Rater Reliability button at the top of the screen
  3. Follow the screens to generate the Inter-Rater Reliability report

IRR for a Multi-Rater Assessment

  1. Open a list of assessments (menu: rubrics > assessments)
  2. Click on the number under the Completed column for a given assessment.
    • If no number is shown, you do not have access to this report.
  3. After opening the report, click on the Inter-Rater Reliability button at the top of the screen.

Filtering data

Using the filter icon at the top of the IRR screen, you can limit the data used to generate the IRR statistics. For example, if you want to see the report for a given date range, you can specify the start and end dates.
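
The filtering happens inside iRubric, but if you have downloaded the underlying data (see Downloading Data below), you can apply the same kind of date-range filter yourself. The sketch below assumes a hypothetical CSV export; the file name and the column name scored_on are assumptions for illustration, not the actual iRubric export format.

  import pandas as pd

  # Hypothetical export of assessment data; the file name and the
  # scored_on column are assumptions, not the actual iRubric format.
  scores = pd.read_csv("assessment_scores.csv", parse_dates=["scored_on"])

  # Keep only ratings recorded within the date range of interest.
  start, end = pd.Timestamp("2024-01-01"), pd.Timestamp("2024-03-31")
  in_range = scores[scores["scored_on"].between(start, end)]

  print(f"{len(in_range)} of {len(scores)} ratings fall in the selected range")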


How to Read IRR Reports

IRR reports display the results in both chart and numeric form. Please read the online description provided for each type of IRR statistic.
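
As one reading aid: many IRR statistics are kappa-style coefficients, where 1.0 means perfect agreement, 0 means agreement no better than chance, and negative values mean worse than chance. The sketch below applies the widely cited Landis and Koch (1977) rule of thumb for labeling kappa values; these bands are a common convention, not a scale built into iRubric.

  def interpret_kappa(kappa):
      """Label a kappa value using the Landis and Koch (1977) rule of thumb.

      These bands are a widely cited convention, not a scale built into
      iRubric; always weigh them against the measurement context.
      """
      if kappa < 0.00:
          return "poor (worse than chance)"
      if kappa <= 0.20:
          return "slight"
      if kappa <= 0.40:
          return "fair"
      if kappa <= 0.60:
          return "moderate"
      if kappa <= 0.80:
          return "substantial"
      return "almost perfect"

  print(interpret_kappa(0.739))  # substantial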


Downloading Data

At the top of the screen, you can download the input data that was used to generate the IRR report. You can also download the IRR results by clicking on the Excel or Word icons at the top of the data table.
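
Once downloaded, the input data can be checked independently. The sketch below assumes a hypothetical Excel export with one column per rater; the file name and the rater_1/rater_2 column names are assumptions, not the actual iRubric file layout.

  import pandas as pd

  # Hypothetical downloaded export; the file name and the rater_1/rater_2
  # column names are assumptions, not the actual iRubric file layout.
  data = pd.read_excel("rubric_irr_data.xlsx")

  # Simple percent agreement as a quick sanity check; for chance-corrected
  # agreement, reuse the cohens_kappa sketch shown earlier.
  agreement = (data["rater_1"] == data["rater_2"]).mean()
  print(f"Raters agree on {agreement:.1%} of items")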