Inter-Rater Reliability

== What is Inter-Rater Reliability ==

Inter-rater reliability is the degree of agreement among independent reviewers who rate or assess the same body of work.
 
Rubrics and other assessment tools that rely on ratings must exhibit good [https://info.rcampus.com/blog/2022/03/04/rubric-reliability inter-rater reliability]; otherwise, their evaluations cannot be considered reliable. A number of statistical methods can be used to calculate inter-rater reliability, and different statistics are appropriate for different types of measurement.
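For example, when two reviewers assign categorical rubric levels to the same set of submissions, Cohen's kappa is a commonly used agreement statistic. The Python sketch below illustrates the calculation; the rating levels and data are hypothetical and are not drawn from iRubric.

<pre>
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater_a)

    # Observed agreement: fraction of items the two raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement, from each rater's marginal distribution of levels.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[level] / n) * (counts_b[level] / n)
                   for level in counts_a.keys() | counts_b.keys())

    return (observed - expected) / (1 - expected)

# Hypothetical rubric levels assigned by two raters to ten submissions.
rater_1 = ["Exceeds", "Meets", "Meets", "Below", "Meets",
           "Exceeds", "Below", "Meets", "Meets", "Exceeds"]
rater_2 = ["Exceeds", "Meets", "Below", "Below", "Meets",
           "Meets", "Below", "Meets", "Meets", "Exceeds"]

print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.68
</pre>

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance. For ordinal rubric scales, a weighted kappa or an intraclass correlation coefficient may be more appropriate.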
 
== How to generate Inter-Rater Reliability Reports in iRubric ==
 
[[iRubric]] Enterprise Edition has built-in functionality for Inter-Rater Reliability reports.  To generate this report:
 
# Open a rubric in the gallery for viewing
# Click the Inter-Rater Reliability button at the top of the screen
# Follow the screens to generate the Inter-Rater Reliability report
 
[[Category:Rubrics]]
 
[[Category:Reports]]
 