Rubric Assessment Modes

From RCampus Wiki
Revision as of 17:28, 7 April 2026 by Admin2 (talk | contribs)

RCampus supports four assessment modes, each designed for a different workflow. Choose a mode when setting up an assignment — it determines how raters interact with the rubric and how final scores are calculated.


Quick Comparison

| Mode | Who rates | Sees prior ratings? | How final score is set |
|------|-----------|---------------------|------------------------|
| Single/Sequential | One or more raters | Yes (shared view) | The active assessment is the final score |
| Multi-Rater/Juried | Multiple raters independently | No (blind to each other) | Scores are averaged after all raters submit |
| Collaborative | Multiple raters, each owns sections | Only their own sections | Sections are combined into one rubric |
| Moderated (iRubric Rapid only) | Multiple raters + a designated moderator | Moderator sees all; raters are blind | Moderator issues the final score |


Single/Sequential Assessment

In Single/Sequential mode, there is one assessment per evaluatee. A single rater can complete it entirely, or multiple raters can work on it in sequence, each revising the previous rater's saved work.

Typical use: Solo rater or shared rating within a team
Final score: Whatever is saved in the rubric at the time of publishing
Self-assessment: Available as a separate tab
Combine step required? No

Example: A course has two co-instructors. Either one can open an evaluatee's rubric and score it. If both make changes, the last saved version is what the evaluatee sees.


Multi-Rater/Juried Assessment

In Multi-Rater/Juried mode, each rater scores evaluatees independently.

Typical use: Jury-style assessment, inter-rater reliability studies, large assignments with a shared rating load
Final score: Average of all submitted rater assessments
Self-assessment: Available as a separate tab; not included in the average
Combine step required? Yes; a rater must click "Calculate final grades"

Example: Five raters share a cohort of 30 evaluatees. Each rater scores roughly 6 evaluatees, though some may be rated by 2–3 raters for reliability. When the team is satisfied, any rater clicks "Calculate final grades" to average all ratings and prepare them for publishing.
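The averaging rule above can be illustrated with a small sketch. The data structures and function name here are hypothetical, not RCampus's actual implementation: each evaluatee's final score is the mean of all submitted rater totals, with self-assessments excluded.

```python
from statistics import mean

def calculate_final_grades(ratings):
    """Hypothetical model of the "Calculate final grades" step.

    ratings: {evaluatee: [(rater, total_score, is_self), ...]}
    Returns {evaluatee: final_score}, the mean of all submitted
    rater totals, excluding self-assessments.
    """
    finals = {}
    for evaluatee, entries in ratings.items():
        scores = [score for (_, score, is_self) in entries if not is_self]
        if scores:  # evaluatees with only a self-assessment get no final score
            finals[evaluatee] = mean(scores)
    return finals

ratings = {
    "student_a": [("rater1", 88, False), ("rater2", 92, False),
                  ("student_a", 95, True)],   # self-assessment ignored
    "student_b": [("rater3", 75, False)],
}
print(calculate_final_grades(ratings))  # {'student_a': 90, 'student_b': 75}
```

Note that in this model an evaluatee scored by one rater simply receives that rater's total, which matches the "shared rating load" use case where not every evaluatee has multiple ratings.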


Collaborative Assessment

In Collaborative mode, each rater scores a different set of rubric criteria based on their area of expertise.

Typical use: Interdisciplinary assessments where different raters own different criteria
Final score: All sections combined into one complete rubric score
Self-assessment: Available as a separate tab
Combine step required? Yes; sections are merged after all raters complete their sections

Example: A capstone project rubric has six criteria. The writing assessor scores the three communication criteria; the subject-matter assessor scores the three technical criteria. The final score reflects all six criteria.
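The merge step in the example can be sketched as follows. This is an illustrative model with invented criterion names, not RCampus code: each rater submits scores for a disjoint set of criteria, and the sections are combined into one rubric.

```python
def combine_sections(sections):
    """Hypothetical model of the Collaborative combine step.

    sections: {rater: {criterion: score}}, where each rater owns a
    disjoint set of criteria. Returns one merged rubric; a criterion
    appearing in two sections would indicate a setup error.
    """
    merged = {}
    for rater, scores in sections.items():
        overlap = merged.keys() & scores.keys()
        if overlap:
            raise ValueError(f"criteria scored by more than one rater: {overlap}")
        merged.update(scores)
    return merged

# Six invented criteria split between two assessors, as in the example.
sections = {
    "writing_assessor": {"clarity": 4, "organization": 5, "citations": 3},
    "subject_assessor": {"methodology": 4, "analysis": 5, "accuracy": 4},
}
rubric = combine_sections(sections)
print(sum(rubric.values()))  # 25 -- total across all six criteria
```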


Moderated Assessment

Note: Moderated assessment is available exclusively in iRubric Rapid.

Moderated assessment combines the objectivity of independent multi-rater scoring with a human oversight step. Raters score evaluatees independently without seeing each other's work. A designated moderator then reviews all ratings side by side and issues the official final score — which may differ from the average if the moderator judges it appropriate.

Typical use: High-stakes assessments, accreditation reviews, situations where human judgment must override averaging
Final score: Set by the moderator after reviewing all ratings
Self-assessment: Available; moderator can view as reference
Combine step required? Yes; the moderator submits the final assessment
Availability: iRubric Rapid only

Example: Three assessors each score a thesis defense independently. The department chair, acting as moderator, reviews all three assessments and issues the official score — taking into account the ratings but applying their own judgment where raters disagree significantly.

Difference from Multi-Rater/Juried

Both moderated and Multi-Rater/Juried modes involve multiple raters scoring independently. The key difference is how the final score is determined:

| | Multi-Rater/Juried | Moderated |
|---|---|---|
| Final score set by | Automatic average | Moderator's judgment |
| Human review of disagreements | Optional (flagged for review) | Required (moderator decides) |
| Can override the average? | No | Yes |
| Best for | Shared rating load, reliability checks | High-stakes, accreditation, appeals |
| Availability | All plans | iRubric Rapid only |


How to Choose a Mode

If you are unsure which mode to use, these questions can help narrow it down.

Is only one rater responsible for scoring?

Use Single/Sequential. Even if multiple raters have access, Single/Sequential mode keeps things simple with one rubric per evaluatee.

Do multiple raters each need to score the same evaluatee independently?

Use Multi-Rater/Juried if you want the final score to be an automatic average, or Moderated (iRubric Rapid only) if you need a designated person to review all ratings and issue the score manually.

Do different raters own different parts of the rubric?

Use Collaborative. Each rater scores only their assigned criteria, and the sections are combined into one complete rubric.

Is this a high-stakes assessment where averaging is not sufficient?

Use Moderated (iRubric Rapid only). A moderator can review all ratings and exercise judgment before the score is finalized — particularly useful for thesis reviews, accreditation portfolios, or any situation where institutional accountability matters.
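The decision questions above can be condensed into a small helper. This is purely illustrative (the function and its flags are invented for this sketch, not an RCampus API):

```python
def choose_mode(multiple_raters, independent, split_criteria, needs_moderator):
    """Map the four questions above to an assessment mode (illustrative only)."""
    if not multiple_raters:
        return "Single/Sequential"
    if split_criteria:
        return "Collaborative"          # raters own different criteria
    if independent and needs_moderator:
        return "Moderated (iRubric Rapid only)"
    if independent:
        return "Multi-Rater/Juried"     # automatic average
    return "Single/Sequential"          # shared, sequential rating

print(choose_mode(multiple_raters=True, independent=True,
                  split_criteria=False, needs_moderator=False))  # Multi-Rater/Juried
```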

Note: You can change the assessment mode of an assignment before any scoring has begun. Once any rater has submitted a rating, the mode is locked.


See Also