Rubric Assessment Modes
Latest revision as of 17:28, 7 April 2026
RCampus supports four assessment modes, each designed for a different workflow. Choose a mode when setting up an assignment — it determines how raters interact with the rubric and how final scores are calculated.
Quick Comparison
| Mode | Who rates | Sees prior ratings? | How final score is set |
|---|---|---|---|
| Single/Sequential | One or more raters | Yes — shared view | The active assessment is the final score |
| Multi-Rater/Juried | Multiple raters independently | No — blind to each other | Scores are averaged after all raters submit |
| Collaborative | Multiple raters, each owns sections | Only their own sections | Sections are combined into one rubric |
| Moderated (iRubric Rapid only) | Multiple raters + a designated moderator | Moderator sees all; raters are blind | Moderator issues the final score |
Single/Sequential Assessment
In Single/Sequential mode, there is one assessment per evaluatee. A single rater can complete it alone, or multiple raters can take turns editing the same assessment, each building on or overwriting the previous rater's work.
| Typical use | Solo rater or shared rating within a team |
|---|---|
| Final score | Whatever is saved in the rubric at the time of publishing |
| Self-assessment | Available as a separate tab |
| Combine step required? | No |
Example: A course has two co-instructors. Either one can open an evaluatee's rubric and score it. If both make changes, the last saved version is what the evaluatee sees.
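The last-save-wins behavior above can be sketched in a few lines of Python. This is an illustrative model only, not RCampus code; the class and method names are invented for the example.

```python
class SequentialAssessment:
    """Illustrative model of one shared assessment per evaluatee:
    each save replaces the previous rater's version entirely."""

    def __init__(self):
        self.scores = {}
        self.last_saved_by = None

    def save(self, rater, scores):
        # Saving overwrites the prior state; there is no merging.
        self.scores = dict(scores)
        self.last_saved_by = rater


assessment = SequentialAssessment()
assessment.save("instructor_1", {"clarity": 3, "depth": 4})
assessment.save("instructor_2", {"clarity": 4, "depth": 4})
# The evaluatee sees instructor_2's version, the last one saved.
```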
Multi-Rater/Juried Assessment
In Multi-Rater/Juried mode, each rater scores evaluatees independently, without seeing other raters' work; the final score is the average of all submitted ratings.
| Typical use | Jury-style assessment, inter-rater reliability studies, large assignments with a shared rating load |
|---|---|
| Final score | Average of all submitted rater assessments |
| Self-assessment | Available as a separate tab; not included in the average |
| Combine step required? | Yes — must click "Calculate final grades" |
Example: Five raters share a cohort of 30 evaluatees. Each rater scores roughly 6 evaluatees, though some may be rated by 2–3 raters for reliability. When the team is satisfied, any rater clicks "Calculate final grades" to average all ratings and prepare them for publishing.
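The "Calculate final grades" combine step can be modeled as a simple mean over all submitted ratings for each evaluatee. A minimal sketch — the function and data names are hypothetical, not the RCampus API:

```python
def combine_juried(ratings):
    """Return the final score for one evaluatee: the average of all
    independently submitted rater scores."""
    if not ratings:
        raise ValueError("no ratings submitted yet")
    return sum(ratings.values()) / len(ratings)


# One evaluatee scored by three raters for a reliability check:
ratings = {"rater_a": 18, "rater_b": 20, "rater_c": 19}
final_score = combine_juried(ratings)  # (18 + 20 + 19) / 3 = 19.0
```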
Collaborative Assessment
In Collaborative mode, each rater scores a different set of rubric criteria based on their expertise.
| Typical use | Interdisciplinary assessments where different raters own different criteria |
|---|---|
| Final score | All sections combined into one complete rubric score |
| Self-assessment | Available as a separate tab |
| Combine step required? | Yes — sections are merged after all raters complete their sections |
Example: A capstone project rubric has six criteria. The writing assessor scores the three communication criteria; the subject-matter assessor scores the three technical criteria. The final score reflects all six criteria.
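The merge step can be pictured as taking the union of each rater's criterion scores, since every rater owns a disjoint slice of the rubric. A hedged sketch under that assumption (names are illustrative, not RCampus internals):

```python
def combine_collaborative(sections):
    """Merge each rater's criterion scores into one complete rubric.

    `sections` maps rater -> {criterion: score}; raters are expected
    to own disjoint sets of criteria."""
    merged = {}
    for rater, criteria in sections.items():
        overlap = merged.keys() & criteria.keys()
        if overlap:
            raise ValueError(f"criterion scored twice: {sorted(overlap)}")
        merged.update(criteria)
    return merged


sections = {
    "writing_assessor": {"clarity": 4, "organization": 5, "citations": 4},
    "subject_assessor": {"method": 5, "analysis": 4, "results": 5},
}
rubric = combine_collaborative(sections)
total = sum(rubric.values())  # final score reflects all six criteria
```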
Moderated Assessment
Moderated assessment combines the objectivity of independent multi-rater scoring with a human oversight step. Raters score evaluatees independently without seeing each other's work. A designated moderator then reviews all ratings side by side and issues the official final score — which may differ from the average if the moderator judges it appropriate.
| Typical use | High-stakes assessments, accreditation reviews, situations where human judgment must override averaging |
|---|---|
| Final score | Set by the moderator after reviewing all ratings |
| Self-assessment | Available; moderator can view as reference |
| Combine step required? | Yes — moderator submits the final assessment |
| Availability | iRubric Rapid only |
Example: Three assessors each score a thesis defense independently. The department chair, acting as moderator, reviews all three assessments and issues the official score — taking into account the ratings but applying their own judgment where raters disagree significantly.
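The moderator's role can be sketched as follows: the average of the blind ratings is computed only as a reference, and the moderator's own score, when issued, becomes the official one. This is an illustrative model, not RCampus code:

```python
def moderated_final(ratings, moderator_score=None):
    """The average of the blind ratings is only a reference point;
    the moderator's score, when given, is the official final score."""
    average = sum(ratings.values()) / len(ratings)
    return average if moderator_score is None else moderator_score


ratings = {"assessor_1": 82, "assessor_2": 70, "assessor_3": 79}
# The average would be 77.0, but the moderator judges assessor_2's
# rating an outlier and issues 80 as the official score.
final = moderated_final(ratings, moderator_score=80)
```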
Difference from Multi-Rater/Juried
Both Moderated and Multi-Rater/Juried modes involve multiple raters scoring independently. The key difference is how the final score is determined:
| | Multi-Rater/Juried | Moderated |
|---|---|---|
| Final score set by | Automatic average | Moderator's judgment |
| Human review of disagreements | Optional (flagged for review) | Required (moderator decides) |
| Can override the average? | No | Yes |
| Best for | Shared rating load, reliability checks | High-stakes, accreditation, appeals |
| Availability | All plans | iRubric Rapid only |
How to Choose a Mode
If you are unsure which mode to use, these questions can help narrow it down.
Is only one rater responsible for scoring?
Use Single/Sequential. Even if multiple raters have access, Single/Sequential mode keeps things simple with one rubric per evaluatee.
- Available in Gradebook
Do multiple raters each need to score the same evaluatee independently?
Use Multi-Rater/Juried if you want the final score to be an automatic average, or Moderated (iRubric Rapid only) if you need a designated person to review all ratings and issue the score manually.
- Available in iRubric Rapid, RCampus Matrix
Do different raters own different parts of the rubric?
Use Collaborative. Each rater scores only their assigned criteria, and the sections are combined into one complete rubric.
- Available in RCampus Matrix
Is this a high-stakes assessment where averaging is not sufficient?
Use Moderated (iRubric Rapid only). A moderator can review all ratings and exercise judgment before the score is finalized — particularly useful for thesis reviews, accreditation portfolios, or any situation where institutional accountability matters.
- Available in iRubric Rapid
Note: You can change the assessment mode of an assignment before any scoring has begun. Once any rater has submitted a rating, the mode is locked.

See Also

- iRubric
- Program Assessment
- Rubric Assessment
- Matrix Assessment
- Program and Unit Planning & Assessment
- Collaborative Assessment

Categories: Assessments | Rubrics