Rubric Assessment Modes
Revision as of 20:52, 6 April 2026
RCampus supports four assessment modes, each designed for a different workflow. Choose a mode when setting up an assignment — it determines how raters interact with the rubric and how final scores are calculated.
Applies to: Rubrics, Assignments
Quick Comparison
| Mode | Who rates | Sees prior ratings? | How final score is set |
|---|---|---|---|
| Single/Sequential | One or more raters | Yes — shared view | The active assessment is the final score |
| Multi-Rater/Juried | Multiple raters independently | No — blind to each other | Scores are averaged after all raters submit |
| Collaborative | Multiple raters, each owns sections | Only their own sections | Sections are combined into one rubric |
| Moderated (iRubric Rapid only) | Multiple raters + a designated moderator | Moderator sees all; raters are blind | Moderator issues the final score |
Single/Sequential Assessment
In Single/Sequential mode, there is one rubric per evaluatee that any authorized rater can open and score. If multiple raters are involved, they share access to the same assessment — one rater may start it and another may finish it. There is no averaging or combining step; the current state of the rubric is the final score.
| Typical use | Solo rater or shared rating within a team |
|---|---|
| Final score | Whatever is saved in the rubric at the time of publishing |
| Self-assessment | Available as a separate tab |
| Combine step required? | No |
Example: A course has two co-instructors. Either one can open an evaluatee's rubric and score it. If both make changes, the last saved version is what the evaluatee sees.
How It Works
- Open the rubric for an evaluatee — Select the evaluatee from the list on the Score Rubrics page. The rubric opens in the right pane.
- Score each criterion — Click the appropriate level for each rubric criterion. Add comments where needed.
- Save and publish — Save the rubric. When all evaluatees are scored, publish results to make them visible.
Multi-Rater/Juried Assessment
In Multi-Rater/Juried mode, each rater scores evaluatees independently without seeing other raters' ratings. Once the team is satisfied with the coverage — there is no required minimum — any rater can trigger the final score calculation, which averages all submitted assessments per evaluatee.
| Typical use | Jury-style assessment, inter-rater reliability studies, large assignments with a shared rating load |
|---|---|
| Final score | Average of all submitted rater assessments |
| Self-assessment | Available as a separate tab; not included in the average |
| Combine step required? | Yes — must click "Calculate final grades" |
Example: Five raters share a cohort of 30 evaluatees. Each rater scores roughly 6 evaluatees, though some may be rated by 2–3 raters for reliability. When the team is satisfied, any rater clicks "Calculate final grades" to average all ratings and prepare them for publishing.
How It Works
- Each rater scores their assigned evaluatees — Raters open individual evaluatee rubrics and score independently. They cannot see other raters' ratings during this phase.
- Review coverage across the cohort — The evaluatee list shows how many assessments each evaluatee has received. The team decides when coverage is sufficient.
- Calculate final grades — Any rater clicks "Calculate final grades" in the assignment toolbar. This averages all submitted assessments per evaluatee and locks the scores.
- Review and publish — Switch to the "Multi-rater results" tab to review each evaluatee's combined score and flag any criteria with high disagreement between raters before publishing.
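The averaging in the calculate step can be sketched as follows. This is an illustration of the described behavior, not RCampus's implementation; the `(rater, evaluatee, score)` tuple shape is a hypothetical simplification:

```python
from collections import defaultdict

def calculate_final_grades(ratings: list[tuple[str, str, float]]) -> dict[str, float]:
    """Average all submitted assessments per evaluatee, mirroring the
    'Calculate final grades' step. Each rating is (rater, evaluatee, score)."""
    per_evaluatee: dict[str, list[float]] = defaultdict(list)
    for rater, evaluatee, score in ratings:
        per_evaluatee[evaluatee].append(score)
    # Evaluatees may have different rater counts; each is averaged separately.
    return {e: sum(s) / len(s) for e, s in per_evaluatee.items()}

ratings = [
    ("rater_a", "student_1", 80.0),
    ("rater_b", "student_1", 90.0),  # student_1 double-rated for reliability
    ("rater_a", "student_2", 70.0),
]
print(calculate_final_grades(ratings))  # {'student_1': 85.0, 'student_2': 70.0}
```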
Viewing Other Raters' Assessments
After the calculate step, you can view each rater's individual assessment by selecting their name in the rater tabs at the top of the rubric pane. During the active assessment phase (before calculating), raters can only see their own ratings.
Collaborative Assessment
In Collaborative mode, the rubric is divided into sections and each rater is responsible for scoring their assigned sections only. No single rater scores the complete rubric — the full picture is assembled from all raters' contributions. This is useful when different raters have expertise in different areas of the rubric.
| Typical use | Interdisciplinary assessments where different raters own different criteria |
|---|---|
| Final score | All sections combined into one complete rubric score |
| Self-assessment | Available as a separate tab |
| Combine step required? | Yes — sections are merged after all raters complete their sections |
Example: A capstone project rubric has six criteria. The writing assessor scores the three communication criteria; the subject-matter assessor scores the three technical criteria. The final score reflects all six criteria.
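The capstone example above can be sketched as a merge of each rater's owned criteria into one complete rubric. The function and data shapes are hypothetical, shown only to illustrate that every criterion has exactly one owner:

```python
def combine_sections(sections: dict[str, dict[str, float]]) -> dict[str, float]:
    """Merge each rater's owned criteria into one complete rubric.
    In collaborative mode every criterion belongs to exactly one rater,
    so overlapping ownership is treated as an error here."""
    combined: dict[str, float] = {}
    for rater, criteria in sections.items():
        for criterion, score in criteria.items():
            if criterion in combined:
                raise ValueError(f"{criterion!r} scored by more than one rater")
            combined[criterion] = score
    return combined

writing = {"Clarity": 4, "Organization": 5, "Citations": 3}
technical = {"Method": 4, "Analysis": 5, "Results": 4}
merged = combine_sections({"writing_assessor": writing, "sme_assessor": technical})
print(sum(merged.values()))  # 25 — the final score reflects all six criteria
```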
Moderated Assessment
Moderated assessment combines the objectivity of independent multi-rater scoring with a human oversight step. Raters score evaluatees independently without seeing each other's work. A designated moderator then reviews all ratings side by side and issues the official final score — which may differ from the average if the moderator judges it appropriate.
| Typical use | High-stakes assessments, accreditation reviews, situations where human judgment must override averaging |
|---|---|
| Final score | Set by the moderator after reviewing all ratings |
| Self-assessment | Available; moderator can view as reference |
| Combine step required? | Yes — moderator submits the final assessment |
| Availability | iRubric Rapid only |
Example: Three assessors each score a thesis defense independently. The department chair, acting as moderator, reviews all three assessments and issues the official score — taking into account the ratings but applying their own judgment where raters disagree significantly.
Difference from Multi-Rater/Juried
Both moderated and Multi-Rater/Juried modes involve multiple raters scoring independently. The key difference is how the final score is determined:
| | Multi-Rater/Juried | Moderated |
|---|---|---|
| Final score set by | Automatic average | Moderator's judgment |
| Human review of disagreements | Optional (flagged for review) | Required (moderator decides) |
| Can override the average? | No | Yes |
| Best for | Shared rating load, reliability checks | High-stakes, accreditation, appeals |
| Availability | All plans | iRubric Rapid only |
How to Choose a Mode
If you are unsure which mode to use, these questions can help narrow it down.
Is only one rater responsible for scoring?
Use Single/Sequential. Even if multiple raters have access, Single/Sequential mode keeps things simple with one rubric per evaluatee.
Do multiple raters each need to score the same evaluatee independently?
Use Multi-Rater/Juried if you want the final score to be an automatic average, or Moderated (iRubric Rapid only) if you need a designated person to review all ratings and issue the score manually.
Do different raters own different parts of the rubric?
Use Collaborative. Each rater scores only their assigned criteria, and the sections are combined into one complete rubric.
Is this a high-stakes assessment where averaging is not sufficient?
Use Moderated (iRubric Rapid only). A moderator can review all ratings and exercise judgment before the score is finalized — particularly useful for thesis reviews, accreditation portfolios, or any situation where institutional accountability matters.

Note: You can change the assessment mode of an assignment before any scoring has begun. Once any rater has submitted a rating, the mode is locked.
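The decision questions in this section can be condensed into a small helper. This is a hypothetical convenience function for illustration, following the same question order as above:

```python
def recommend_mode(single_rater: bool, independent_scoring: bool,
                   raters_own_sections: bool, needs_moderator: bool) -> str:
    """Map the 'How to Choose a Mode' questions to a recommended mode."""
    if single_rater:
        return "Single/Sequential"
    if raters_own_sections:
        return "Collaborative"
    if needs_moderator:
        return "Moderated (iRubric Rapid only)"
    if independent_scoring:
        return "Multi-Rater/Juried"
    # Default to the simplest mode when no special workflow applies.
    return "Single/Sequential"

print(recommend_mode(single_rater=False, independent_scoring=True,
                     raters_own_sections=False, needs_moderator=False))
# Multi-Rater/Juried
```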