Co-Evaluation Mode

Co-Evaluation Mode enables multiple evaluators to collaboratively complete a single rubric evaluation, where each evaluator selects and scores the criteria they wish to assess. It is designed to support flexible, distributed evaluation by multiple raters.


Key Concepts

  • A single rubric is evaluated collectively by multiple evaluators.
  • Each evaluator selects and completes any criteria they are qualified to assess.
  • The rubric is shared in real-time, with saved entries visible to others depending on settings.
  • Co-Evaluation can be conducted synchronously or asynchronously.


Workflow

  1. Evaluators access the shared rubric and evaluate different criteria.
  2. For each criterion, evaluators enter a score and optional feedback (text and/or rich-media).
  3. As evaluators save their entries, their input updates the cumulative rubric.
  4. Once all criteria are completed, any evaluator can save the rubric as the final evaluation (see the sketch below).
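
The following is a minimal sketch of how such a co-evaluated rubric could be modeled, assuming a simple in-memory representation; the class and method names (CoEvaluation, save_entry, finalize, and so on) are illustrative assumptions, not RCampus/iRubric code. It shows evaluators saving entries for different criteria, each criterion locking to its first scorer, and finalization becoming possible once every criterion is scored.

# Hypothetical model of a co-evaluated rubric; names and structure are
# illustrative assumptions, not the actual RCampus/iRubric implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CriterionEntry:
    evaluator: str          # who saved this criterion
    score: float            # score selected for the criterion
    feedback: str = ""      # optional per-criterion feedback
    saved_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class CoEvaluation:
    criteria: list[str]                                    # rubric rows
    entries: dict[str, CriterionEntry] = field(default_factory=dict)
    finalized: bool = False

    def save_entry(self, criterion: str, evaluator: str, score: float, feedback: str = "") -> None:
        """Save one evaluator's input; a saved criterion is locked to other evaluators."""
        if self.finalized:
            raise ValueError("Rubric is already finalized")
        existing = self.entries.get(criterion)
        if existing and existing.evaluator != evaluator:
            raise ValueError(f"'{criterion}' is locked by {existing.evaluator}")
        self.entries[criterion] = CriterionEntry(evaluator, score, feedback)

    def is_complete(self) -> bool:
        """True once every criterion has a saved entry."""
        return all(c in self.entries for c in self.criteria)

    def finalize(self) -> None:
        """Any evaluator may trigger the final save once all criteria are scored."""
        if not self.is_complete():
            raise ValueError("All criteria must be scored before finalizing")
        self.finalized = True


# Example: two evaluators split the criteria, then either one finalizes.
rubric = CoEvaluation(criteria=["Clarity", "Evidence", "Organization"])
rubric.save_entry("Clarity", "evaluator_a", 4, "Well structured argument")
rubric.save_entry("Evidence", "evaluator_b", 3)
rubric.save_entry("Organization", "evaluator_a", 5)
rubric.finalize()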


Visibility and Anonymity

  • All evaluators can view the full rubric at all times.
  • Evaluator input (scores and feedback) becomes visible to others upon saving a criterion.
  • Whether evaluator names appear alongside saved input depends on the anonymity settings (see the sketch below).
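
A minimal sketch of how anonymity settings could shape what each viewer sees, assuming a simple role check; the names (SavedEntry, display_name, viewer_role) are hypothetical and not the product's actual API. Facilitators always see the evaluator's identity, while other evaluators see a masked name when anonymity is enabled, as described in the FAQ below.

# Hypothetical illustration of anonymity-aware display; field and function
# names are assumptions for the example, not RCampus/iRubric code.
from dataclasses import dataclass


@dataclass
class SavedEntry:
    criterion: str
    evaluator: str
    score: float
    feedback: str


def display_name(entry: SavedEntry, viewer_role: str, anonymity_enabled: bool) -> str:
    """Peers see a masked name when anonymity is on; facilitators always see the evaluator."""
    if viewer_role == "facilitator" or not anonymity_enabled:
        return entry.evaluator
    return "Anonymous evaluator"


entry = SavedEntry("Clarity", "evaluator_a", 4, "Clear and concise")
print(display_name(entry, viewer_role="evaluator", anonymity_enabled=True))    # Anonymous evaluator
print(display_name(entry, viewer_role="facilitator", anonymity_enabled=True))  # evaluator_a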


Feedback Support

  • Evaluators can provide:
    • Feedback for each individual criterion.
    • Overall feedback for the entire rubric.
  • Feedback may include plain text or rich media (e.g., images, video, audio); see the sketch below.
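
A minimal sketch of a feedback payload that allows both per-criterion and overall comments with optional rich-media attachments; the structure and names (Feedback, MediaAttachment, RubricFeedback) are assumptions for illustration, not the product's data model.

# Hypothetical feedback structure; names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class MediaAttachment:
    kind: str   # e.g. "image", "video", or "audio"
    url: str    # location of the uploaded media


@dataclass
class Feedback:
    text: str = ""
    media: list[MediaAttachment] = field(default_factory=list)


@dataclass
class RubricFeedback:
    per_criterion: dict[str, Feedback] = field(default_factory=dict)
    overall: Feedback = field(default_factory=Feedback)


fb = RubricFeedback()
fb.per_criterion["Evidence"] = Feedback(
    text="Strong sources; cite page numbers.",
    media=[MediaAttachment("audio", "https://example.org/comment.mp3")],
)
fb.overall = Feedback(text="Solid work overall; tighten the conclusion.")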


Finalization

  • The rubric is ready to be finalized once every criterion has been evaluated.
  • Any evaluator can then trigger the final save, which stores the completed evaluation (see the sketch below).
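
A minimal sketch of the finalization step under the same assumptions: an incomplete rubric is refused, and a completed one is compiled into a single evaluation record that keeps each criterion's score and scorer. The function and field names are hypothetical.

# Hypothetical finalization check and record compilation; names are
# illustrative assumptions, not RCampus/iRubric code.
from datetime import datetime, timezone


def compile_final_record(criteria: list[str], entries: dict[str, dict], finalized_by: str) -> dict:
    """Refuse to finalize until every criterion is scored, then compile one record."""
    missing = [c for c in criteria if c not in entries]
    if missing:
        raise ValueError(f"Cannot finalize; unscored criteria: {missing}")
    return {
        "total_score": sum(e["score"] for e in entries.values()),
        "criteria": entries,              # per-criterion score, feedback, and evaluator
        "finalized_by": finalized_by,     # any evaluator may trigger the final save
        "finalized_at": datetime.now(timezone.utc).isoformat(),
    }


entries = {
    "Clarity": {"score": 4, "evaluator": "evaluator_a"},
    "Evidence": {"score": 3, "evaluator": "evaluator_b"},
}
record = compile_final_record(["Clarity", "Evidence"], entries, finalized_by="evaluator_b")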


Reporting and Audit

  • All evaluator inputs are tracked with timestamps.
  • Finalized evaluations include a record of who completed each criterion.
  • Anonymity settings are respected in all reports and displays (see the sketch below).
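
A minimal sketch of an audit trail under the same assumptions: every action is stored with a timestamp and attribution, and a report view masks evaluator names when anonymity is enabled. The names (AuditEvent, record, report) are hypothetical.

# Hypothetical audit log; names and structure are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AuditEvent:
    evaluator: str
    criterion: str
    action: str        # e.g. "saved_score", "edited_feedback", "finalized"
    timestamp: datetime


audit_log: list[AuditEvent] = []


def record(evaluator: str, criterion: str, action: str) -> None:
    """Append a timestamped, attributed event to the audit trail."""
    audit_log.append(AuditEvent(evaluator, criterion, action, datetime.now(timezone.utc)))


def report(anonymity_enabled: bool) -> list[str]:
    """Render audit rows, hiding evaluator names when anonymity is on."""
    rows = []
    for e in audit_log:
        who = "Anonymous evaluator" if anonymity_enabled else e.evaluator
        rows.append(f"{e.timestamp.isoformat()}  {who}  {e.action}  {e.criterion}")
    return rows


record("evaluator_a", "Clarity", "saved_score")
record("evaluator_b", "Evidence", "saved_score")
print("\n".join(report(anonymity_enabled=True)))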


FAQ

Q: Do evaluators have to wait for others to finish?

  • A: No. Co-Evaluation can be performed either synchronously or asynchronously. Evaluators may work at the same time or at different times. Saved inputs become visible based on system refresh or user actions.

Q: Can two evaluators score the same criterion?

  • A: No. Once a criterion is scored and saved by one evaluator, it becomes locked to others.

Q: Can evaluators change their own input after saving?

  • A: Yes, evaluators can revise their own input until the rubric is finalized. After finalization, editability depends on the system’s configuration.

Q: What if a criterion is left blank?

  • A: The rubric cannot be finalized until all criteria are scored. Incomplete rubrics remain in draft mode.

Q: Who can finalize the rubric?

  • A: Any evaluator can finalize the rubric once all criteria are completed. Finalization compiles all scores and feedback into a single evaluation record.

Q: Can evaluators see each other’s feedback and scores?

  • A: Yes, once a criterion is saved, its input becomes visible to other evaluators. The visibility of evaluator names depends on anonymity settings.

Q: What does anonymity control?

  • A: Anonymity settings determine whether evaluator names are shown to each other. When enabled, evaluators remain anonymous to each other but are still identifiable to facilitators.

Q: Do evaluators see the full rubric?

  • A: Yes. All evaluators always see the full rubric layout, including criteria, descriptions, and scoring options.

Q: Who creates the rubric?

  • A: Rubrics are typically created by institutional leads, program coordinators, or designated assessment designers—not by the evaluators themselves.

Q: Can the rubric be used without student submissions?

  • A: Yes. Co-Evaluation Mode supports evaluations linked to topics, documents, or matrix cells, and does not require a student artifact.

Q: Can evaluators provide feedback?

  • A: Yes. Feedback can be entered per criterion as well as overall. Plain text and rich-media (e.g., audio, images, video) are supported.

Q: Are timestamps and attribution tracked?

  • A: Yes. Every entry is timestamped and attributed to the evaluator who made it, supporting transparency and reporting.

Q: What happens after finalization?

  • A: The finalized rubric is stored as a completed evaluation. Depending on configuration, it may become read-only or remain editable under specific conditions.

Q: Can Co-Evaluation be used for accreditation or compliance reporting?

  • A: Yes. Co-Evaluation Mode is suitable for internal assessments, program reviews, accreditation evidence collection, and more.

Q: Can facilitators override evaluator input?

  • A: If configured, facilitators may unlock, override, or reassign specific criteria before or after finalization.


See also

Learn More About Rubrics

Working with rubrics

iRubric Public Tools