Reliability and validity of scenario-specific versus generic simulation assessment rubrics

Marian Luctkar-Flude, Deborah Tregunno, Kim Sears, Cheryl Pulling, Kayla Lee, Rylan Egan

Abstract


Background: This study assessed the reliability and validity of scenario-specific and generic simulation assessment rubrics used in two different deteriorating-patient simulations, and explored learner and instructor preferences.

Methods: Learner performance was rated independently by three instructors using two rubrics.

Results: A convenience sample of 29 nursing students was recruited. Inter-rater reliability was similar for both rubrics but slightly higher for the generic rubric than for the scenario-specific learning outcomes assessment rubric across the two scenarios (ICC = .759 vs. .748; IRR = .693 vs. .641). Most students found the scenario-specific rubric more helpful to their learning (59%) and easier to use (52%). All three instructors (3/3) found the scenario-specific rubric more helpful for guiding debriefing.
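As a point of reference only, the intraclass correlation coefficients reported above can be obtained from rater-by-student score matrices. The following is a minimal sketch, not the authors' analysis: it uses the pingouin library with entirely illustrative placeholder data (student IDs, rater labels, and scores are invented for demonstration), and the appropriate ICC model would depend on the study design.

```python
# Minimal illustrative sketch of an ICC calculation for three raters.
# Data values below are placeholders, NOT the study's data.
import pandas as pd
import pingouin as pg

# Long-format ratings: one row per (student, rater) pair.
ratings = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "rater":   ["A", "B", "C"] * 5,
    "score":   [18, 17, 19, 22, 21, 22, 15, 16, 14, 20, 20, 19, 17, 18, 18],
})

# pingouin returns all ICC variants; which one matches the paper
# (e.g., single vs. average measures) is not specified here.
icc = pg.intraclass_corr(data=ratings, targets="student",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```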

Conclusions: Scenario-specific rubrics may be more valuable to learners, helping them identify their own knowledge and performance gaps and prepare for simulation. Additionally, scenario-specific rubrics provide direction for both learners and instructors during debriefing sessions.


DOI: https://doi.org/10.5430/jnep.v10n8p74

Journal of Nursing Education and Practice
