Oral Presentations - Assessment

Moderated by Jaya Yodh
Session Coordinator: Edward Klatt

Presentation 1 - Inter-Examiner Variability in the 2nd-Year Objective Structured Clinical Examination of the MD Program at Ross University School of Medicine, Barbados
Rakesh Calton    
Ross University School of Medicine

Inter-examiner variability in an Objective Structured Clinical Examination (OSCE) is well described 1, 2. We therefore undertook this study to probe inter-examiner variability in two cohorts of the 2nd-year summative OSCE at Ross University School of Medicine (RUSM), Barbados.

The aim of this study was to review the OSCE process and conduct a statistical analysis of the outcome and performance data to determine inter-examiner variability. Various factors responsible for this variability were identified, analyzed, and discussed. The statistical analysis used a MANOVA model, which determines whether there are statistically significant differences among levels of independent variables on multiple dependent variables 3. Wilks' lambda (λ) and Tukey's multiple-comparison test were used as tests of significance.

There was a significant examiner effect for each of the two cohorts of the OSCE. For the May 2019 cohort (13 examiners), calculated Wilks' lambda (λ) = 0.00154457 (F = 3.12, P < .001 at α = .05), while for the September 2019 cohort (20 examiners), calculated Wilks' lambda (λ) = 0.0006194 (F = 2.83, P < .001 at α = .05). Cronbach's alpha was calculated as a measure of internal consistency and scale reliability 4. For the May cohort, examiners' consistency ranged from 'good' (0.8–0.9, 7.6%) to 'unacceptable' (<0.5, 38.46%), while for the September cohort, examiners' consistency ranged from 'good' (0.8–0.9, 20%) to 'unacceptable' (<0.5, 30%). No examiner in either cohort had an 'excellent' Cronbach's alpha (>0.9).
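As an illustration of the reliability measure used above, Cronbach's alpha can be computed directly from an examinees-by-items score matrix. The sketch below uses hypothetical checklist scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-examinee x 4-item OSCE checklist scores (illustrative only)
ratings = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 3, 4],
])
alpha = cronbach_alpha(ratings)  # items rank examinees consistently, so alpha is high
```

Values above 0.9 are conventionally labeled 'excellent', 0.8–0.9 'good', and below 0.5 'unacceptable', matching the bands reported above.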

Significant inter-examiner variability was observed in the 2nd-year OSCE cohorts at RUSM. Multiple factors contributing to this variability have been analyzed and discussed and will be presented at the conference. The study underlines the importance of identifying the factors contributing to inter-examiner variability, which in turn serves to strengthen examiner training and standardization.

Presentation 2 - Setting a Standard Passing Score Improves Performance on Cumulative Final Exams in a Pass/Fail Preclinical Medical Curriculum
Emily Moorefield    
University of North Carolina Chapel Hill School of Medicine

Mastery of preclinical medical science content is critical for future student success. In a pass/fail curriculum, students achieving high scores on assessments early in a course accumulate points and therefore need only low scores on final exams to pass. This may diminish motivation to learn material leading up to the final and may ultimately cause gaps in medical knowledge. We set a standard passing score on cumulative final exams with the goal of requiring content mastery throughout the entire course. Students not meeting the standard score reviewed material and retook the exam to demonstrate understanding.

We implemented a standard passing score of 70 for cumulative final exams in each medical science system-based course. Final exams were created using NBME Customized Assessment Services (CAS) to select questions tailored to course instruction. Passing required a score of ≥ 70 in the overall course in addition to a score of ≥ 70 on the final exam. Students achieving a passing score in the overall course but scoring below the passing standard on the final exam were required to retake only the final exam several days later.

Analysis of student performance on final exams in the first system-based courses in our preclinical curriculum revealed that, with the passing standard in place, the average score on the final exam increased and fewer students fell below the 70-point threshold than in prior years. The few students not meeting the passing standard retook the final exam and passed on the first retake attempt.

Setting a standard passing score on cumulative final exams promotes effective study habits throughout the duration of the course and prevents students from disregarding content delivered late in the course. The standard score also allowed identification of students in need of additional academic support so that we could provide resources to improve success in future courses.

Presentation 3 - Cognitive Test Anxiety Among Health Professions Students
Shekitta Acker
Mayo Clinic Alix School of Medicine

High cognitive test anxiety can lead to academic performance concerns in students. This cross-sectional study investigated the distribution of and relationship between cognitive test anxiety (CTA), academic resilience (AR), and the demographics of physician assistant (PA), nurse practitioner (NP), and physical/occupational therapy (PT/OT) students.

PA (65), NP (118), and PT/OT (26) students from seven universities across the United States were invited to participate in the study. Participants completed two validated surveys, the Cognitive Test Anxiety Scale-2 (CTAS-2) and the Academic Resilience Scale (ARS-30), along with demographic questions. Responses were analyzed using one-way ANOVA, linear regression, and multiple linear regression.

Two hundred forty-seven students from seven programs participated in the study, and two hundred nine were included in the final analysis. Sixty-three percent of students presented with moderate (41%) to high (22%) CTA. The prevalence of high CTA among PA, NP, and PT/OT students was 8%, 30%, and 19%, respectively. Non-White students had a statistically significantly higher mean CTA score than White students. There was no statistically significant difference in CTA between students by program year, gender, or age. Forty-two percent of students presented with high AR. There was no statistically significant difference in mean AR between students by program year, gender, race, or age. AR explained 14.2% of the variance in CTA among students, and a combined CTA and AR model explained about 11.5% of the variance in current self-reported GPA and 20% of the variance in students' confidence in passing their licensure examination.
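The "variance explained" figures above correspond to R² from a linear regression model. A minimal sketch of how such a figure is computed, using simulated values (the variable names and data are illustrative, not the study's):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Proportion of variance in y explained by an OLS fit on predictors X."""
    X1 = np.column_stack([np.ones(len(y)), X])       # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)    # ordinary least squares
    ss_res = ((y - X1 @ beta) ** 2).sum()            # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()             # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical data: CTA and AR scores predicting self-reported GPA
rng = np.random.default_rng(0)
cta = rng.uniform(20, 80, 50)                        # simulated CTAS-2 scores
ar = rng.uniform(60, 140, 50)                        # simulated ARS-30 scores
gpa = 4.2 - 0.01 * cta + 0.002 * ar + rng.normal(0, 0.1, 50)
variance_explained = r_squared(np.column_stack([cta, ar]), gpa)
```

With an intercept included, R² from ordinary least squares on the fitted data always falls between 0 and 1, and multiplying by 100 gives the "percent of variance explained" reported in the abstract.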

High CTA exists among health professions students and may significantly impact their academic performance. Non-White students present with higher rates of CTA. With matriculation and retention rates among underrepresented minorities remaining a challenge, identifying affected students early and finding interventions to improve their academic success will be imperative.

Presentation 4 - Selection for Residency Training as a Learning Experience: A Phenomenological Study of Applicants’ Learning Through an Assessment Tool
Lara Teheux    
Radboudumc Amalia Children's Hospital

Research on residency selection tools does not explore their learning value, despite international consensus that "educational effect" is a criterion for good assessment practices. Insight into the learning value would benefit the integration of selection into the learning continuum. This study aimed to 1) explore the learning value of an assessment tool used in residency selection; and 2) understand what factors influence applicants' learning.

The online assessment is a validated tool that measures intelligence, personality, motivation and a set of core competencies. We conducted a qualitative phenomenological study that included 16 in-depth, semi-structured interviews with applicants for pediatric residency training at the Radboudumc Amalia Children's Hospital (The Netherlands). Interviews were transcribed and anonymized. Thematic analysis was used to understand individual experiences and to identify patterns.

The experienced learning value fell into four themes: the assessment stimulated self-reflection, which could trigger learning conversations; it increased self-awareness of strengths and pitfalls, which led to self-acceptance or the development of learning goals; it increased self-awareness of motivational drivers, considered helpful in career choices; and it improved understanding of career requirements. In some applicants, however, learning remained implicit. This was influenced by three factors: applicants' views on the acceptability of this selection tool; applicants' perceptions of the credibility of the assessment and its results; and applicants' focus on selection versus learning.

Selection for residency through an intelligence, personality, motivation and competency assessment can be a valuable learning experience for applicants. In some applicants learning remained implicit, likely explained by a focus on selection rather than learning or skepticism about its acceptability and credibility. Selection assessments should be explicitly presented as a learning opportunity and integrated in the learning curriculum to fully exploit their learning value. Future research should explore ways to support learning through selection assessments in residency training.

Date & Time
Monday, June 12, 2023, 1:15 PM - 2:15 PM
Location Name
MC - Maya 1&2