Poster Topic: Assessment

To view the poster, click on the image below the abstract.

101 - Analysis of the Relationship of Medical Student Personal Distress and Emotion Regulation using Machine Learning

Robert Treat
Medical College of Wisconsin
PURPOSE: Assertiveness education and training can be used to promote psychological wellbeing and a culture of respect in healthcare workers.¹ Assertiveness training and its evaluation have a strong focus in the literature on nursing students,² but less recent reporting has been given to medical students. Assertiveness as a facet of emotional intelligence has been reported for medical students, but with limited results.³ The purpose of this study is to analyze the impact of personality factors on medical student assertiveness as moderated by gender. METHODS: In 2017/18, 205/500 M-1/M-2 medical students voluntarily completed the Five Factor Personality Inventory (IPIP-50, scale: 1=very inaccurate, 5=very accurate) and the Trait Emotional Intelligence Questionnaire to measure assertiveness (TEIQue-SF, scale: 1=completely disagree, 7=completely agree). Independent t-tests and multivariate linear regression were generated via IBM® SPSS® 26.0. This research was approved by the institution’s IRB. RESULTS: Medical student assertiveness mean (±sd) scores (4.7±1.3) were significantly (p<0.001) above the instrument midline (=4), with 63% above 4.0. Female students’ mean scores (4.9±1.2) were higher (p<0.051) than male students’ scores (4.6±1.4). Female medical students: assertiveness was significantly predicted (R²=0.3, p<0.001) by the personality factors of conscientiousness (beta=0.3), extraversion (beta=0.4), and neuroticism (beta=0.3) using regression analysis. Male medical students: assertiveness was significantly predicted (R²=0.3, p<0.001) by the personality factors of openness (beta=0.2), conscientiousness (beta=0.2), extraversion (beta=0.5), agreeableness (beta=0.2), and neuroticism (beta=0.2). CONCLUSIONS: Female medical students reported higher assertiveness scores than male students. The strongest personality predictor of medical student assertiveness was extraversion for both female and male medical students, with the facets of friendliness and gregariousness having the highest impact.
Male students have more personality traits influencing assertiveness, including the two cognitive personality factors of openness and agreeableness, depending on whether they have high or low openness/agreeableness scores. These two factors had no influence on female student assertiveness.

102 - Analyzing the Impact of Personality on Autonomy and the Mediator Role of Motivation

Robert Treat
Medical College of Wisconsin
PURPOSE: Medical student personality¹ and resilience² have been reported to impact motivation. However, the dispositional aspect of personality, as self-reported by trait measures, suggests that students with lower personality scores will have reduced motivation. The stability of traits makes this problematic, since it is challenging to change one’s personality over modest periods of time.³ Personal aspects of resilience, such as having purpose, could mediate the adverse effects of lower personality scores.

The purpose of this study is to analyze the impact of personality factors on medical student motivation as mediated by having purpose.

METHODS: In 2017/18, 205/500 M-1/M-2 medical students voluntarily completed the Five Factor Personality Inventory (IPIP-50, scale: 1=very inaccurate, 5=very accurate), the RS-25 Resilience Scale (scale: 1=strongly disagree, 7=strongly agree) to measure purpose, and the Trait Emotional Intelligence Questionnaire to measure motivation (TEIQue-SF, scale: 1=completely disagree, 7=completely agree). Pearson correlations and multivariate linear regression were generated via IBM® SPSS® 26.0. This research was approved by the institution’s IRB.

RESULTS: Motivation (alpha=0.7) mean scores were significantly (p

103 - AWARD NOMINEE - Can formative assessments enhance student engagement and learning in a pandemic environment?

Cindy Funk
Burrell College of Osteopathic Medicine
Use of Guided and Frequent Formative Assessments to Enhance First Year Medical Student Engagement and Learning in a Virtual Environment.
Cindy Funk, PH. D, Burrell College of Osteopathic Medicine, Las Cruces, NM
The value of formative assessments in education is well-documented. Due to the pandemic, many medical curricula adopted virtual delivery. In order to engage students in this environment, we sought to examine the effectiveness of guided formative assessments on medical student learning.
During the COVID pandemic, the didactic curriculum for Burrell College of Osteopathic Medicine was delivered in a virtual-asynchronous format. In the Musculoskeletal I system, a series of online, formative quizzes was delivered via Learning Catalytics for upper limb anatomy. Quizzes were composed of multiple choice, short-answer, matching, and identification questions, designed to provide active learning, feedback, and knowledge gap identification. “In-session” quizzes were provided during virtual lectures at 15-minute intervals of lecture time, targeting key concepts. “Post-session” quizzes were delivered after lectures to test higher order knowledge. Quizzes provided immediate, written feedback. ANOVA and correlation statistics were utilized to determine if quiz participation impacted summative exam performance.
There was robust usage of quizzes; students completed an average of 8/11 quizzes. To compare outcomes, students were divided into three groups: low engagement (completing 0-3 quizzes; 42 students), moderate engagement (completing 4-7 quizzes; 38 students), and high engagement (completing 8-11 quizzes; 92 students). ANOVA revealed significant differences among these groups on summative exam performance (F=9.416). A t-test revealed significant differences in summative exam performance between the high (84.8%) and low (78.75%) groups. A positive correlation between summative exam performance and number of quizzes completed was also found.
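The group comparison described above is a one-way ANOVA. A minimal pure-Python sketch of the F statistic, using hypothetical exam scores (the study’s student-level data are not reproduced here):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of score groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)            # number of groups
    n = len(all_scores)        # total observations
    # Between-group sum of squares: group sizes times squared mean deviations
    ssb = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical summative-exam scores for low/moderate/high engagement groups
low, moderate, high = [76, 79, 81], [80, 82, 84], [83, 86, 88]
f_stat = one_way_anova_f([low, moderate, high])
```

In practice this would be followed by comparing `f_stat` against the F distribution with (k-1, n-k) degrees of freedom to obtain a p-value.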
Adapting to virtual environments has been difficult for many medical students, creating isolation and a lack of academic direction. We demonstrate positive engagement and feedback with formative assessments, indicating their value in engaging students, supporting learning, and preparing for summative assessments.

104 - OSCE: Objectively Scoring in the COVID Era

Danh Le
Resident Physician, Academic Medical Center
The Objective Structured Clinical Examination (OSCE) was developed over 40 years ago and is now a cornerstone of medical school assessment to ensure a high standard for clinical skills as students advance through the curriculum. Prior research has shown that both learners and educators feel the OSCE is a critical component of medical education. Given this, on March 26, 2020, the University of California-Irvine (UCI) School of Medicine (SOM) leadership determined that despite the COVID pandemic, it would continue to conduct the first- and third-year medical student OSCEs. To do so, the SOM adopted two distinct approaches: 1) on-site at the Clinical Skills Center (CSC) in a recorded environment with social distancing and 2) an entirely virtual encounter without recording but with an additional faculty evaluator. Overall, the history portion for both approaches was unchanged from prior years, but the physical exam portion of each OSCE was challenging to standardize. The students were instructed to verbally explain the desired physical maneuver and rationale, and standardized patients (SPs) were instructed to give a verbal response.
We did not feel recording the OSCE sessions was paramount given the logistical barriers imposed by government restrictions in place at the time of assessment. Conducting the sessions to minimize the possibility of learner advance notice of the case was our critical concern to maintain fidelity of the exam. Future directions include assessing the OSCE formats we utilized, contrasting and comparing the methods in terms of safety in the COVID era and validity for quality assurance given that learner assessment at a distance will likely continue into the next academic year.

105 - Exploring students' perception of online OSCE: A qualitative study

Amitabha Basu
St Matthew's University School of Medicine
Purpose: Lehmann (2018) indicated that a hidden curriculum consists of lessons learned that are embedded in the culture of an academic organization and are sometimes responsible for behavioral changes. My study aims to explore the impact of the hidden curriculum on basic science medical students (44 Year 2 students) and their understanding of clinical life and various clinical specialities. Method: An anonymous online survey was conducted via SurveyMonkey. The survey asked students about their understanding of the hidden curriculum; what else they learned outside their syllabus; whether clinical shadowing would impact their choice of residency; how the curriculum influenced them as a person; and their understanding of professionalism. Students replied in a textual format. The work is continuing. Result: Although 75% of respondents did not know what the hidden curriculum is, the students reportedly learned the importance of punctuality, study techniques, time management, professionalism, teamwork, communication skills, and an early understanding of clinical life in various clinical specialities. Students felt they received unconditional support and respect from professors, which increased their confidence. Some felt the 'medical field' has certain sets of ethics and professionalism that did not match their early understanding, and occasionally standards set for students were not always matched by the professors. Conclusion: More work is needed to understand the impact of the hidden curriculum. Educational leadership must ensure that a psychologically safe, friendly, professional environment exists in the institution, which can be established and monitored through faculty-staff training, acting on student feedback, and regularly reviewing the curriculum. The inclusion of clinical shadowing for 2nd-year medical students would help students gain awareness of different clinical specialities in real-time medical environments.

106 - AWARD NOMINEE - Determining The Relationship Between Gap Years, First Year Medical School Performance, and Academic Burnout

Abdulai Bangura
Trinity Medical Sciences University - School of Medicine
There is a high prevalence of burnout among medical students. Students who experience burnout are more likely to have lower academic performance. Could student gap year participation help reduce student burnout or increase academic performance at the medical school level?

Second-year medical students participated in a two-part survey. The primary survey required general information disclosure including gap year participation and pre-health clinical experiences. The secondary survey consisted of the Maslach Burnout Inventory-Student Survey (MBI-SS). Student GPAs were de-identified and provided by the school's faculty. 

Of the 60 responses, 37 (62%) medical students participated in a gap year after their undergraduate education while the remaining 23 (38%) students immediately matriculated into medical school. Pre-health clinical experiences were acquired by 40 (67%) students. We were unable to detect a difference in mean GPAs between students with one or more gap years (3.4, SD 0.59) and students with no gap years (3.3, SD 0.48) (P = 0.55). Even when considering the wide range of gap years and separating the students into three groups (0 gap years, 1-2 gap years, 3+ gap years), differences between GPAs were still not found (P = 0.83). No correlations were found between gap years and any component of the MBI-SS: exhaustion (r = 0.103, P = 0.593), cynicism (r = 0.055, P = 0.775), and academic efficacy (r = 0.166, P = 0.387).

There is variation in the number of years students spend between their undergraduate education and medical school. Deferring medical school is likely explained by an increased demand for clinical experience and admission refusals. Our study explored the effect of education deferment on medical school academic performance and burnout, and it was unable to identify a difference in students’ GPAs or burnout risk when comparing gap years. Our work highlights opportunities to better understand and further evaluate the impact of student burnout at the medical school level.

107 - The Impact of Gender on Resident Evaluations of Faculty Performance

Allison Beaulieu
The Ohio State University Wexner Medical Center
Background and Objectives: A significant gender gap exists in academic medicine. Implicit bias impacts evaluations of female physicians in residency training and continues to influence factors pertaining to advancement as academic faculty. The goal of this study is to determine if faculty gender impacts the evaluations of academic faculty by residents within the specialty of emergency medicine.

Methods: A mixed methods analysis will be employed to examine 14,669 teaching evaluations of faculty by residents at a single academic center between 2017 and 2020. Anonymized ratings of male and female faculty on a five-point Likert scale will be compared using the chi-square test. Free-text narrative evaluations will be analyzed using grounded theory to examine them for gendered language.

Results: We expect to report a quantitative comparison between evaluations of faculty with respect to gender. We plan to perform subset analyses based on academic rank. We anticipate reporting qualitative outcomes in the form of major themes which emerge during analysis as well as a comparison of narrative evaluations between male and female faculty. Based on prior literature on gender differences and their impact on teaching evaluations, we expect to find gender differences in quantitative ratings of faculty as well as differences in qualitative analysis with respect to learner expectations and cited areas of strength for faculty.

Conclusion: Implicit gender bias has previously been determined to impact evaluations of faculty. These disparities negatively impact promotion and tenure for female faculty. If there is found to be gender bias in the assessment of academic female faculty, evaluator training to mitigate implicit gender bias can be pursued to close the gender gap in academic emergency medicine.

108 - How to measure "Fit": Standardized comparison of program and applicant alignment of values and priorities

Kelly Dore
Altus Assessments/McMaster
PURPOSE: NRMP surveys of programs and applicants identified "fit" as an important selection factor. However, little consensus exists on what "fit" is or its influencing factors. This study examines factors in applicant/program alignment in GME selection through the perspectives of key SMEs. METHODS: A Delphi survey was conducted in 2 rounds, consisting of 47 factors. Diverse SMEs participated in the Delphi and ranked factors on a 4-point Likert-type scale. RESULTS: 38 SMEs responded in the survey's 1st round; 34 of 47 factors reached consensus with a 55% consensus threshold. 4 additional factors reached consensus in the 2nd round. After integrating SME feedback, 30 factors remained. Factors were organized into 3 themes: culture, pedagogy, and work environment. CONCLUSION: Survey results will be used in a paired-comparison tool for GME selection in Fall 2020 across multiple programs. This standardized method of evaluating "fit" provides insight at the time of GME selection.

109 - Implications of using an SJT in admissions for predicting future professionalism issues

Kelly Dore
Altus Assessments/McMaster
PURPOSE: Admissions challenges include predicting student professionalism using traditional non-academic metrics. A PGME program piloted the use of the SJT Casper to measure non-academic qualities for selection. METHODS: This study compared the institutional impact of resident performance before and after SJT implementation. PGY-1 cohorts before (2014/2015: n = 234) and after (2017/2018: n = 237) SJT implementation were analyzed to compare professionalism, remediation, and associated costs. RESULTS: Control trainees had 12 professionalism concerns, 5 interventions, and 5 remediations, with 7 trainees receiving low non-MK ratings. Casper-assessed trainees had 3 professionalism concerns, 1 intervention, and 2 remediations, with 3 trainees receiving low non-MK ratings. Total cost savings post-SJT implementation were $119,754.72 CAD. CONCLUSION: Results support including Casper to measure professional attributes, in addition to existing metrics, as an indicator of in-program professionalism.

110 - Toward the Development and Construct Validity of the 7Ps Inventory of Self-Regulated Learning to Identify Student Academic Success

Daria Ellis
Ross University School of Medicine
Toward the Development and Construct Validity of the 7ps Inventory of Self-Regulated Learning to Identify Student Academic Success
Daria Ellis PhD, Priyadarshini Dattathreya MBBS, MD, and Maureen Hall MD, MEd, BSc

PURPOSE: Self-regulated learning has been identified as a key factor that determines academic success. We used the principles of self-regulated learning to develop the 7Ps Inventory of metacognitive strategies, which breaks down the task of ‘learning’ into strategies including planning and organizing, self-monitoring, and evaluating academic progress through self-reflection. The purpose of this study is to establish the construct validity of the 7Ps Inventory.

METHODS: We conducted an initial psychometric validation of the 7Ps Inventory with 500 medical students. Exploratory (EFA) and confirmatory factor analyses (CFA) were conducted to assess the latent structure of 7Ps Inventory. Findings highlighted areas where the 7Ps Inventory required revision. Following revision, we conducted another psychometric validation with an additional sample of 191 first year medical students. We used EFA and CFA to assess construct validity and we assessed reliability using Cronbach’s measure of internal consistency.
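The internal-consistency measure mentioned above is Cronbach’s α. A minimal sketch with hypothetical item responses (rows are respondents, columns are items; not the study’s data):

```python
from statistics import pvariance


def cronbach_alpha(responses):
    """Cronbach's alpha for a matrix of responses (one row per respondent,
    one column per scale item), using population variances throughout."""
    k = len(responses[0])                       # number of items
    items = list(zip(*responses))               # transpose to per-item columns
    item_var_sum = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)


# Hypothetical five-point responses from four students on a three-item scale
alpha = cronbach_alpha([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]])
```

When the items move together (respondents who score high on one item score high on the others), `alpha` approaches 1; uncorrelated items drive it toward 0.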

RESULTS: The final version of the 7Ps Inventory comprised 26 items (rated on a five-point response scale) that correspond to seven discrete components: Plan, Prepare, Participate, Process, Practice, Performance and Pause. The models for the final revised scales had good fit, and the internal reliability of these scales was marginal to excellent, with Cronbach’s α ranging from 0.52 to 0.86.

CONCLUSIONS: Our preliminary evidence suggests that the 7Ps Inventory, which helps students reflect on their learning and assess their use of specific learning strategies, is a psychometrically robust tool.

111 - AWARD NOMINEE - Comparing Online and In-Person Educational Workshops for Canadian Occupational Therapists: Exploring the Learning Experience

Sungha Kim
McGill University
The Do-Live-Well (DLW) framework is a health promotion approach that many occupational therapists (OTs) are interested in learning about. Although online education has become increasingly popular among health care professionals, studies of its effectiveness and learners’ experience have been limited in occupational therapy education. The objectives of this study were to compare the effectiveness of the online and in-person DLW workshop for Canadian OTs and to explore participant experiences in both types of workshops.

An explanatory sequential mixed-methods study design was used. In the quantitative phase, descriptive and inferential statistics were used to compare the effectiveness of the two educational methods at three points (pre, post, and 6-month follow-up). The primary outcome was knowledge change, and the secondary outcomes were changes in factors influencing the use of DLW in practice, satisfaction with the workshops, and the actual use of DLW. In the qualitative phase, an interpretative description methodology was used. Semi-structured one-on-one interviews conducted at follow-up were transcribed and analyzed using a six-step analysis process.

There were no statistically significant differences between groups in knowledge changes at the three time points (p = 0.57–0.99). There were statistically significant differences between groups in factors influencing DLW adoption (p < 0.001) and satisfaction with the workshop (p < 0.0005) at the post-test. Five themes were identified in relation to learners’ workshop experience: (1) synchronous in-person interaction, (2) flexibility in online learning, (3) ease of access to learning, (4) comfortable learning environment, and (5) relevance to practice and interest.

There were no statistically significant differences between the groups in most of the quantitative data, and participants identified each method’s benefits and challenges. The findings indicate online learning can be as effective as in-person learning. However, combining both methods’ positive aspects may improve learners’ educational experiences.

112 - Do graduating US students have the skills to perform the Association of American Medical Colleges (AAMC) Core Entrustable Professional Activities for entering residency (Core EPAs): Analysis of the national AAMC 2019 Graduation Questionnaire (GQ)

Douglas Grbic
Due to the COVID-19 pandemic, medical schools and specialty organizations responded to the disruption of medical students’ away-rotation opportunities by supplementing standard (i.e., in-person) away rotations (ipARs) with virtual away rotations (vARs). Using data from the Association of American Medical Colleges (AAMC) 2021 Graduation Questionnaire (GQ), the authors described the characteristics of graduating students who participated in ipARs and vARs in the early COVID era. Results showed significant differences by specialty and medical school type in ipARs and vARs that aligned with the Coalition for Physician Accountability recommendations regarding limitations on ipARs in the 2020-2021 academic year.

113 - Medical Students' Perceptions on Changing Osteopathic Manipulative Medicine Lab Practical Assessment Styles

Yen-chung Wang
Edward Via College of Osteopathic Medicine at Auburn
Medical Students’ Perceptions on Changing Osteopathic Manipulative Medicine Lab Practical Assessment Styles
With the development of Osteopathic Core Competencies and Core Entrustable Professional Activities (EPAs), there has been a shift towards competency-based curricula in osteopathic medical education. Medical students at the Edward Via College of Osteopathic Medicine-Auburn campus (VCOM-Auburn) utilize the traditional, randomized Osteopathic Manipulative Medicine (OMM) practical testing style; they were surveyed on their preferences regarding OMM practical assessment modalities to determine how receptive students would be to curricular change. The study evaluated the learning and assessment preferences of first- and second-year osteopathic medical students (OMS I & OMS II) at VCOM-Auburn.
Participants, regardless of sex, age, race, or academic achievement, were recruited using class announcement and emails. A 6-question anonymous and voluntary survey was conducted via iClicker to evaluate perception and readiness for change in OMM curriculum and assessment formats.
Out of the 308 enrolled first- and second-year students, 243 responded (78.9%). Study results found that OMS I and OMS II students selected similar choices for each question, and most students preferred the current traditional OMM practical testing style over competency-based testing. However, there was a significant difference in the proportion of student satisfaction and testing preference between the OMS I and OMS II classes; satisfaction with the current practical setup decreased from 82% among OMS I students to 68% among OMS II students, χ²(1) = 5.114, p = 0.024.
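The class-year comparison above is a chi-square test on a 2×2 table. A minimal sketch of the statistic; the cell counts below are hypothetical, since the abstract reports percentages rather than the underlying table:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. satisfied/unsatisfied by class year."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))


# Hypothetical counts: satisfied vs. unsatisfied for OMS I and OMS II students
stat = chi2_2x2(98, 22, 84, 39)
```

The resulting statistic would be compared against the chi-square distribution with one degree of freedom to obtain the p-value.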
The data suggest that OMS I and OMS II students at VCOM-Auburn are satisfied with the current traditional practical assessment, with a significant decrease in satisfaction as seniority increases. Previous experience with traditional assessment may be a factor. The preference to utilize competency-based learning as medical education progresses suggests that students and residents with more medical education experience acknowledge the importance of a more interactive and flexible curriculum. Therefore, this is a relevant consideration given the changes to the Single Accreditation System for Graduate Medical Education.

114 - Effects of organ-system courses of the first two years of medical school on performance of COMLEX-USA Level 2

Kevin McNeil
Rocky Vista University College of Osteopathic Medicine
The Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA) Level 2-Cognitive Evaluation (COMLEX-USA Level 2-CE) is a board examination that every medical student in an osteopathic medical school must pass to graduate. Students usually take it in the third or fourth year of medical school. A few researchers have investigated the relationship between performance in preclinical/clinical sciences and performance on COMLEX-USA Level 2-CE, but there is no study on the influence of each organ system course during the preclinical years on COMLEX Level 2-CE performance. We aimed to investigate the relationship between each organ system course and performance on COMLEX Level 2-CE. Our findings will help students focus on important basic sciences well before preparing for COMLEX-USA Level 2-CE. Academic data from students matriculated at Rocky Vista University College of Osteopathic Medicine from 2011 to 2017 were obtained. Data included pre-admission MCAT scores, course grades in the first two years of medical school, first-attempt COMLEX Level 1 scores, and COMLEX-USA Level 2-CE scores. Pearson correlation coefficients, multiple linear regression, and backward stepwise regression were run with Sigma Plot 14 software. The highest correlation with COMLEX-USA Level 2 is the score on COMLEX Level 1; the next highest are the third-semester Cardiovascular System II (CVII) and Renal System II (RENII) courses. Multiple linear regression shows that only the average score in all year-2 courses is a significant predictor of performance on COMLEX-USA Level 2; the average score in all year-1 courses is not significant. Backward stepwise regression shows that MCAT scores, third-semester CVII, RENII, Respiratory System (RSII), and Principles of Clinical Medicine III (PCMIII) courses, and the fourth-semester Neuroscience System II (NSII) course are significant predictors.
In conclusion, performances in third semester courses are the most important predictors of scores on COMLEX-USA Level 2-CE.

115 - Early Prediction of the Risk of Scoring Lower than 500 on the COMLEX 1: A Study of Pre-Matriculation MCAT scores and Pre-Clinical Grades at an American Osteopathic Medical School

Qing Zhong
Rocky Vista University
The Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA) Level 1 and Level 2-Cognitive Evaluation (CE) are board examinations that each medical student in an osteopathic medical school must pass as part of the licensure requirements. Numerical scores on Level 2-CE are also important in the competitive residency match.
Our goal is to find the earliest predictors for performances on COMLEX Level-1 and Level 2-CE.
Data from six cohorts of medical students matriculated at Rocky Vista University College of Osteopathic Medicine from 2012 to 2017 were collected; the independent variables were performances in each course from the first two years, and the dependent variables were the scores on COMLEX Level 1 or Level 2-CE. Predictive models were built with multiple linear regression and backward stepwise regression using SPSS. Predictive models for COMLEX Level 1 were based on performances in the first three semesters’ courses, and models for COMLEX Level 2-CE used performances in the first four semesters’ courses.
We found that the performances in the third-semester Renal System II and Cardiovascular System II courses had the highest correlations with the scores on COMLEX-USA Level 1 (r=0.7) and Level 2-CE (r=0.64-0.65), respectively. Performance in either the Renal System II course or the Cardiovascular System II course explains 49% of the variance in COMLEX-USA Level 1 scores and 41-42% of the variance in COMLEX-USA Level 2 scores. The predictive regression models confirmed that scores in Renal II and Cardiovascular II are significant predictors of performance on COMLEX Level 1 and 2-CE.
Students who perform poorly in third-semester Renal System II and Cardiovascular System II courses are at high risk of lower performance or failure on COMLEX Level 1 and 2-CE. The results may allow earlier interventions to improve students’ learning and performances.

116 - Influence of MCAT Retesting on Performance in Preclinical Medicine and on COMLEX-USA Level 1 and Level 2-CE

Anton Pham
Rocky Vista University, College of Osteopathic Medicine
The Medical College Admission Test (MCAT) has been utilized as one of the preadmission variables by medical school admissions committees in the selection of students since 1928 in the United States. Students are permitted to retake the MCAT up to three times in one calendar year and four times across two calendar years, with a maximum of seven attempts in their lifetime, in order to maximize their score. The MCAT score is used as a predictor of how well a student can perform in medical school, with extensive research investigating the relationship between MCAT scores and preclinical performance as well as medical board examinations, yet sparse research has focused on the effects of retaking the MCAT. Furthermore, there has been no exploration in the literature of the influence of retesters’ MCAT scores and the number of MCAT attempts on COMLEX Level 1 and Level 2-CE.
Our goal was to investigate whether MCAT retaking affects the performance of preclinical courses and board examinations.
Data from 904 students who matriculated at Rocky Vista University College of Osteopathic Medicine during 2012-2017 included MCAT scores on the first, second, third, and fourth attempts; preclinical course scores; and first-attempt scores on COMLEX Level 1 and Level 2-CE. One-way ANOVA, χ² tests, and Pearson correlation coefficients were performed.
The analysis revealed that, compared to non-retesters, retesters had significantly lower first-time and average MCAT scores, with the lowest seen in those who retook the test four times. In addition, COMLEX Level 1 scores of retesters who took the MCAT four times were significantly lower than those of non-retesters.
Increased attempts on the MCAT negatively influenced performance on COMLEX Level 1 and Level 2-CE.

117 - The Role of Examination Rankings in Medical Students' Experiences of the Impostor Phenomenon

Thomas Franchi
The University of Sheffield
The term ‘impostor phenomenon’, used to “designate an internal experience of intellectual phonies”, was first coined by Clance and Imes in 1978. Those who experience this have profound thoughts of fraudulence regarding their professional or intellectual activities. This perception of illegitimacy causes sufferers to credit their success to error, blocking high achievers from acknowledging their successes and hindering development in self-esteem.

This research aimed to uncover and explore the relationship between medical students and the impostor phenomenon. An ethics-approved action research project was completed at The University of Sheffield, using a pragmatic approach which integrated quantitative and qualitative data from a questionnaire, focus groups and interviews. The main quantitative measure was the Clance Impostor Phenomenon Scale (CIPS), which produces scores between 20-100.

There were 191 questionnaire responses, and 19 students joined a focus group or interview. With a mean CIPS score of 65.81 ± 13.72, the average student had “frequent” impostor experiences. “Clinically significant” CIPS scores were recorded in 65.4% of students, and on average females scored 9.15 points more than males (p

118 - Do School-Based USMLE Testing Centers Provide a Home-Field Advantage?

Pamela Ocallaghan
University of South Florida Morsani College of Medicine
The onset of the COVID-19 pandemic forced commercial testing centers to close worldwide, leaving medical learners unable to complete their USMLE Step exams. In response, six U.S. medical schools were selected to create secure spaces and train staff to administer medical licensure exams. This abstract describes examinee feedback from learners who took their USMLE Step exam at the University of South Florida (USF) regional testing center.
A four-question survey was sent to 175 examinees who tested at the USF regional testing site. The questions evaluated the following objectives: whether the testing center and staff met the examinees’ expectations, whether the option to test at the USF center lowered examinee stress, and whether examinees perceived that using the USF test center improved their performance. Response options included yes, no, or neutral. A chi-squared test of independence assessed differences in survey responses (p
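The chi-squared test of independence named above compares observed response counts to the counts expected if responses were independent of the question. A minimal sketch with hypothetical counts (the table below is illustrative, not the study's data):

```python
def chi_square_statistic(table):
    """Chi-squared statistic for an observed contingency table
    (rows = survey questions, columns = yes/no/neutral counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: two questions x (yes, no, neutral)
observed = [[40, 5, 10],
            [30, 10, 15]]
stat = chi_square_statistic(observed)
```

The statistic is then compared against the chi-squared distribution with (rows-1)×(columns-1) degrees of freedom to obtain the p-value.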

119 - Using Discriminant Analysis to Assess the Validity of a Predictive Regression Model for Identifying Students at Risk of Failing USMLE Step 1

Andrea Vallevand
Wake Forest School of Medicine
Andrea Vallevand, Brooke Shipley and Yenya Hu, Wake Forest School of Medicine, Winston Salem, NC 27101 USA.

The accurate detection of students at academic risk can permit the deployment of interventions, such as the deliberate use of question banks, to identify and patch knowledge gaps. A priori detection is particularly critical for licensing examinations, where failure may impact residency aspirations. The current research explores the validity of a regression model employed to predict the risk of Step 1 failure.

Preclinical Customized Assessment System examination and Step 1 scores were collected from three cohorts. Regression analysis was conducted and a roster of predicted Step 1 scores calculated for the subsequent cohort. The USMLE passing score and documented standard error of measurement (194 and ±6, respectively) informed an “at risk range”. Students identified “at risk” were offered additional academic coaching during the dedicated Step 1 study period. Discriminant analysis, employed retrospectively, investigated Step 1 pass/fail results and the accuracy of the regression analysis.
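The “at risk range” described above combines the passing score (194) with its standard error of measurement (±6). A sketch of one plausible reading of that flag, treating any predicted score at or below passing-plus-one-SEM as at risk; note the abstract's actual flagged scores ran up to 203, so the real band may have been slightly wider:

```python
PASSING_SCORE = 194   # USMLE Step 1 passing score cited in the abstract
SEM = 6               # standard error of measurement cited in the abstract

def at_risk(predicted_score, passing=PASSING_SCORE, sem=SEM):
    """Flag a predicted Step 1 score that falls at or below the
    passing score + one SEM (i.e. 200 or lower), an assumed reading
    of the abstract's 'at risk range'."""
    return predicted_score <= passing + sem
```

Students whose predicted scores clear the band by a comfortable margin would not be flagged, while borderline predictions trigger the offer of additional coaching.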

The regression analysis predicted “at risk” scores for 8/127 (6.3%) students, with predicted scores ranging between 186 and 203. Of these eight students, four took academic leave, three failed Step 1, and one passed.
Among the 117 students who took Step 1 during the designated examination cycle, five (4.3%) failed. The discriminant analysis accurately identified the five students who failed. Three of these students had initially been flagged as Step 1 risks by the regression analysis, with predicted scores ranging between 186 and 199.

Regression analysis provides our academic coaching program with a frame of reference for where along the pass/fail continuum students sit at the start of the dedicated Step 1 study period. Discriminant analysis is used retrospectively to validate these initial predictions, particularly when students do not engage in academic coaching.
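As a simplified single-predictor sketch of the retrospective classification step: with one predictor, equal class variances, and equal priors, linear discriminant analysis reduces to a midpoint-of-means threshold. The study's actual model likely used more predictors; the scores below are hypothetical:

```python
from statistics import mean

def lda_threshold(fail_scores, pass_scores):
    """For a single predictor with equal class variances and equal priors,
    linear discriminant analysis reduces to the midpoint of the two
    class means."""
    return (mean(fail_scores) + mean(pass_scores)) / 2

def classify(score, threshold):
    """Assign a pass/fail label based on which side of the boundary
    the predicted score falls."""
    return "fail" if score < threshold else "pass"

# Hypothetical predicted scores for students who failed / passed Step 1
fail_scores = [186, 190, 192, 195, 199]
pass_scores = [205, 210, 215, 220, 225]
t = lda_threshold(fail_scores, pass_scores)
```

Comparing these retrospective classifications against actual pass/fail outcomes gives the accuracy check the abstract describes.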

120 - Effect of COVID-19 Pandemic on COMLEX Level 1 Performance

Sarah McCarthy
Lake Erie College of Osteopathic Medicine
Due to the COVID-19 pandemic, many medical students postponed taking their COMLEX Level 1 board exams in 2020 because of quarantine-related delays and closures at testing sites. Typically, LECOM students take the COMLEX Level 1 examination in May or early June of their second year; in 2020, however, testing site closures and delays caused most students to delay their exam to dates ranging from May to August.

To determine whether exam site closures impacted Level 1 scores of LECOM Erie students, we analyzed Level 1 scores and examination dates in 2020 compared to 2019. We hypothesized that the delay in test dates for the NBOME COMLEX Level 1 exam did not result in a reduction in mean score.

Comparing 2019 to 2020, the percentages of students taking their Level 1 exam in May through July were 58% vs 24% (May), 30% vs 43% (June), and 8% vs 23% (July), respectively; the remaining students sat for the exam in August 2020 or later. We also compared normalized mean scores for May through July in 2019 and 2020: the overall average score on COMLEX Level 1 was not significantly different (1.05 +/- 0.01 vs 1.03 +/- 0.01), although the average monthly score was higher in 2019 than in 2020. Strong students tested throughout May-July in 2020. Overall, despite significant environmental challenges, student performance on the exam was not significantly different.
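The normalized mean scores reported above (values such as 1.05 and 1.03) suggest raw scores divided by a reference mean, so that 1.0 represents the reference average. That reading is an assumption; the numbers below are hypothetical:

```python
def normalize_scores(scores, reference_mean):
    """Divide raw COMLEX scores by a reference mean so that 1.0 means
    'at the reference average' -- an assumed reading of the abstract's
    normalized scores, not the study's documented method."""
    return [s / reference_mean for s in scores]

# Hypothetical raw Level 1 scores and a hypothetical reference mean
raw = [525, 540, 510]
norm = normalize_scores(raw, reference_mean=500)
```

Normalizing in this way lets cohorts that tested in different months (or years) be compared on a common scale.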

122 - Convergent Validity Of A Revised Teamwork Assessment Tool

Kathryn Kerdolff
Louisiana State University Health Sciences Center - New Orleans
Teams are a foundational component of healthcare delivery. Having a reliable, valid, efficient, and effective method of evaluating team function is essential to improving team performance. As part of an International Association of Medical Science Educators’ educational grant, we attempted to develop a quantitative measurement suite for assessing teamwork.
We employed a quasi-experimental pre-/post-intervention comparison design to assess inter-professional student teams participating in the Student Operating Room Team Training (SORTT) curriculum. Teams of nurse anesthesia, senior medical, and senior undergraduate nursing students completed a dual-scenario session with immediate after-action debriefing focusing on team-based competencies. Evaluation of team performance involved both quantitative measurement and observer-based evaluation using the Quick Teamwork Assessment Scales (Q-TAS), a 5-item, 3-subscale tool using a 6-point Likert-type scale (1=definitely no to 6=definitely yes). Changes in quantitative measurements from scenario 1 to scenario 2 were determined and compared to mean item changes in Q-TAS ratings.
In 2020, 49 students divided into 7 simulated OR teams were evaluated with the Q-TAS. Statistically significant improvements were present in all 3 subscale ratings. Because the sociometric badges included in the original protocol were unavailable, an alternative quantitative measurement suite was developed, incorporating dosimeters, radio-frequency identification badges, and video recordings. Data analysis has proven challenging due to limitations in how the instruments collect and record data, and the sheer volume of data has been difficult to manage. Work continues on overcoming these issues.
The SORTT program is effective in improving student team performance. The successful creation of a quantitative measurement suite for team function must take into account how data are collected and presented and the volume of data involved.
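The pre-/post-intervention comparison above boils down to per-team rating changes between the two scenarios. A minimal sketch with hypothetical subscale means for the 7 teams (these numbers are illustrative, not the study's data):

```python
from statistics import mean

def mean_item_change(pre_ratings, post_ratings):
    """Mean per-team change in a Q-TAS subscale rating from scenario 1
    (pre) to scenario 2 (post); positive values indicate improvement."""
    return mean(post - pre for pre, post in zip(pre_ratings, post_ratings))

# Hypothetical 6-point Likert subscale means for 7 simulated OR teams
scenario1 = [3.2, 3.8, 4.0, 3.5, 4.1, 3.6, 3.9]
scenario2 = [4.5, 4.6, 5.0, 4.4, 5.2, 4.8, 4.9]
delta = mean_item_change(scenario1, scenario2)
```

A paired test (e.g. a paired t-test or Wilcoxon signed-rank test) on these per-team differences would establish whether the improvement is statistically significant.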

123 - Affirming Institutional Assessment Equity with Differential Item Functioning (DIF)

Ryan Mutcheson
The purpose of this study was to revise the eligibility criteria for Phase-I Clinical Science Domain Letters of Distinction using a methodology such that they are available to all students who meet the defined criteria. More specifically, we sought to classify student performance distinction based on demonstrated competencies against established benchmarks, rather than on performance in comparison to peers.
Research on performance evaluation highlights the importance of using multiple measures to develop accurate and reliable profiles of student performance. The VTCSOM Clinical Science Domain assesses students in Phase I by creating compensatory composite domain scores consisting of multiple weighted measures of student performances, including Multiple Choice Assessments; Interview and Physical Exam Performance; Communication and Interpersonal Skills Performance; Written Presentation Skills; Clinical Reasoning Skills.
In this study, we reviewed ten standard-setting methods, comparing their advantages and disadvantages. We then selected the Hofstee method and conducted a standard-setting study with subject-matter experts. Prior to conducting the Hofstee standard-setting study, we aggregated and reviewed the cumulative distribution functions of three years of clinical science compensatory composite domain scores. Subject-matter experts used knowledge of these distributions and the prescribed compromises to determine Clinical Science Letters of Distinction thresholds. After establishing the thresholds, we applied the results of our standard-setting study to classify performance for VTCSOM Phase-I Clinical Science Domain Distinction.
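The Hofstee compromise mentioned above intersects the empirical cumulative distribution of scores with a line drawn between the experts' minimum/maximum acceptable cut scores and minimum/maximum acceptable failure (here, non-distinction) rates. A minimal sketch with hypothetical scores and judgments (all values below are illustrative, not the study's data):

```python
def hofstee_cut(scores, k_min, k_max, f_min, f_max):
    """Hofstee compromise: find the cut score where the empirical
    cumulative below-cut rate crosses the line joining
    (k_min, f_max) and (k_max, f_min)."""
    n = len(scores)

    def below_rate(cut):
        # Fraction of candidates scoring below this cut
        return sum(s < cut for s in scores) / n

    def line(cut):
        # SME compromise line through (k_min, f_max) and (k_max, f_min)
        return f_max - (f_max - f_min) * (cut - k_min) / (k_max - k_min)

    # Search integer cut scores within the acceptable range for the
    # point closest to the intersection
    return min(range(k_min, k_max + 1),
               key=lambda cut: abs(below_rate(cut) - line(cut)))

# Hypothetical composite domain scores (0-100 scale) and SME judgments
scores = [58, 62, 65, 68, 70, 72, 75, 78, 80, 85]
cut = hofstee_cut(scores, k_min=60, k_max=75, f_min=0.05, f_max=0.30)
```

Because the chosen cut must satisfy both the score range and the rate range the experts deem acceptable, the method compromises between purely criterion-referenced and purely norm-referenced standards.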
Standards are more credible if they produce appropriate classification information and are sensitive to candidate performance and content. Standards must also be statistically sound and identify the “true” standard. Given the need to establish several standards across the curriculum, standards should be relatively easy to implement and compute. The review of standard setting methods helped us establish which of the empirically-based standard-setting methods was appropriate to apply to the Clinical Science Domain composite scores.