Poster Abstracts: Assessment
Looking for a different abstract category? Click the links below!
Posters
- (100s) Assessment (you are here!)
- (200s) Curriculum
- (300s) E-Learning
- (400s) Instructional Methods
- (500s) Other
- (600s) Student Support
- (700s) Technology and Innovation
- (800s) TBL/PBL
Presented By: Jenny Fortun, Florida International University Herbert Wertheim College of Medicine
Co-Authors: Christopher Day, Florida International University Herbert Wertheim College of Medicine
Aaron Gomez, Florida International University Herbert Wertheim College of Medicine
Juan Manuel Lozano, Florida International University Herbert Wertheim College of Medicine
Ligia Perez, Florida International University Herbert Wertheim College of Medicine
Justin Shaw, Florida International University Herbert Wertheim College of Medicine
Purpose
Diagnostic Reasoning (DxR) exams are progressive-disclosure, case-based assessments that aim to mimic clinical encounters. While studies have explored the relationship between examination formats and student performance, associations between preparatory strategies and performance remain unclear. Understanding the study approaches students employ for DxR and NBME examinations has the potential to inform future course development and curriculum design.
Methods
A retrospective cohort study was performed via secondary analysis of data obtained from three cohorts of second-year medical students through end-of-course surveys for three preclinical courses. The surveys explored students' preferences for a top-down or bottom-up approach when preparing for DxR and NBME examinations. A top-down approach was defined as starting with patient presentations and then learning about diseases. A bottom-up approach was defined as starting with diseases and then reviewing correlating patient presentations. Frequencies and 95% confidence intervals were calculated for students' preferred study approach by examination type. McNemar tests and odds ratios were used to assess differences overall and stratified by course and cohort. Finally, we compared grades achieved by students of all cohorts and courses on DxR and NBME exams according to their preferred study approach.
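For readers unfamiliar with this kind of paired-preference analysis, the sketch below shows one way it could be run in Python. It is illustrative only: the 2x2 counts are hypothetical, and this is not the authors' code.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired 2x2 table: rows = preferred approach for DxR exams,
# columns = preferred approach for NBME exams (top-down vs. bottom-up).
table = np.array([[300, 120],   # DxR top-down:  NBME top-down, NBME bottom-up
                  [ 56, 379]])  # DxR bottom-up: NBME top-down, NBME bottom-up

result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi-square = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Odds ratio from the discordant pairs (b/c) with a normal-approximation 95% CI.
b, c = table[0, 1], table[1, 0]
odds_ratio = b / c
se_log_or = np.sqrt(1 / b + 1 / c)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```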
Results
A total of 855 unique survey responses from 332 students were included. Students preferred the top-down over the bottom-up approach when preparing for DxR as compared with NBME examinations (OR: 2.14, 95% CI: 1.53 to 3.00, p<0.001). These findings were consistent in analyses stratified by course and cohort. Students scored similarly on DxR and NBME exams regardless of preferred study approach. We observed no difference in study approach across quartiles of grades obtained on DxR and NBME examinations.
Conclusion
Students preferred a top-down over bottom-up study approach when studying for DxR examinations in comparison to NBME exams. However, no association was found between students' study approaches and exam performance in either testing modality.
Presented By: Robert Treat, Medical College of Wisconsin
Co-Authors: Gauri Agarwal, University of Miami Miller School of Medicine
Purpose
Artificial intelligence (AI)-related technology deployment has begun in medical education, but it is often implemented in a fragmented and siloed fashion. Medical faculty and students may prioritize the importance of these implementations differently, and this should be analyzed to ensure a consistent set of outcomes. AI technologies such as natural language processing (NLP) using generative pre-trained transformers (GPT) can assist in the construction of self-reported surveys, while the resulting data can be analyzed with machine learning approaches such as factor and regression analysis. The study's goal is to validate GPT-augmented faculty and student surveys on the importance of AI-related technologies in medical education.
Methods
AI-based QuillBot abstracts and extracts key findings from scholarly literature and two medical education conference panel sessions. Using this software, we first explored potential construct variables for medical student and faculty self-report questionnaires. GPT 3.5 generated individual survey items from the QuillBot summaries, which were content validated by the authors to create a 20-item survey (scale: 1 = not important to 5 = extremely important) plus one overall item (10-point scale). Data were analyzed with SPSS 28.0 using t-tests, Cohen's d effect size, linear regression, and factor analysis (with varimax rotation). The study is IRB approved.
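As a point of reference, the sketch below shows roughly how the reported factor and regression analyses could be reproduced in Python. This is an assumption-laden illustration, not the authors' SPSS workflow; it relies on the third-party factor_analyzer package and assumes a DataFrame `items` holding the 20 survey items and a Series `overall` holding the overall-importance ratings.

```python
import pandas as pd
import statsmodels.api as sm
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

def run_validation(items: pd.DataFrame, overall: pd.Series) -> None:
    # Sampling adequacy and sphericity (analogous to the reported KMO and Bartlett's test).
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)
    print(f"KMO = {kmo_overall:.2f}, Bartlett chi-square = {chi_square:.0f} (p = {p_value:.3f})")

    # Exploratory factor analysis with varimax rotation (six-factor solution).
    fa = FactorAnalyzer(n_factors=6, rotation="varimax")
    fa.fit(items)
    print(fa.loadings_.round(2))

    # Linear regression of overall importance on the factor scores.
    scores = pd.DataFrame(fa.transform(items), columns=[f"F{i + 1}" for i in range(6)])
    model = sm.OLS(overall.values, sm.add_constant(scores)).fit()
    print(model.summary())
```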
Results
Forty participants completed the survey, with faculty reporting significantly higher overall importance scores (7.8 ± 1.6) than students (7.3 ± 1.6) (p = 0.043, Cohen's d = 0.33). Factor analysis (KMO = 0.70; Bartlett's test of sphericity: chi-square = 429, p = 0.001) yielded a six-factor solution (student learning, communication, medical data, patient issues, clinical encounter, education resources) from the 20 items (alpha = 0.90), with communication (beta = 0.51, p = 0.001) and medical data (beta = 0.49, p = 0.001) emerging as significant predictors of overall importance in the regression analysis (R² = 0.70, p = 0.001).
Conclusion
A reliable AI in medical education survey was created from GPT and QuillBot summaries of peer-reviewed conference panel sessions. Faculty reported a higher level of AI importance than students. The survey was validated with an internal structure of six factors, two of which predicted the overall importance of AI in medical education.
Presented By: Peyton Sakelaris, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Co-Authors: Miriam Borvick, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Kencie Ely, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Gemma Lagasca, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Daniel Levine, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Dale Netski, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Edward Simanton, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Jennifer Young, Kirk Kerkorian School of Medicine at University of Nevada, Las Vegas
Purpose
Historically, the USMLE Step 1 examination has served as a metric of residency candidate competitiveness and a tool for program directors to identify exceptional candidates. However, the emphasis on Step 1 scores required students to sacrifice a well-rounded medical school experience to excel in this single assessment. To address this, Step 1 transitioned to a pass/fail grading system in the hope that residency programs would reprioritize what is important on applications. Initial observations suggest this transition has had conflicting consequences. In light of this unpredictability, this study will explore the impact of the pass/fail format on medical students at the Kirk Kerkorian School of Medicine at UNLV (KKSOM).
Methods
The student data from KKSOM to be assessed include volunteering hours, research participation, and self-reported stress levels. Analysis of variance (ANOVA) will be performed to identify any significant differences between cohorts. A correlation analysis will be used to examine the relationship between student stress levels and the time allocated to academic pursuits. To investigate the linearity of this relationship, a regression analysis will be performed to chart the data points and identify any trends.
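A minimal sketch of how these planned analyses might look in Python is given below; the column names (`cohort`, `volunteer_hours`, `stress`, `academic_hours`) are hypothetical placeholders, and this is not the study's actual code.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def planned_analyses(df: pd.DataFrame) -> None:
    # One-way ANOVA: volunteering hours compared across cohorts.
    groups = [g["volunteer_hours"].values for _, g in df.groupby("cohort")]
    f_stat, p_anova = stats.f_oneway(*groups)
    print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.3f}")

    # Correlation between stress and time allocated to academic pursuits.
    r, p_corr = stats.pearsonr(df["stress"], df["academic_hours"])
    print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")

    # Simple linear regression to chart the trend between the two variables.
    model = smf.ols("stress ~ academic_hours", data=df).fit()
    print(model.params)
```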
Results
We anticipate finding an increase in volunteering and research. At the same time, we expect that the stress reduction originally anticipated will instead shift to Step 2, compounded by uncertainty about what residency program directors will now look for on applications.
Conclusion
It is expected that the trend toward increased volunteering and research will continue among future cohorts. It is also possible that perceived stress levels may stabilize as residency programs refine the extracurricular items they deem most advantageous for future applicants.
Presented By: Anamika Sengupta, University of Illinois College of Medicine
Purpose
Twenty-first-century medical education is rapidly implementing integrated curricula to emphasize interdisciplinary connections. By virtue of integrating basic and clinical sciences (within lectures and small group sessions), these curricula have spawned the development of integrated, interdisciplinary assessments. Existing literature suggests these assessments help medical students develop the critical thinking and problem-solving skills essential for preparedness in the rapidly evolving world of modern medicine. This study aims to discuss the steps involved in creating such assessments through interdisciplinary faculty collaboration and to evaluate the long-term impact of these assessments on student preparedness.
Methods
This section describes the efforts made at a Texas-based osteopathic medical school to integrate basic and clinical sciences across all evaluations (formative, cognitive, and small group activity-related exams) of its integrated, student-centered curriculum. It details the processes involved in developing common vignettes that accurately assess closely related scientific and clinical objectives, with emphasis on their interdisciplinary connections, through focused and concise examinations.
Results
This section describes students' initial challenges in responding to the integrated assessment items and how, with time, training, and continued collaboration (alongside content faculty and academic advisors), their performance improved significantly. This progressive achievement, in addition to being a testament to their preparedness for board exams, also demonstrated their holistic development as versatile physicians of the future.
Conclusion
Integrated assessment in a medical curriculum seeks to synthesize information from diverse disciplines of medicine, different stakeholders (patients, physicians, insurance and pharmaceutical companies), and data sources to offer a platform for a more holistic approach to the human system and to the treatment of associated diseases. Within those boundaries, medical students would predictably grow more versatile, with a greater appreciation of the human body, while overcoming interdisciplinary barriers and developing the critical thinking and problem-solving skills expected of 21st-century physicians.
Presented By: Piper Cramer, Western Michigan University Homer Stryker M.D. School of Medicine
Co-Authors: Peter Vollbrecht, Western Michigan University Homer Stryker M.D. School of Medicine
Purpose
Communication is recognized as a critical skill for physicians, yet it remains challenging to teach and evaluate. One difficulty is evaluating intervention effectiveness. Currently, the field largely relies on self-reported confidence as a measure of intervention success. When objective evaluation tools for examining communication skills in medical students are used, they are often built for a specific event, making it difficult to measure progress longitudinally. Here, we introduce a rubric intentionally developed for use in a range of situations, including outreach, medical education, and patient encounters.
Methods
Researchers examined a number of communication rubrics across a range of academic fields and settings, including OSCEs, undergraduate science and communication courses, and foundational medical education. Using existing tools as inspiration, we created a rubric applicable to a variety of settings and therefore capable of longitudinal monitoring of student progression. The validation process for this rubric is ongoing and is meant to ensure that collected data and evaluator interpretation of the rubric are consistent. To do this, three short clips of different foundational sciences lectures are evaluated by a set of four individuals. All rubric scores are then compared and discussed during a focus session. Further validation is ongoing as the rubric is used to evaluate student communication skills during outreach events. This process uses a 360-degree evaluation, with the rubric completed by the student, by the teacher, and by a faculty member.
Results and Conclusions
Following successful rubric validation, we hope this tool can be used to objectively evaluate the effectiveness of communication skills interventions. While developed with medical students in mind, we hope that this rubric will be utilized more broadly to effectively evaluate communication skills. This tool will provide a less subjective measure of intervention effectiveness, moving us beyond participants' self-reported communication confidence.
Presented By: William Alley, Wake Forest School of Medicine
Co-Authors: Janet Tooze, Wake Forest School of Medicine
Andrea Vallevand, Wake Forest School of Medicine
Catherine Wares, Atrium Health Carolinas Medical Center
Purpose
Unstructured oral examinations (OEs) have been criticized for lack of reliability and potential for bias, but structured OEs may be valuable tools for evaluating clinical skills and reasoning while mitigating subjectivity and unintended bias toward underrepresented minority (URM) students. We aim to retrospectively investigate Emergency Medicine (EM) OE scores between students self-categorized by gender and by race as URM, Asian Pacific Islander (API), or White for evidence of unintended bias.
Methods
Multiple clinical cases for two common chief complaints were developed by board-certified EM faculty. Each case is highly structured and standardized, including the final diagnosis, time allotted, and critical components. Faculty assessors were oriented to the cases and assessment tool. Every EM clerkship student is required to take the OE, which consists of two 15-minute cases. Scores were converted to a 0.0-4.0 scale. OE data were analyzed using factorial analysis of covariance and Levene's test (independent variables: race and gender; covariates: Step 1 score and rotation number).
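For illustration, the factorial analysis of covariance and Levene's test described above could be run in Python roughly as follows. This is a hedged sketch, not the authors' code, and the DataFrame columns (`oe_score`, `race`, `gender`, `step1`, `rotation`) are assumed names.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def analyze_cohort(df: pd.DataFrame) -> None:
    # Levene's test for homogeneity of variance across race-by-gender cells.
    groups = [g["oe_score"].values for _, g in df.groupby(["race", "gender"])]
    levene_stat, levene_p = stats.levene(*groups)
    print(f"Levene W = {levene_stat:.2f}, p = {levene_p:.3f}")

    # Factorial ANCOVA: race x gender with Step 1 score and rotation number as covariates.
    model = smf.ols("oe_score ~ C(race) * C(gender) + step1 + rotation", data=df).fit()
    print(sm.stats.anova_lm(model, typ=3))
```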
Results
Data from seven cohorts were analyzed (n = 806; range 96 to 135 per cohort). Self-reported race and gender demographics were: API = 164 (20.3%); URM = 131 (16.3%); White = 511 (63.4%); male = 396 (49%); female = 411 (51%). Step 1 performance was significantly related to OE scores in six of seven analyses (all p ≤ .03) and EM rotation timing in two of seven analyses (p = .002 and .035). There were no statistically significant differences for the main effect of race. The main effect of gender was significant for one cohort (p ≤ .011; estimated means: female 2.88 and male 2.72).
Conclusion
Integrating structured, standardized oral examinations can provide a valuable method for assessing EM clerkship students without introducing unintended bias.
Presented By: Stephen Peterson, Touro University Nevada College of Osteopathic Medicine
Co-Authors: Erika Assoun, Touro University Nevada College of Osteopathic Medicine
Casaundra Krob, Touro University Nevada College of Osteopathic Medicine
Terrence Miller, Touro University Nevada College of Osteopathic Medicine
Jennifer Obodai, Touro University Nevada College of Osteopathic Medicine
Anne Poliquin, Touro University Nevada College of Osteopathic Medicine
Purpose
Touro University Nevada (TUN) provides a COMBank question bank to first-year osteopathic medical students (OMS1s). We previously initiated a project that succeeded in increasing qbank usage among OMS1s and have now turned to our next objective: how can we use qbank data to advise students on test-taking skills? This abstract describes the project recently initiated to address this question.
Methods
Quizzes have been created in the qbank for first-year systems-based courses. Quizzes are not graded or required, but students are encouraged to complete them. The appropriate quiz is released to the cohort at least one week prior to the upcoming exam. Students are advised to complete the quiz and a brief learning questionnaire days prior to the exam and share results with a TUN learning specialist. Quiz reports display time spent per question, initial and final answer choice selected, and whether choices were correct or incorrect. Question text and explanations are also available. We are holding sessions with learning specialists to review the information provided and discuss how this can be used to counsel students. Learning specialists review each quiz and questionnaire, documenting observations, advice and longitudinal performance. We will subsequently survey students to gauge their satisfaction with the process.
Results
This project was initiated in fall 2023 and is still in the early stages. The process shows promise in assisting students in developing improved test-taking skills so that assessment scores better reflect their topic knowledge.
Conclusions
An early lesson learned is that the students most likely to benefit are those typically scoring in the "C" range or better. Students who struggle to pass exams ordinarily have significant knowledge gaps that overshadow flawed test-taking skills. We are using the data we collect to continually refine and enhance the advice we tailor to individual students.
Presented By: Rebecca Sullivan, Lewis Katz School of Medicine at Temple University
Co-Authors: Erin Bruce, University of Florida College of Medicine
Marisol Lopez, Boston University Chobanian & Avedisian School of Medicine
Purpose
When writing multiple-choice questions (MCQs) for assessments, it is essential to follow best practices to remove potential inequities and/or achievement gaps. Students from groups underrepresented in medicine (URiM) and/or English language learners (ELLs) are particularly affected by flawed items. Studies show that simplifying language while maintaining content difficulty increases the linguistic accessibility of test items and is helpful to ELLs. Cognitive load theory can be used as a conceptual framework for examining the impact of item flaws on student academic performance: the more elements a student must process in working memory to answer a question successfully (extraneous cognitive load), the less working memory capacity remains available to demonstrate content knowledge (intrinsic cognitive load). We propose that a universal rubric will address flawed MCQ items in an objective manner across institutions.
Methods
Based on previous literature and the "Item-Writing Guide" from the National Board of Medical Examiners (NBME®), we developed an evaluation instrument (rubric) to score the presence of technical flaws that add irrelevant difficulty and/or provide an advantage to test-wise examinees.
Results
We have validated the rubric using 26 MCQs obtained from physiology exams at three different institutions spanning a variety of programs, including Dental, Medical, and Physician Assistant programs. After analyzing the questions, we reconvened as a group, compared our individual ratings, clarified any discrepancies, and modified the rubric accordingly. After scoring inter-rater reliability, this rubric will be used to analyze the full question banks for physiology exams at each institution.
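One common way to score inter-rater reliability for categorical rubric ratings is Fleiss' kappa; the sketch below is an illustrative assumption (three raters, hypothetical random ratings), not the authors' analysis.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: rows = 26 MCQs, columns = 3 raters,
# values = rubric flaw-category codes (0, 1, or 2).
rng = np.random.default_rng(0)
ratings = rng.integers(0, 3, size=(26, 3))

# Convert rater-by-item codes to counts of raters per category for each item.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.2f}")
```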
Conclusion
The data obtained by assessing our MCQs will indicate the prevalence of item flaws in Physiology exams. We will then investigate how identified flaws affect item performance. This rubric can be used by faculty in diverse institutions to identify problematic MCQs and modify them by removing technical flaws, ensuring equitable assessments.
Presented By: Surapaneni Krishna Mohan, Panimalar Medical College Hospital & Research Institute
Introduction
In the ever-evolving realm of medical education, artificial intelligence (AI) stands as a ground-breaking advancement creating a new era of innovation and transformation. Recognising the limitations of conventional assessment strategies including issues of reliability, bias, and scalability, this study aims to illuminate how AI can potentially mitigate these challenges and contribute to the evolution of more robust and equitable evaluation methodologies in medical education.
Methods
In this qualitative study, participants were selected from different medical institutions across India using purposive sampling, ensuring a broad representation of perspectives. Semi-structured interviews and focus group discussions were conducted to comprehensively capture the multifaceted views of medical educators on the introduction of AI for assessment in medical education. All responses were anonymous and audio-recorded, and the recordings were later transcribed. Thematic and content analysis were employed to extract common patterns and variations in responses for deeper analysis.
Results
The findings of this study reveal diverse perspectives among medical educators regarding the integration of AI into assessment methodologies. Personalisation of assessments, diverse assessment methods, automation of assessment, integration of speech recognition for oral assessments, and the use of gamification in assessment emerged as key themes. Participants also provided insights into the perceived validity, reliability, quality, and time-saving aspects associated with AI-driven assessment strategies. Concerns were also raised regarding the ethical implications of AI in assessments, potential biases in algorithms, the need for transparency in decision-making processes, and the preservation of humanistic aspects in medical education.
Conclusion
This study underscores the transformative potential of AI in reshaping assessment strategies and contributes to the ongoing discussion surrounding the integration of AI in medical education. Also, as technological advancements are increasingly embraced, it is crucial to navigate the ethical considerations and ensure that these innovations align with the core values of fairness, transparency, and excellence in medical education.
Presented By: Andrew Thompson, University of Cincinnati College of Medicine
Purpose
Student learning approaches have been a topic of interest in educational research since the 1970s. Within this framework, students are typically classified as taking a surface approach (SA) or deep approach (DA). Educators often encourage a DA since it is believed this results in better learning outcomes. However, results in the literature are mixed and it is unclear what role learning approach has in shaping study habits or examination performance. The purpose of this study is to investigate this topic using several years of data from an undergraduate human anatomy course.
Methods
Data from two cohorts of students (N=70) were used in this study. Learning approach was measured using the revised two-factor Study Process Questionnaire (R-SPQ-2F). Students also completed a custom survey designed to collect data on the study habits they utilized leading up to each of the three course examinations. Lecture examination data were the focus of this study; these examinations consisted primarily of multiple-choice questions that were classified according to Bloom's taxonomy.
Results
Students who took a DA to learning not only spent more time studying but also tended to start studying earlier than students who favored an SA. While there was a weak trend for DA scores to correlate positively with examination performance and SA scores to correlate negatively, the results were not statistically significant. Improvement in course performance was most evident among students who altered their study habits to spend more time studying, as these individuals showed significantly greater examination improvement compared to those who studied less, regardless of their learning approach score.
Conclusions
While learning approach has been utilized extensively in educational research, it was not a strong predictor of examination performance in this study sample. Instead, time spent studying and adaptability in study habits were more important for success.
Presented By: Joe Blumer, Medical University of South Carolina
Co-Authors: Christopher Campbell, Medical University of South Carolina, College of Medicine
Michele Knoll Watson, Medical University of South Carolina, College of Medicine
Casey O'Neill, Medical University of South Carolina
Pranav Patel, Medical University of South Carolina, College of Medicine
Carter Smith, Medical University of South Carolina, College of Medicine
Purpose
Anki is a widely used retrieval practice and spaced repetition study tool among medical students. Previous studies have reported that up to 70% of medical students use Anki to aid in their medical school studies. The aim of this study is to investigate the utilization of the Anki flashcard application as a learning tool among medical students and its correlation with academic performance. This study explores the subtleties of Anki usage, including the frequency of reviews, types of cards used, learning strategies employed, and the impact on long-term knowledge retention as an initial approach to explore its impact on preclerkship medical education.
Methods
An anonymous survey was used to collect information on student demographics, self-reported study habits, Anki usage, self-reported levels of depression, burnout, and test anxiety. Self-reported dependent variables such as MCAT scores were also collected. Dependent variables were compared to the independent variables of pre-clerkship exam scores, quartile ranking, Step 1 pass rate, and scores on the NBME Comprehensive Basic Science Exam and Comprehensive Basic Science Self-Assessments for USMLE Step 1. Data analysis was conducted to determine the correlation between the dependent and independent variables.
Results
Our data indicate that: 1) Anki use alone is not significantly associated with increased academic performance in the preclerkship medical school curriculum; 2) prior use of Anki before medical school is not associated with subsequent medical school performance; and 3) specific patterns of Anki usage, such as increased number of cards reviewed per day and a greater percentage of self-made cards, predicted increased medical school academic performance.
Conclusions
Contrary to previous reports, this study does not conclusively establish Anki use alone as markedly superior to other study methods within the medical school curriculum, but it does demonstrate that further analysis of how Anki is used may reveal usage patterns associated with significantly improved academic performance.
Presented By: Anna Blenda, University of South Carolina School of Medicine Greenville
Co-Authors: Renee Chosed, University of South Carolina School of Medicine Greenville
Godwin Dogbey, Campbell University School of Osteopathic Medicine
Khalil Eldeeb, Campbell University School of Osteopathic Medicine
Russ Kolarik, Prisma Health/University of South Carolina School of Medicine Greenville
Purpose
With Step 1 exam scoring changed to pass/fail, expectations have heightened for medical students to distinguish themselves before their residency match. Increasingly, additional student characteristics, including research and scholarly activity, work experiences, and volunteering, are perceived as crucial for enhancing match likelihood. This study elucidated trends in University of South Carolina School of Medicine (USC SOM) Greenville medical student characteristics by comparing institution-specific data with NRMP national data from 2017-2023, which precedes the implementation of the Step 1 Pass/Fail system and can be used as a baseline for its further evaluation.
Methods
De-identified national and institution-specific NRMP data for matched students from 2017-2023 were analyzed and compared. Emerging trends in medical student characteristics related to the residency match, focusing on research/scholarly activity, work experiences, and volunteering, were analyzed.
Results
The NRMP data indicated a national trend toward increased research activities, work, and volunteer experiences, which was similarly reflected at USC SOM Greenville. Mean research experiences and publications were below national averages for all specialties combined, but substantial variation existed across individual residencies. Research experiences were above the national average for general surgery, neurology, and internal medicine (pediatrics), while research publications were lower except for diagnostic radiology. At the same time, USC SOM Greenville medical students surpassed the national average in work experiences for all residencies combined and across various individual residencies. Volunteer experiences aligned more closely with the national average for all residencies combined but exhibited wide variability at the individual residency level.
Conclusion
Targeted strategies to enhance research engagement among medical students, acknowledging specialty-specific variations, are needed at USC SOM Greenville. As medical education adjusts to the post-Step 1 pass/fail era, refining interventions based on ongoing research is critical to ensure students' competitiveness in securing desirable residency placements.
Presented By: Danielle Dickey, Texas A&M University School of Medicine
Co-Authors: Gerilyn Boyle, Texas A&M University School of Medicine
Darby Dwyer, Texas A&M University School of Medicine
Jody Ping, Texas A&M University School of Medicine
Uma Reddy, Texas A&M University School of Medicine
Halil Sari, Texas A&M University School of Medicine
Purpose
Clerkship narratives are written for inclusion in students' final MSPEs. More emphasis is being placed on MSPEs as program directors look for ways to identify candidate fit. Gender-biased narratives could influence a student's ability to match. In the first part of our study, presented as a poster at IAMSE 2023, we found that the gender bias calculator used for recommendation letters did not accurately identify gender bias in MSPEs at our institution. We hope to identify words that are used in gender-biased ways in MSPEs in order to create an effective way to identify and correct for gender bias in our clerkship narratives.
Methods
We reviewed one year's worth of student narratives for the core clerkship rotations, which include Family Medicine, Internal Medicine, Surgery, Psychiatry, Pediatrics, and Obstetrics and Gynecology. Narratives were de-identified and run through online word counters, producing a full list of words in each narrative. The overall word count and the frequency of each word will be used to identify words that are commonly used to describe male versus female students. These data will be used to build a gender bias calculator specifically for medical education.
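As a rough illustration of this kind of word-frequency analysis, the sketch below counts candidate descriptors in a de-identified narrative. The word lists are hypothetical placeholders, not the validated lexicon the study aims to build, and this is not the study's actual pipeline.

```python
import re
from collections import Counter

# Hypothetical descriptor lists for illustration only.
AGENTIC = {"leader", "confident", "independent", "exceptional"}
COMMUNAL = {"compassionate", "helpful", "pleasant", "hardworking"}

def count_descriptors(narrative: str) -> dict:
    # Lowercase and tokenize the narrative, then tally candidate descriptors.
    words = re.findall(r"[a-z']+", narrative.lower())
    counts = Counter(words)
    return {
        "total_words": len(words),
        "agentic": sum(counts[w] for w in AGENTIC),
        "communal": sum(counts[w] for w in COMMUNAL),
    }

print(count_descriptors("She is a compassionate and hardworking student."))
```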
Results
Results are pending completion of the study, but the total number of words has been tallied for each narrative, and overall narrative length may itself show bias. The number and type of words are still being analyzed.
Conclusion
This study will help identify the specific places and language where bias appears in our narratives so they can be revised to better describe students without gender bias. These data may also be used to create tailored faculty development and narrative templates.
Presented By: Esther Dale, University of Minnesota Medical School
Purpose
Test construction is essential for evaluating competency in medical education, yet instructors often create ad hoc exams due to constraints on time, resources, and expertise. This can lead to exams with questionable content validity, impacting the accurate assessment of curriculum robustness, learner competence, and progression decisions. Medical students advocating for fair exams call for tests that are constructively aligned with learning outcomes and that demonstrate content validity. Constructive alignment ensures that teaching and assessment methods correspond with intended learning outcomes, while content validity refers to the extent to which an exam represents the subject matter. A fair exam requires clear instructional objectives, cognitive complexity based on Bloom's Taxonomy, and appropriate weighting. Tests that effectively represent and assess the taught content have higher content validity. Since medical courses vary widely, faculty development in medical schools does not address specific content or content sampling. This presentation involves five physiologists and two assessment specialists who will discuss test blueprints within the physiology domain, helping the audience understand their structure and role in enhancing content validity.
Methods
The presentation will showcase test blueprints from five physiology professors, highlighting their similarities and differences. Each blueprint will outline learning objectives, cognitive complexity levels, and weightings. Participants will explore various dimensions to assess concept coverage within learning objectives, emphasizing dimensions crucial for medical education and integrating basic science with clinical content.
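To make the idea concrete, a blueprint can be represented as a simple table mapping learning objectives to cognitive levels and weightings; the sketch below is a hypothetical form chosen for illustration, not any presenter's actual blueprint.

```python
# Hypothetical blueprint rows (objectives, Bloom's levels, and weights are invented
# for illustration); weights are checked to sum to 100% and converted to item counts.
blueprint = [
    {"objective": "Explain renal handling of sodium",              "bloom": "Understand", "weight": 15},
    {"objective": "Predict acid-base changes in vomiting",         "bloom": "Apply",      "weight": 25},
    {"objective": "Interpret a pressure-volume loop",              "bloom": "Analyze",    "weight": 30},
    {"objective": "Relate the alveolar gas equation to hypoxemia", "bloom": "Apply",      "weight": 30},
]

total = sum(row["weight"] for row in blueprint)
assert total == 100, f"Blueprint weights must sum to 100%, got {total}"

exam_length = 40  # assumed number of items on the exam
for row in blueprint:
    n_items = round(row["weight"] / 100 * exam_length)
    print(f"{row['objective']} ({row['bloom']}): ~{n_items} items")
```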
Results
We present samples from five medical schools, demonstrating exam blueprinting to overcome common challenges. These case studies will illustrate different blueprinting approaches, dimensions used, improvements in content validity, challenges faced, solutions implemented, and innovative practices resulting from assessment blueprinting.
Conclusions
Crafting a test blueprint along selected dimensions is key to assessing learner competence in medical school courses, ensuring that tests accurately reflect basic science and clinical knowledge.