Background
Radiology is a critical yet challenging component of preclinical medical education, requiring students to synthesize complex anatomical, pathological, and imaging concepts. Large language models (LLMs), such as ChatGPT, have the potential to generate concise, high-yield study pearls tailored to radiology, bridging gaps left by traditional teaching methods. This study aims to assess the feasibility of using LLMs to create radiology-specific study aids and to evaluate their perceived educational value among preclinical students and faculty.
Objective
The study will explore whether LLM-generated radiology study pearls can effectively enhance preclinical learning. Feedback from medical students, preclinical educators, and clinical radiology faculty will be used to evaluate the clarity, accuracy, and relevance of the content.
Methods
Radiology topics relevant to preclinical education (e.g., imaging of pneumonia, fractures, and abdominal pathology) will be input into ChatGPT and other LLMs to generate study pearls. These pearls will undergo expert review by radiology faculty for accuracy. A survey will then be distributed to three groups: preclinical students, preclinical educators, and radiology faculty. Participants will rate the LLM-generated content on accuracy, educational utility, and clinical relevance using a Likert scale. Qualitative feedback will also be collected to refine future iterations of the pearls.
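To make the generation step concrete, the sketch below shows one way the topic-to-pearl workflow could be scripted. It is a minimal illustration under stated assumptions, not the study's actual pipeline: it assumes the OpenAI Python client, and the model name, prompt wording, and generate_study_pearl helper are placeholders.

```python
# Illustrative sketch only: batch-generating radiology study pearls from a
# topic list. Assumes the OpenAI Python client (openai>=1.0) and an
# OPENAI_API_KEY in the environment; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_study_pearl(topic: str) -> str:
    """Ask the model for one concise, high-yield study pearl on a topic."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": "You are a radiology educator writing for "
                           "preclinical medical students.",
            },
            {
                "role": "user",
                "content": f"Write one concise, high-yield study pearl on "
                           f"{topic}, suitable for a preclinical learner.",
            },
        ],
    )
    return response.choices[0].message.content


# Example topics drawn from the study description
for topic in ["imaging of pneumonia", "fractures", "abdominal pathology"]:
    print(generate_study_pearl(topic))
```

In practice, recording the prompt and model version alongside each generated pearl would let the faculty reviewers trace each output during the accuracy review described above.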
Expected Outcomes
The study hypothesizes that LLM-generated radiology study pearls will be perceived as clear and useful by preclinical students and faculty, and that radiology educators will identify specific areas for improvement. The findings are expected to guide the integration of AI-generated content into medical education.
Conclusions
While data collection is ongoing, this study aims to evaluate the potential of LLMs to generate radiology-focused study materials, offering insights into how AI tools can support medical education and bridge the gap between preclinical and clinical learning.