Purpose
Basic sciences are an integral part of the preclinical phase of medical school, building a foundation of knowledge that helps students understand pathophysiology. The vast body of fundamental medical knowledge continues to grow exponentially, making it challenging for students to focus their studies. The advancement of Artificial Intelligence (AI) and AI tools such as ChatGPT has catalyzed a mode of learning that can help medical students tackle the basic medical sciences in digestible, focused doses, allowing for efficient, individualized study. Active learning modalities engage students with the material, which is associated with improved critical thinking, deeper analysis, and higher exam performance. The purpose of this study is to evaluate how the use of AI tools has affected foundational science exam performance.
Methods
A survey was distributed to the 2028 cohort at the Kirk Kerkorian School of Medicine (KKSOM) at UNLV to assess AI usage and evaluate its role in studying for preclinical foundational science exams. It gathered data on the AI tools used, their applications, and how effectively they supported exam preparation. Subjects were categorized as active or passive learners, and two-sample t-tests were used to compare the groups' mean exam scores.
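As an illustrative sketch only (not the study's actual analysis code), a two-sample t-test of this kind can be run in Python with SciPy; the score arrays below are hypothetical placeholders, not study data.

```python
# Minimal sketch of a two-sample t-test, assuming two hypothetical
# arrays of exam scores (illustrative values, not study data).
import numpy as np
from scipy import stats

active_scores = np.array([84.2, 79.5, 88.1, 91.0, 76.3, 85.7])   # AI used for active learning
passive_scores = np.array([83.0, 80.1, 87.4, 89.9, 77.8, 84.6])  # AI used for passive learning

# Welch's variant (equal_var=False) avoids assuming equal group variances;
# set equal_var=True for the classic Student's two-sample t-test.
t_stat, p_value = stats.ttest_ind(active_scores, passive_scores, equal_var=False)

print(f"Mean difference: {active_scores.mean() - passive_scores.mean():.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A p-value above the chosen significance threshold (commonly 0.05) would be consistent with the null finding reported below.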
Results
Preliminary data show no statistically significant difference in average foundational science exam scores between students who used AI for active learning and those who used AI for passive learning. The mean difference in overall performance between the two groups was 0.8%.
Conclusions
Preliminary data suggest that students who use AI for passive learning (e.g., summarizing text, clarifying concepts) perform as well as students who use AI for active learning (e.g., completing practice questions, engaging in active teaching dialogue). This indicates that, regardless of how students use AI in medical school, it is a viable study tool.