Number
718
Name
Student Use of a Custom Chatbot for Physiology in Undergraduate Medical Education
Date & Time
Sunday, June 15, 2025, 5:30 PM - 7:00 PM
Location Name
Exhibition Hall C
Presentation Topic(s)
Technology and Innovation
Description

Purpose
Large language models (LLMs) like ChatGPT offer new educational tools, but concerns exist about intellectual property protection and students' lack of prompt engineering skills. This project aimed to characterize how preclinical osteopathic medical students interact with a custom LLM, designed with guardrails and specific prompts to augment medical physiology lectures.

Methods
Two ChatGPT-4-based tools were developed on a commercial platform to assist physiology learning. The first tool generated resources—simple multiple-choice questions (MCQs), board-style questions, clinical summaries, or concept maps—based on specific lectures or learning objectives. The second was a "Socratic" chatbot designed to engage students in exploring their knowledge rather than directly providing content. These tools were offered to OMS-I and OMS-II students from September to November 2024, with usage and interactions tracked.
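The guardrail approach described above can be illustrated with a minimal sketch. The prompt text, function name, and payload shape below are illustrative assumptions, not the authors' actual platform configuration, which was not published.

```python
# Hypothetical sketch of a "Socratic" chatbot guardrail implemented as a
# system prompt on a chat-completion payload. All names and prompt wording
# here are assumptions for illustration only.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor for medical physiology. "
    "Never provide answers directly. Respond to each student message "
    "with guiding questions that probe their reasoning, and redirect "
    "off-topic requests back to physiology."
)

def build_chat_request(student_message: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion payload that prepends the guardrail prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    }

payload = build_chat_request("Why does GFR fall in hypovolemia?")
print(payload["messages"][0]["role"])  # system
```

In this pattern, the system message carries the guardrail so every student turn is interpreted through the Socratic instruction rather than answered directly.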

Results
106 of 330 students (33%) signed up, and 56 (18%) actively used the chatbots, totaling 333 interactions. Active users averaged 5.9 prompts each (range 1-35, SD = 7). Usage of the resource-generating bot and the Socratic bot was similar (66 vs. 70 interactions). In the resource bot, requests were for simple MCQs (41%), board-style questions (38%), and summaries or concept maps (11%). Fifty-one percent of topics were related to physiology, and 30% of interactions included learning objectives. In the Socratic bot, 55% of interactions were deemed Socratic. The quality and relevance of the output were inconsistent.

Conclusions
Student use of custom chatbots for undergraduate medical physiology was limited, with many interactions falling outside the intended discipline and inquiry method. Further research is needed to understand why students used the tools the way they did, how the tools could be improved, and how they fit, or fail to fit, alongside students' existing use of other LLM tools.