Name
Adapting Faculty Needs into AI Chatbots
Authors

Justin Student, University of British Columbia
Duncan Hamilton, University of British Columbia

Date & Time
Wednesday, October 22, 2025, 1:30 PM - 1:44 PM
Presentation Category
AI & Technology
Description

Purpose
To demonstrate a practical framework for turning faculty‑defined learning objectives into a pedagogically sound chatbot that meets learner preferences, and for effectively leveraging AI in partnership with a technical team.

Methods
Three instructional faculty were interviewed to specify: (1) desired communication skills, (2) diagnostic reasoning checkpoints, and (3) feedback criteria. Their objectives were distilled into (a) a grading rubric, (b) behaviour guardrails, and (c) system prompts for two agents (patient and preceptor). Over 170 conversation logs were reviewed, followed by semi-structured qualitative interviews with six students.
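For readers who want a concrete picture of how such faculty-derived artifacts could be wired together, the sketch below is a hypothetical illustration only, not the authors' implementation: the rubric items, guardrail text, agent names, and persona details are invented placeholders showing one way the outputs described above (rubric, guardrails, two system prompts) might be encoded as structured data.

```python
# Hypothetical sketch: encoding faculty-derived artifacts (rubric, guardrails,
# agent roles) as structured system prompts. All content strings are invented
# placeholders, not the authors' actual rubric or guardrails.
from dataclasses import dataclass, field

# (a) Example rubric items a faculty consultation might yield.
RUBRIC = [
    "Opens the encounter with an open-ended question",
    "Elicits the patient's ideas, concerns, and expectations",
    "Summarizes findings before proposing a differential diagnosis",
]

# (b) Example behaviour guardrails keeping the patient agent in role.
PATIENT_GUARDRAILS = [
    "Stay in character as the patient; never reveal the diagnosis.",
    "Use lay language unless the learner asks for clarification.",
    "Volunteer new symptoms only when asked a relevant question.",
]

@dataclass
class Agent:
    """A chatbot role defined by a persona plus behaviour guardrails."""
    name: str
    persona: str
    guardrails: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        """Assemble the (c) system prompt passed to whatever LLM backend is used."""
        rules = "\n".join(f"- {g}" for g in self.guardrails)
        return f"You are the {self.name}.\n{self.persona}\nRules:\n{rules}"

patient = Agent(
    name="patient",
    persona="58-year-old presenting with two days of chest discomfort.",
    guardrails=PATIENT_GUARDRAILS,
)

preceptor = Agent(
    name="preceptor",
    persona="Clinical preceptor who gives rubric-based feedback after the encounter.",
    guardrails=[f"Assess the learner against: {item}" for item in RUBRIC],
)

if __name__ == "__main__":
    print(patient.system_prompt())
    print(preceptor.system_prompt())
```

One advantage of a data-driven layout like this, under these assumptions, is that faculty can revise guardrails and rubric items without touching code, which supports the rapid prompt iteration described in the Results.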

Results
Faculty consultation produced 12 guardrails and a 14‑item rubric aligned with the CanMEDS Communicator and Medical Expert roles. All six students completed the case (mean interaction time = 11 min). Qualitative analysis revealed three themes:

Authenticity – Voice interaction increased immersion and trust; students rated the dialogue as “more believable than ChatGPT.”
Objective Alignment – Students appreciated immediate, rubric‑based feedback on open‑ended questioning and differential diagnosis (DDx) formulation.
Terminology Mismatch – Learners expected lay language from the patient, whereas faculty had embedded medical jargon for knowledge reinforcement; this mismatch drove rapid prompt iteration.

Conclusion
A tight, iterative loop among instructional design, technical implementation, and faculty intent can yield an AI chatbot that reliably enacts specific learning objectives while preserving educational authenticity. Our co-design framework (guardrails → prompt engineering → rapid student feedback) offers a scalable template for global educators seeking to integrate AI agents into health-professions curricula.