Professional Development Workshops (PDWS)

This year, the Professional Development Workshops are offered across four different themes. Participants will receive a Certificate of Workshop Attendance for each workshop completed. Participants who complete two workshops within the same theme will receive a Certificate of Series Attendance listing both workshops and the overarching theme.

In addition to these four themed series, we also offer one full-day workshop sponsored by Aquifer and a single workshop from the Kern National Network for Caring & Character in Medicine.

Theme 1: AI in Health Professions Education

Name
PDWS: Large Language Models (LLMs) and Their Applications in Health Professions Education
Description

This immersive, hands-on workshop prepares participants to leverage Generative AI (GAI) tools in medical education. Through faculty-guided practice and collaborative learning, participants develop critical skills needed to evaluate, implement, and responsibly apply LLMs in their teaching practices.

This workshop is designed for medical science educators, curriculum designers, and academic leaders who have started exploring AI in their workflows. Whether you are relatively new to AI or use it regularly, this session will help you advance your skills in teaching, assessment, and data-informed decision-making in medical education. It is ideal for anyone with basic familiarity with AI tools who wants to expand their capabilities, from those who have just begun experimenting to those ready to explore more advanced applications.

Key Learning Objectives:

  • Understanding LLM Capabilities and Limitations: Participants will explore the current capabilities, strengths, and limitations of LLMs, pinpointing key areas where these models excel and where challenges like biases and training data constraints emerge.
  • Practical Application and GAI Tool Evaluation: Through structured comparative analysis, participants evaluate six leading GAI tools using established rubrics and matrices. Working in teams on carefully selected use cases, they analyze varied AI outputs to understand each model's distinct strengths and limitations, enabling informed decisions about which tools best suit their specific instructional needs.
  • Advanced Prompting Techniques: Participants master iterative prompting strategies to enhance AI output quality and relevance, with special attention to medical science education applications. This includes exploring AI agents, custom AI tools, and emerging autonomous capabilities.
  • Critical Evaluation Framework: Using established frameworks, participants develop skills to assess AI-generated content reliability and accuracy, ensuring appropriate implementation in educational settings.
  • Ethical and Responsible Use: Throughout the workshop, participants engage with ethical considerations through targeted vignettes and real-world scenarios, examining issues such as academic integrity, data privacy, and the distinctions between open and closed AI systems.

This workshop combines pre-course preparation, hands-on practice, and post-workshop resources to support ongoing implementation. Participants should have basic experience with 1-2 GAI applications and an interest in expanding their knowledge of common GAI applications in medical education. The workshop emphasizes immediate practical application while preparing participants for emerging trends in AI development and integration.

A comprehensive resource guide and follow-up opportunities support participants in addressing implementation challenges and sharing success stories in their specific contexts.

Workshop Schedule:

  • 8:00 - 8:15 (15 min) Welcome and Workshop Overview
  • 8:15 - 8:45 (30 min) Understanding LLM Capabilities and Limitations
  • 8:45 - 9:15 (30 min) Systematic GAI Tool Evaluation
  • 9:15 - 9:30 (15 min) Coffee Break and Informal Discussion
  • 9:30 - 10:00 (30 min) Mastering Advanced Prompting
  • 10:00 - 10:30 (30 min) Critical Validation and Evaluation of AI Outputs
  • 10:30 - 11:00 (30 min) Ethical and Responsible Use of AI

Presenters

Dennis Bergau
KarmaSci Scientific Consulting, LLC

Orest Boyko
Roseman University of Health Sciences, College of Medicine
Professor of Clinical Sciences, Founding Member and Chief Scientific Officer, American Board of Artificial Intelligence in Radiology

Jamie Fairclough
Roseman University of Health Sciences, College of Medicine
Associate Dean, Professor, and Director of Data Science & Engineering

Alex In
Virginia Tech Carilion School of Medicine
Medical Student

Lise McCoy
New York Institute of Technology College of Osteopathic Medicine at Arkansas State University
Director, Faculty Development and Assistant Professor

Douglas McKell
Thomas Jefferson University, College of Population Health
Assistant Professor, Adjunct

Diego Niño
University of Texas at Tyler School of Medicine
Associate Professor

Amy Stone
Kirk Kerkorian School of Medicine at the University of Nevada
Assistant Professor of Medical Education

Thomas Thesen
Geisel School of Medicine at Dartmouth
Associate Professor of Medical Education

Name
PDWS: Optimizing Teaching and Assessment with AI
Description

This hands-on workshop guides participants through advanced applications of Generative AI (GAI) in teaching and assessment, emphasizing practical implementation and ethical considerations.

This workshop is designed for health professions educators, curriculum designers, and academic leaders who are interested in leveraging generative AI to enhance teaching, assessment, and data-informed decision-making while addressing the practical and ethical challenges of integrating AI into health professions education (HPE). It is ideal for those who have experience with AI tools or have completed introductory AI training and want to expand their expertise.

Key Learning Objectives:

  • Introduction to the Workshop and AI Vision: Participants will discuss the overarching goals of AI in HPE, reviewing and reflecting on the AI vision presented to establish a shared foundation for integrating AI in their respective programs.
  • Optimizing Teaching with GAI: Participants actively engage with GAI tools to transform and enhance their educational materials through guided, hands-on activities. Using their own course content, they explore techniques for converting traditional lectures into interactive learning experiences, developing dynamic clinical case scenarios with branching decision points, and creating multimedia resources that accommodate diverse learning styles. Practical exercises focus on analyzing content structure, applying AI-assisted transformation techniques, designing scaffolded learning pathways, and generating complementary study materials—all while maintaining educational integrity and academic rigor. Throughout these activities, participants address implementation challenges and develop practical strategies for successful integration within their specific institutional contexts.
  • Enhancing Assessment with GAI: Participants engage in practical exercises focused on designing and redesigning assessments leveraging AI, while emphasizing authenticity and academic integrity. They will create assessment items, develop rubrics, and deliver AI-driven feedback, ensuring that AI-generated assessments remain authentic, targeted, and ethical. Special emphasis is placed on transforming traditional assessments into authentic evaluation methods that remain relevant and reliable in an AI-enhanced learning environment.
  • Data-Informed Decision-Making and Emerging AI Applications: Participants work with sample datasets to explore advanced AI applications in learning analytics and other emerging AI applications in HPE. Through hands-on exercises using simulated qualitative and quantitative performance data, they learn to harness AI tools for data organization, visualization, and analysis to generate actionable insights for curriculum enhancement. The activities emphasize responsible data handling and the ethical use of AI for performance analysis.
  • Conclusion and Takeaways: The workshop concludes with a reflective session where participants share key insights, discuss future goals, and solidify their understanding of integrating AI into educational and assessment practices.


Together, these two workshops equip participants with practical competencies to optimize teaching, enhance assessment integrity, and use AI to support evidence-based decision-making in educational settings. This second workshop provides direct experience applying more advanced GAI tools to teaching and assessment through hands-on exercises, enabling each participant to explore different AI product functionalities, best practices, implementation processes, and transferable uses. Attendees are expected to have ongoing experience using multiple GAI applications; they may also be interested in building the knowledge and skills to serve as a GAI resource faculty member in their organization.

Workshop Schedule:

  • 12:30 - 12:35 (5 min) Introduction to the Workshop and AI Vision
  • 12:35 - 1:30 (55 min) Optimizing Teaching with GAI
  • 1:30 - 1:45 (15 min) Coffee Break
  • 1:45 - 2:40 (55 min) Optimizing Assessment with GAI
  • 2:40 - 3:20 (40 min) Concurrent Sessions
    • Track A: Data Analysis in HPE
    • Track B: Emerging Topics (Content to be determined based on participant interests and emerging trends)
  • 3:20 - 3:30 (10 min) Conclusion and Takeaways

Presenters

Dennis Bergau
KarmaSci Scientific Consulting, LLC

Orest Boyko
Roseman University of Health Sciences, College of Medicine
Professor of Clinical Sciences, Founding Member and Chief Scientific Officer, American Board of Artificial Intelligence in Radiology

Jamie Fairclough
Roseman University of Health Sciences, College of Medicine
Associate Dean, Professor, and Director of Data Science & Engineering

Alex In
Virginia Tech Carilion School of Medicine
Medical Student

Lise McCoy
New York Institute of Technology College of Osteopathic Medicine at Arkansas State University
Director, Faculty Development and Assistant Professor

Douglas McKell
Thomas Jefferson University, College of Population Health
Assistant Professor, Adjunct

Diego Niño
University of Texas at Tyler School of Medicine
Associate Professor

Amy Stone
Kirk Kerkorian School of Medicine at the University of Nevada
Assistant Professor of Medical Education

Thomas Thesen
Geisel School of Medicine at Dartmouth
Associate Professor of Medical Education

Claudio Violato
University of Minnesota Medical School
Professor and Assistant Dean

Name
PDWS: Formative Assessment for Preclinical Years
Description

The appropriate level of difficulty for assessing first-year medical students remains a point of contention between faculty and preclinical administration, and many medical schools are still unsure when to introduce vignette-style questions to students. Faculty need to reach consensus on the level of assessment and on the importance of formative assessment in student preparedness.

General objectives:

  • Understand the general format of items used for formative assessment.
  • Write various item types.
  • Interpret item analyses from formative assessment results and use them in the pre-summative review.

Workshop Schedule:

Design the Table of Specifications (TOS) for both the course learning objectives and the formative assessments

  • 8:00 - 8:20 (20 min) Presenter didactics on Bloom’s taxonomy of learning objectives and assessment methods.
  • 8:20 - 9:00 (40 min) Participants work to develop a TOS for their course, writing at least one learning objective at the application level and one assessment item.

Formative assessment and ongoing learning

  • 9:00 - 9:20 (20 min) Presenter didactics on designing and writing various item types (e.g., levels 1, 2, and 3 USMLE-style items).
  • 9:20 - 10:00 (40 min) Activity: Participants write item types and receive feedback.

Validating items with AI

  • 10:00 - 10:20 (20 min) Presenter didactics on how to conduct item analyses and interpret validity with AI.
  • 10:20 - 11:00 (40 min) Activity: Participants interpret item analysis results and apply AI to item validation.

Presenters

Amina Sadik
Touro College of Osteopathic Medicine
Associate Professor and Project Director

Claudio Violato
University of Minnesota Medical School
Professor and Assistant Dean

Name
PDWS: Constructing Quality Questions for Summative Assessments
Description

Summative assessments often lead students to focus on rote memorization rather than deep understanding, and they are frequently not designed to provide detailed feedback for improvement. Rather than simply mirroring standardized exams, summative assessments should align with the objectives of the courses taught and promote effective strategies for comprehending complex medical subjects.

General objectives:

  • Introduce summative assessment formats used in the pre-clerkship years, such as single-best-answer multiple-choice, short-answer, and essay-type questions.
  • Focus on the purpose and structure of summative MCQ assessments.
  • Promote techniques for constructing quality assessments and continuously improving summative items.


Workshop Schedule:

  • 12:30 - 1:30 (1 hour) Aligning course objectives and NBME/NBOME topics to develop MCQ items
    • Establish common ground between course objectives and assessments using the NBME/NBOME format
    • Design multiple-choice questions (MCQs) that foster reasoning over memorization
  • 1:30 - 2:30 (1 hour) Purpose of summative assessment in learner progression
    • Identify how institutions use summative assessment to gauge learning outcomes
    • Identify how summative assessment helps track learner progress
  • 2:30 - 3:30 (1 hour) Critiquing lower-order questions and transforming them into higher-order items
    • Evaluate the quality of an MCQ item
    • Identify the criteria of a higher-order question

Presenters

Belinda Del Carmen Carrion Chavarria
Tecnologico de Monterrey
Adjunct Researcher, Neurological Sciences and Neurorestoration

Holly West
University of Texas Medical Branch
Director, Educational Mentoring Programs

Name
PDWS: Mastering Program Evaluation: Empowering Educators for Success
Description

This 3-hour workshop is designed to equip health professions educators with the knowledge and tools needed to effectively evaluate their educational programs. By exploring the fundamental concepts of evaluation, participants will gain a deep understanding of its significance and the various methods employed.

The workshop will focus on the evaluation of teaching, including sessions, courses, and programs, rather than the assessment of learners. It will delve into ethical considerations and the role of evaluations in scholarly pursuits. Participants will learn to differentiate between formative and summative evaluations; qualitative, quantitative, and mixed methods; and various evaluation tools and designs. Through a detailed examination of renowned evaluation models such as ADDIE, Kirkpatrick, CIPP, and the Logic Model, participants will gain insights into their commonalities and potential limitations.

The workshop will conclude by guiding participants through the evaluation cycle, emphasizing the importance of defining evaluation goals, planning data collection and analysis, interpreting results, and utilizing findings to refine future programs. Practical tools and techniques will be shared to facilitate successful evaluation implementation.

Overall Goal:
To enhance the evaluation skills of health professions educators, enabling them to design and implement effective evaluations that improve the quality and impact of their educational programs.

Objectives:
By the end of the workshop, participants will be able to

  • articulate the definition of evaluation and discuss its ethical implications in the context of health professions education.
  • explain the steps involved in the evaluation cycle.
  • compare and contrast the strengths and weaknesses of the four evaluation models.
  • develop an evaluation plan, including data collection, analysis, and reporting strategies.

Workshop Schedule:

  • 8:00 - 8:25 (25 min) Part 1: Introduction to Evaluations
    • Introductions, session goals, agenda, and distribution of resources
    • Warm-up and discussion: What is evaluation?
    • Personal experiences with educational evaluations as a learner and educator, challenges and opportunities.
    • Ethical considerations in evaluation
    • Evaluation as Scholarship – review of Boyer’s 4 Types of Scholarship and the role of evaluation in each
  • 8:25 - 8:55 (30 min) Part 2: Overview of Evaluation Methods
    • Formative vs. Summative
    • Qualitative, quantitative, and mixed methods
    • Evaluation tools
    • Common evaluation designs
    • Overview of the evaluation cycle: define, plan, implement, inform > refine and repeat
  • 8:55 - 9:40 (45 min) Part 3: Exploring Evaluation Models
    • ADDIE Model – how to utilize for evaluations
    • Kirkpatrick’s Model and Kirkpatrick's New World Model
    • CIPP Model
    • Logic Model
    • What do the models have in common? What is missing?
  • 9:40 - 11:00 (80 min) Part 4: Planning your Evaluation Journey
    • Starting with evaluation cycle: define, plan, implement, inform > refine and repeat
    • Define what you want to evaluate. What is the program, what does it purport to accomplish, what are its goals and objectives, what are the strategies and activities? Do the activities and goals align?
    • Plan what you want to evaluate: what questions should the evaluation answer? What are the best indicators to address the objectives? What methods should be used? What is the strongest design that can be implemented?
    • Plan how you want to evaluate the program: how should data be collected, organized, maintained, analyzed to best answer the evaluation questions? How are you going to implement the evaluation?
    • Plan how to interpret the results: how should results be interpreted? did the program meet its goals? How will you communicate the results? How do you ensure the results are used?
    • Refine and repeat.
    • Tools and techniques for success
    • Questions, Answers and Wrap-up

Presenters

Anna Blenda
University of South Carolina School of Medicine Greenville
Professor of Biomedical Science

Kathleen Everling
University of Texas Medical Branch
Senior Medical Educator

Name
PDWS: Employing Logic Models to Achieve Educational Program Success
Description

Program evaluation is the systematic assessment of educational initiatives, policies, programs, or interventions to determine their effectiveness, efficiency, relevance, and impact on trainees, educators, and communities. It involves collecting and analyzing data to provide evidence-based feedback on the strengths and weaknesses of the program, progress toward achieving intended outcomes, and recommendations for improvement or modification.

A strategic approach to evaluation is essential to achieve the long-term success and sustainability of programs. One such approach is to develop a logic model. Logic models visually connect intended outcomes with the resources and activities that go into a program, depicting relationships among program inputs, planned work, measurable outputs or products, and short- and long-term outcomes or impacts for students or other learners. A well-constructed logic model can guide decision-making about data collection, assessment timepoints, and overall program evaluation, supporting clear alignment between these measures and the desired outcomes.

In this workshop, an experienced investigation and program evaluation team will guide participants through the elements and development process of a logic model, identify connections between the logic model and formative and summative evaluation components, and provide templates, strategies, and practice to foster excellence in their educational programs.

The workshop emphasizes the importance of program evaluation for the long-term success and sustainability of educational programs. As a result of participating in this session, participants will be able to:

  • describe the elements of a logic model, and
  • apply a logic model to refine evaluation and assessment strategies in the context of their own programs.


Who Should Attend
The workshop will be relevant for a wide range of conference participants who are involved in program design, regardless of their familiarity with program evaluation.

Workshop Schedule:

  • 12:30 - 12:40 (10 min) Introductions, session goals, agenda, and distribution of resources
  • 12:40 - 12:50 (10 min) Session warm-up: think-pair-share discussion of questions and experiences related to planning and evaluating an educational program.
  • 12:50 - 1:10 (20 min) Overview of a logic model, with examples.
  • 1:10 - 1:20 (10 min) Guided demonstration of using a logic model, starting with development of measurable outcomes, and showing the importance of alignment across components of a program.
  • 1:20 - 2:20 (60 min) Individual and Small Group Activities: Individuals will select a project that is relevant to their work and use a logic model to identify outcomes and impacts, plan program activities and outputs, and identify resources and other “inputs”. Between each step, mentored small groups will discuss and troubleshoot challenges. Session leaders will act as program mentors.
  • 2:20 - 2:40 (20 min) Large Group Activity: Presentation and discussion of program evaluations created.
  • 2:40 - 2:50 (10 min) Overview of measuring outcomes
  • 2:50 - 3:10 (20 min) Individual and Small Group Activities: Individuals begin to create formative and summative assessments of their outputs and outcomes. Mentored small groups will discuss and troubleshoot challenges. Session leaders will act as program mentors.
  • 3:10 - 3:20 (10 min) Large Group Activity: Presentation and discussion of assessment tools.
  • 3:20 - 3:30 (10 min) Group Discussion and Questions: What were the most challenging aspects of implementing a program evaluation? How might you use a program evaluation?

Presenters

Kimberly Dahlman
Vanderbilt University Medical Center
Associate Professor of Medicine

Nancy Moreno
Baylor College of Medicine
Professor and Chair of the Huffington Department of Education, Innovation & Technology

Alana Newell
Baylor College of Medicine
Assistant Professor of Education, Huffington Department of Education, Innovation & Technology

Neil Osheroff
Vanderbilt University School of Medicine
Professor of Biochemistry & Medicine

Don't forget to check out the second workshop! Click "Workshop 2" for details

Name
PDWS: Essential Skills for Quantitative Inquiry and Research in Education (ESQuIRE)
Description

This basic skills workshop for health professions educators introduces key considerations for quantitative research, guides critical appraisal of existing studies, and provides hands-on practice in applying and interpreting common statistical analyses.

Objectives of the workshop:

  • Participants will identify and describe how to apply key components of a rigorous quantitative research study (e.g., research paradigm, theoretical frameworks).
  • Participants will explain the components of a quantitative research study that align with typical critical appraisal tools, including the research question, study design, analytic approach, explanation of findings, and appropriateness of conclusions.
  • Participants will apply and interpret basic descriptive and inferential statistics.

Presenters

Peter Boedeker
Baylor College of Medicine
Assistant Professor, Department of Education, Innovation and Technology

Ryan Mutcheson
Virginia Tech Carilion School of Medicine
Assistant Dean, Assessment and Program Evaluation

Alana Newell
Baylor College of Medicine
Assistant Professor of Education, Huffington Department of Education, Innovation & Technology

Name
PDWS: Mixed-Methods in Health Science Educational Research
Description

Participants will be provided with definitions of the different research methods used in medical education, the types of tools used to collect data, and approaches to analyzing and interpreting those data. They will also be given examples of qualitative and mixed-methods research that may be compatible with developing and implementing a research project at their own institution.

Workshop Schedule:

  1. Introductions & audience perspectives on qualitative vs. quantitative research in general – 15 minutes
  2. In-depth qualitative research methods and analysis – 60 minutes
    • Differentiate between different types of qualitative methods.
    • Identify the steps necessary to organize and manage data collection.
    • Describe the qualitative data coding process.
    • Discuss the different considerations that may determine the credibility of the data analysis process.
    • Explain the importance of checking and challenging qualitative data analysis.
    • Outline what must be considered before qualitative data analysis can be accurately represented and appropriate conclusions can be reached.
  3. In-depth analysis with mixed methods (includes an AI aspect) – 60 minutes
    • Describe mixed-methods approaches such as Q-sort, Nominal Group Technique, Delphi Technique, and focus groups
    • Discuss the strengths and weaknesses of using a mixed-methods approach in research
    • Use a tool to help quantify qualitative data in a mixed-methods approach
    • Design an appropriate mixed-methods approach for their home institution
  4. Comparison between quantitative and qualitative research (table) – 30 minutes
  5. List of topics that can be suitable for qualitative research and discussion on their feasibility with the audience – 15 minutes

Presenters

Sateesh Babu Arja
Avalon University School of Medicine
Executive Dean

Machelle Linsenmeyer
West Virginia School of Osteopathic Medicine
Assistant Vice President and Professor

Amina Sadik
Touro College of Osteopathic Medicine
Associate Professor and Project Director

Please note that this is one workshop that is split into two sessions. The description for Part 2 matches the description of Part 1. 

Name
PDWS: Integrating Basic Science and Clinical Medicine: From Curriculum to Classroom to Learner Assessment Part 1
Description

This workshop is sponsored by Aquifer.

This 6-hour faculty development course will begin with an interactive group discussion to identify key barriers to designing and implementing curricula that promote effective cognitive integration and transfer in the classroom. The course will immerse participants in the Aquifer Sciences Curriculum, a freely available curriculum, collaboratively created by IAMSE and Aquifer, that integrates basic science and clinical medicine. Following a brief didactic on concept-based learning and knowledge organization, cognitive integration, and collaborative teaching, participants will be formally introduced to the Aquifer Sciences Curriculum Database and will explore its organizational structure and the pedagogical approach of the Integrated Learning Objective (ILO) to support cognitive integration. The core concepts that make up the backbone of this curriculum define the endpoint of basic science understanding necessary for a clinical trainee to justify safe and effective clinical decisions. Participants will work in small groups to consider how the core concepts of the curriculum can facilitate backwards curriculum design and will modify existing classroom experiences to take advantage of the approach to cognitive integration promoted by ILOs.

Participants will then consider the use of Integrated Illness Scripts and Mechanism of Disease maps to develop classroom activities that promote cognitive integration. Participants will actively engage in an Integrated Learning Session (ILS) for a pre-clerkship course and will outline their own session using a template designed to compare and contrast the underlying concepts and mechanisms that underpin a variety of clinical features in related and disparate clinical conditions to inform safe clinical decision-making. Participants will then practice using these tools to design learning activities and assessments within their own curricular frameworks and across various educational settings (e.g., longitudinal clerkships, clerkship didactics, bedside teaching, active classrooms, faculty development). Concrete examples of the use of these tools and assessments across several schools and various educational settings will be provided. Finally, the value of these pedagogical and assessment approaches in fostering development of the Master Adaptive Learner will be evaluated.

Presenters

Youngjin Cho
Geisinger Commonwealth School of Medicine
Associate Professor of Immunology

Eve Gallman
AU/UGA Medical Partnership
Associate Professor

Julie Kerry
Eastern Virginia Medical School
Professor and Chair of Microbiology and Molecular Biology

Kimberly Miller
Geisinger Commonwealth School of Medicine
Assistant Professor of Pharmacology and Microbiology

Khiet Ngo
University of California, Riverside
Associate Dean for Clinical Skills Education & Innovation

Please note there is only one workshop in this series. This workshop is scheduled during the second block of Professional Development Workshops, which takes place from 12:30 - 3:30 PM MDT.

Name
PDWS: Human Flourishing Guided Medical Education: Strategies to Thrive in Medical School and Life
Description

Please note there is no Certificate of Series Attendance for this workshop. There will be a Certificate of Workshop Attendance.

The role of medical education is to develop knowledge, skills, and professional formation in both its learners and faculty. Human flourishing guided medical education provides a guiding framework and overall objective aimed at developing learner and faculty excellence in knowledge, skills, and the essential aspects of formation. This session will overview key concepts of human flourishing guided education and present key human flourishing frameworks, followed by current examples of flourishing guided curricular initiatives that demonstrate how “Human Flourishing” has been operationalized through curricular interventions or strategic experiences. Participants will then be guided through a series of worksheets to develop flourishing guided curricular initiatives relevant to their areas of expertise. The goal is to develop curricular approaches that promote medical student and faculty thriving by addressing domains such as happiness/life satisfaction, physical/mental health, character/virtue, meaning/purpose, and close social relationships. This session will also explore the interrelationship of caring, decision making, family support, work/learning environments, and community connections in supporting flourishing.

The session will also focus on the roles of faculty, as well as potential barriers impeding the ability to flourish as students, residents, physicians, researchers, and educators. Attendees will also be invited to consider their own state of flourishing using the Human Flourishing Index (a validated instrument), while gaining new insights and tools to improve their learning environment and promote flourishing and a renewed sense of purpose, meaning, and connection.

Presenters

Kamna Balhara
Johns Hopkins Medicine International
Associate Professor

Jeffery Fritz
Kern National Network for Caring & Character in Medicine
Associate Director and Associate Professor

Aviad Haramati
Georgetown University
Professor and Director, CENTILE

William Pearson
Edward Via College of Osteopathic Medicine
Chair of Anatomy

Daniel Zheng
Johns Hopkins Medicine
Medical Student