Writing Center Voices | Navigating the Future: AAMC’s Principles for Responsible AI Use in Medical Education

As artificial intelligence continues to reshape healthcare, medical education must evolve to prepare future physicians for a world where AI is a routine part of clinical decision-making, diagnostics and patient engagement. The Association of American Medical Colleges (AAMC) has released a set of guiding principles to help institutions integrate AI responsibly and equitably into medical education.

These principles are not rigid rules but flexible guidelines designed to support diverse institutional contexts. They emphasize ethical use, human-centered design and interdisciplinary collaboration, ensuring that AI enhances rather than replaces the core values of medical training.

  • Human-Centered Focus: AI should support, not supplant, the judgment and creativity of educators and learners. Critical thinking remains essential, and AI must be used as a tool to enrich—not diminish—the human elements of care and learning.
  • Ethical and Transparent Use: Institutions must clearly disclose when and how AI is used in educational settings. Learners should be trained to communicate the role of AI in patient care, fostering trust and accountability in clinical environments.
  • Equal Access to AI: The AAMC urges institutions to invest in infrastructure and collaborate across organizations to reduce disparities in AI availability. Without intentional efforts, AI could widen existing gaps in medical education and healthcare outcomes.
  • Education and Professional Development: Faculty must be supported with training and safe spaces to explore AI tools. This includes understanding both the capabilities and limitations of AI, as well as its ethical implications.
  • Interdisciplinary Collaboration: AI education should not be siloed within medical departments. Instead, it should involve experts from computer science, ethics, sociology and other relevant fields. This approach ensures that curricula reflect the complexity of AI’s impact on society and healthcare.
  • Data Privacy Protection: Whether in admissions, assessments or clinical simulations, AI must be used in ways that respect the privacy of students, faculty and patients. Institutions must establish clear policies to safeguard sensitive information.
  • Monitoring and Evaluation: AI tools should be assessed regularly to ensure they meet educational goals and maintain safety and effectiveness. Feedback loops and data-driven evaluations are essential to responsible implementation.

These principles were developed through a collaborative process involving educators, clinicians, ethicists and technologists. The current version, finalized in July 2025, reflects the input of a multidisciplinary advisory committee and was drafted with the assistance of generative AI.

As medical colleges consider how to integrate AI into their programs, the AAMC’s principles offer a thoughtful framework. They remind us that while technology may change rapidly, the core mission of medical education remains constant: to prepare compassionate, competent and ethical physicians.

At the NEOMED Writing Center, we’re also thinking about how AI fits into writing. If you’re using GenAI tools in your work, we suggest including this statement in your submission:

“GenAI tools such as Copilot and ChatGPT (add whatever tool is used) were used for brainstorming, proofreading, editing, illustrating and citing, but the final submission represents our own work.”

The NEOMED Writing Center is available for GenAI consults. We welcome students, faculty and staff to schedule appointments to discuss ethical use, writing support, and tool selection.

Remember: GenAI can be used ethically in the classroom only when it is permitted in the classroom.

 

Read

Principles for the Responsible Use of Artificial Intelligence in and for Medical Education | AAMC

 
