Advancements of artificial intelligence in assessing clinical pharmacology and therapeutics in medical education, a systematic review


Donker E. M., Sans-Pola C., Arellano L., Joanjus E., Gerard A., Sel E. K., et al.

17th Congress of the European Association for Clinical Pharmacology and Therapeutics (EACPT), Helsinki, Finland, 28 June - 1 July 2025, vol. 81, no. 213, p. 19 (Abstract)

  • Publication Type: Conference Paper / Abstract
  • Volume: 81
  • City: Helsinki
  • Country: Finland
  • Page: p. 19
  • Dokuz Eylül Üniversitesi Affiliated: Yes

Abstract

Background: Artificial intelligence (AI) is increasingly used in teaching and training in clinical pharmacology and therapeutics (CPT). However, concerns remain about its reliability for both students and teachers. Since assessments are key to evaluating students' competence in CPT, we conducted this systematic review to investigate how AI is used for assessments and to evaluate its performance on CPT assessments.

Methods: We searched PubMed using the query: “pharmacology” AND “teaching” AND “artificial intelligence”. We included English-language studies published between January 1, 2020, and January 16, 2025, that focused on integrating AI into CPT assessments. Following the PRISMA 2020 statement, we extracted data on AI tool types, how they were used, assessment outcomes where applicable, and study methods.

Results: After screening 822 records, we included 12 original studies. Nine papers evaluated AI tools using existing CPT questions from real assessments (6 in medicine, 2 in dentistry, and 1 in pharmacy). All studies evaluated ChatGPT (versions 3.5 and/or 4); other evaluated tools were Poe Assistant, Sage Poe, Gemini, Claude-Instant, Llama 2, and Copilot. Four studies showed that ChatGPT 4 answered CPT questions at a level at least comparable to that of medical students, and three studies found it more accurate than version 3.5. One study reported that Copilot had accuracy similar to ChatGPT 4, while another noted that an unspecified version of ChatGPT did not pass a neurology assessment. Three studies demonstrated that AI can generate exam questions and learning objectives of acceptable quality, but not with 100% accuracy.

Conclusion: AI shows promise for use in CPT assessment and learning. However, because these tools are not fully accurate, teachers and students should carefully verify their output. Future research should explore new AI applications as these tools continue to improve.