ChatGPT vs. Dental Students: Bone Lesion Knowledge Comparison



Öztürk K., Akkoca F., İlhan G.

INTERNATIONAL DENTAL JOURNAL, vol.74, p.29, 2024 (SCI-Expanded)

  • Publication Type: Article / Short Article
  • Volume: 74
  • Publication Date: 2024
  • DOI: 10.1016/j.identj.2024.07.656
  • Journal Name: INTERNATIONAL DENTAL JOURNAL
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, CAB Abstracts, CINAHL, EMBASE, Directory of Open Access Journals
  • Page Numbers: p.29
  • Dokuz Eylül University Affiliated: Yes

Abstract

AIM or PURPOSE: The use of artificial intelligence (AI) in dentistry to improve treatment outcomes and support clinical decision-making is increasing. ChatGPT, an AI-based chatbot, has gained attention for its ability to generate human-like responses and academic content. Previous studies have investigated ChatGPT's effectiveness in creating radiology reports and addressing patient inquiries in various dental fields. However, research on its potential to enhance dental students' understanding of intraosseous lesions is lacking. This study aims to compare the knowledge of ChatGPT versions on intraosseous lesions with that of fourth-year dental students.

MATERIALS and METHOD: Fourth-year dental students underwent an 8-week training period, followed by the administration of multiple-choice questions of varying difficulty levels. Questions were formulated based on reference books, with each question worth 3 points. Additionally, ChatGPT versions 3.5 and 4 were presented with the same questions over 14 days, and their responses were evaluated. Descriptive statistics and a one-way ANOVA (with post hoc Fisher LSD tests) were used in the statistical analysis of the data; p values <0.05 were considered significant.
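To illustrate the comparison described above, the one-way ANOVA F-statistic can be computed by hand. The three score groups below are hypothetical, invented for demonstration only; they are not the study's data, and the function name is an assumption.

```python
# Minimal sketch of a one-way ANOVA F-statistic, computed from first
# principles for three hypothetical groups of scores (made-up values,
# not the study's data).

def one_way_anova_f(groups):
    """Return the F-statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of scores around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)    # between-group mean square
    ms_within = ss_within / (n - k)      # within-group mean square
    return ms_between / ms_within

# Hypothetical per-respondent total scores for the three answer sources
students = [68, 72, 75, 80, 70]
gpt35    = [62, 64, 66, 63, 65]
gpt40    = [76, 79, 82, 78, 80]
print(round(one_way_anova_f([students, gpt35, gpt40]), 2))  # → 28.98
```

A large F-statistic (compared against the F-distribution with k-1 and n-k degrees of freedom) yields a small p value, which is then followed by pairwise post hoc tests such as Fisher's LSD to locate which groups differ.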

RESULTS: Mean total scores were 70.9±10.2 for students, 64.3±2.3 for ChatGPT 3.5, and 78.9±4 for ChatGPT 4.0. Statistically significant differences were found when comparing responses between groups (p<0.001). Significant differences were also observed in correct answers to difficult and easy questions among the groups (p<0.001).

CONCLUSION(S): While AI shows promise as an educational tool, this study highlights that it is insufficient as a standalone resource. Further research is needed to explore the full potential of AI in dental education.