AI-based assessment of abstract quality in otorhinolaryngology journals
Kadir Şinasi Bulut1, Fatih Gül2
1Department of Otorhinolaryngology, Ankara Yıldırım Beyazıt University, Ankara, Türkiye
2Department of Otorhinolaryngology, Lokman Hekim University, Ankara, Türkiye
Keywords: Abstracts, artificial intelligence, otorhinolaryngologic diseases.
Abstract
OBJECTIVE: This study aims to analyze abstracts published in 2024 in otorhinolaryngology journals indexed in the Web of Science (WoS) using an artificial intelligence (AI)-based structured rubric (ChatGPT) to assess abstract quality and explore its associations with journal metrics.
METHODS: A methodological analysis was conducted on 515 comparative-study abstracts from 66 WoS-indexed journals. Each abstract was evaluated by an AI language model (ChatGPT-5, OpenAI) using a 10-item rubric derived from international reporting standards, yielding a total score of 0-100 across domains including originality, aim, design, methods, statistics, results, interpretation, flow, and impact. Journal metrics (SCI/ESCI indexing, quartile, and Journal Citation Indicator [JCI]) were retrieved from the WoS database.
RESULTS: The mean total quality score was 75.3±7.6 (range, 50 to 94). The highest domain scores were for clarity of aim and results (91.0±5.6%), while the lowest were for study design and sample size. Abstracts in SCI-indexed journals (76.0±7.6) scored higher than those in ESCI-indexed journals (70.2±5.1; p<0.001). Higher quality was also associated with Q1-Q2 journals and with JCI >1 (p<0.001 for both). Quartile ranking showed the highest predictive value (area under the curve [AUC] =0.76).
CONCLUSION: Abstract quality in otorhinolaryngology journals is variable but correlates positively with journal impact metrics. AI-based evaluation offers an objective, efficient approach to assessing the quality of scientific reporting.