The Use of Readability Formulas in Polytrauma
DOI: https://doi.org/10.5281/zenodo.17045168
Keywords: Polytrauma; ChatGPT; Gemini; Flesch-Kincaid Grade Level; readability
Abstract
Aim: Polytrauma is defined as a condition in which an individual sustains multiple traumatic injuries, often involving multiple organ systems. We conducted this study because there are no published reports evaluating the readability of AI models’ responses to patient-posed questions about polytrauma. Therefore, we aimed to compare indices measuring the readability of ChatGPT and Gemini responses to questions about polytrauma.
Materials and Methods: Questions posed by patients and their families regarding polytrauma were compiled online and evaluated by three trauma-related emergency medicine specialists, yielding 40 questions. These questions were then directed to ChatGPT-5 and Gemini-2.5 Pro, and the responses were documented. Before the questions were posed, both Gemini and ChatGPT were instructed to generate responses suitable for patients or their families with no medical background. Readability was measured using the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Fog Scale (Gunning FOG Index), SMOG Index, Automated Readability Index (ARI), Coleman–Liau Index, Linsear Write Formula, Dale–Chall Readability Score, and Spache Readability Formula. Scores were then compared statistically using SPSS software.
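For reference, the two Flesch formulas named above are computed directly from word, sentence, and syllable counts. The sketch below assumes the three counts are obtained separately (in practice, readability tools estimate syllable counts heuristically); the coefficients are the standard published ones:

```python
def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: higher scores (0-100) indicate easier text.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level: approximate U.S. school grade required.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical example: a 100-word passage with 5 sentences and 150 syllables.
fre = flesch_reading_ease(100, 5, 150)
fkgl = flesch_kincaid_grade(100, 5, 150)
```

Note that the two formulas move in opposite directions: longer sentences and more syllables per word lower FRE but raise FKGL, which is why a text can score similarly on one index and differ on another.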
Results: For most formulas—including ARI, Flesch Reading Ease, Fog Scale (Gunning FOG Index), Coleman–Liau Index, SMOG Index, Linsear Write Formula, Dale–Chall Readability Score, and Spache Readability Formula—the p-values were >0.05, indicating no statistically significant difference between Gemini and ChatGPT. For example, Flesch Reading Ease scores were 48.64±4.72 for Gemini and 42.26±6.08 for ChatGPT (p=0.372). ARI scores were 10.36±1.96 for Gemini and 11.23±1.86 for ChatGPT (p=0.472).
Conclusion: Although both AI models produce text of similar readability across most common metrics, ChatGPT’s output tends to be more complex, requiring a higher reading level than Gemini’s, as indicated by the Flesch-Kincaid Grade Level.
References
Turculeţ CŞ, Georgescu TF, Iordache F, Ene D, Gaşpar B, Beuran M. Polytrauma: The European Paradigm. Chirurgia (Bucur). 2021 Dec;116(6):664-668. doi: 10.21614/chirurgia.116.6.664.
Spitler CA, Hulick RM, Graves ML, Russell GV, Bergin PF. Obesity in the Polytrauma Patient. Orthop Clin North Am. 2018 Jul;49(3):307-315. doi: 10.1016/j.ocl.2018.02.004.
Pape HC, Moore EE, McKinley T, Sauaia A. Pathophysiology in patients with polytrauma. Injury. 2022 Jul;53(7):2400-2412. doi: 10.1016/j.injury.2022.04.009.
Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017 Apr;69S:S36-S40. doi: 10.1016/j.metabol.2017.01.011.
Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol. 2019 Apr;28(2):73-81. doi: 10.1080/13645706.2019.1575882.
Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial Intelligence in Anesthesiology: Current Techniques, Clinical Applications, and Limitations. Anesthesiology. 2020 Feb;132(2):379-394. doi: 10.1097/ALN.0000000000002960.
Bedel HA, Bedel C, Selvi F, Zortuk Ö, Karancı Y. Emergency Medicine Assistants in the Field of Toxicology, Comparison of ChatGPT-3.5 and GEMINI Artificial Intelligence Systems. Acta Med Litu. 2024;31(2):294-301. doi: 10.15388/Amed.2024.31.2.18.
Hoppe JM, Auer MK, Strüven A, Massberg S, Stremmel C. ChatGPT With GPT-4 Outperforms Emergency Department Physicians in Diagnostic Accuracy: Retrospective Analysis. J Med Internet Res. 2024 Jul 8;26:e56110. doi: 10.2196/56110.
Paslı S, Şahin AS, Beşer MF, Topçuoğlu H, Yadigaroğlu M, İmamoğlu M. Assessing the precision of artificial intelligence in ED triage decisions: Insights from a study with ChatGPT. Am J Emerg Med. 2024 Apr;78:170-175. doi: 10.1016/j.ajem.2024.01.037.
Gibson D, Jackson S, Shanmugasundaram R, Seth I, Siu A, Ahmadi N, Kam J, Mehan N, Thanigasalam R, Jeffery N, Patel MI, Leslie S. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment. J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.
Wong K, Levi JR. Partial Tonsillectomy. Ann Otol Rhinol Laryngol. 2017 Mar;126(3):192-198. doi: 10.1177/0003489416681583.
Zheng J, Yu H. Readability Formulas and User Perceptions of Electronic Health Records Difficulty: A Corpus Study. J Med Internet Res. 2017 Mar 2;19(3):e59. doi: 10.2196/jmir.6962.
White A, Danis M. Enhancing patient-centered communication and collaboration by using the electronic health record in the examination room. J Am Med Assoc. 2013 Jun 12;309(22):2327–8. doi: 10.1001/jama.2013.6030.
Delbanco T, Walker J, Bell SK, Darer JD, Elmore JG, Farag N, Feldman HJ, Mejilla R, Ngo L, Ralston JD, Ross SE, Trivedi N, Vodicka E, Leveille SG. Inviting patients to read their doctors' notes: a quasi-experimental study and a look ahead. Ann Intern Med. 2012 Oct 2;157(7):461–70. doi: 10.7326/0003-4819-157-7-201210020-00002.
Wiljer D, Bogomilsky S, Catton P, Murray C, Stewart J, Minden M. Getting results for hematology patients through access to the electronic health record. Can Oncol Nurs J. 2006;16(3):154–64. doi: 10.5737/1181912x163154158.
Tang PC, Lansky D. The missing link: bridging the patient-provider health information gap. Health Aff (Millwood). 2005;24(5):1290–5. doi: 10.1377/hlthaff.24.5.1290.
Keselman A, Slaughter L, Smith CA, Kim H, Divita G, Browne A, Tsai C, Zeng-Treitler Q. Towards consumer-friendly PHRs: patients' experience with reviewing their health records. AMIA Annu Symp Proc. 2007:399–403. Available from: http://europepmc.org/abstract/MED/18693866.
Keselman A, Smith CA. A classification of errors in lay comprehension of medical documents. J Biomed Inform. 2012 Dec;45(6):1151–63. doi: 10.1016/j.jbi.2012.07.012.
Pyper C, Amery J, Watson M, Crook C. Patients' experiences when accessing their on-line electronic patient records in primary care. Br J Gen Pract. 2004 Jan;54(498):38–43.
Mák G, Smith FH, Leaver C, Hagens S, Zelmer J. The effects of web-based patient access to laboratory results in British Columbia: a patient survey on comprehension and anxiety. J Med Internet Res. 2015;17(8):e191. doi: 10.2196/jmir.4350.
Chapman K, Abraham C, Jenkins V, Fallowfield L. Lay understanding of terms used in cancer consultations. Psychooncology. 2003 Sep;12(6):557–66. doi: 10.1002/pon.673.
Lerner EB, Jehle DV, Janicke DM, Moscati RM. Medical communication: do our patients understand? Am J Emerg Med. 2000 Nov;18(7):764–6. doi: 10.1053/ajem.2000.18040.
Zeng QT, Tse T, Divita G, Keselman A, Crowell J, Browne AC, Goryachev S, Ngo L. Term identification methods for consumer health vocabulary development. J Med Internet Res. 2007 Feb 28;9(1):e4. doi: 10.2196/jmir.9.1.e4.
Zielstorff RD. Controlled vocabularies for consumer health. J Biomed Inform. 2003;36(4-5):326–33. doi: 10.1016/j.jbi.2003.09.015.
Patrick TB, Monga HK, Sievert ME, Houston HJ, Longo DR. Evaluation of controlled vocabulary resources for development of a consumer entry vocabulary for diabetes. J Med Internet Res. 2001;3(3):E24. doi: 10.2196/jmir.3.3.e24.
Boles CD, Liu Y, November-Rider D. Readability levels of dental patient education brochures. J Dent Hyg. 2016 Feb;90(1):28–34.
Huang G, Fang CH, Agarwal N, Bhagat N, Eloy JA, Langer PD. Assessment of online patient education materials from major ophthalmologic associations. JAMA Ophthalmol. 2015 Apr;133(4):449–54. doi: 10.1001/jamaophthalmol.2014.6104.
Grossman SA, Piantadosi S, Covahey C. Are informed consent forms that describe clinical oncology research protocols readable by most patients and their families? J Clin Oncol. 1994 Oct;12(10):2211–5. doi: 10.1200/jco.1994.12.10.2211.
License
Copyright (c) 2025 Salih Denis Şimşek, Can Şensöğüt

This work is licensed under a Creative Commons Attribution 4.0 International License.