Readability Assessment of Large Language Model Responses to Common Postnatal Questions Among Migrant Parents: A Comparative Analysis of ChatGPT and Gemini
DOI: https://doi.org/10.5281/zenodo.17670372
Keywords: Migrant health, Large language models, Postnatal care
Abstract
Language and communication barriers in migrant communities can substantially hinder access to health information, particularly regarding frequently asked questions in the neonatal period. Large language models (LLMs) have emerged as potential tools to address this information gap; however, the readability of their outputs is critically important from a public health perspective. This study comparatively evaluated the readability of responses generated by ChatGPT and Google Gemini to ten core postnatal questions commonly posed by migrant parents to pediatricians. The ten questions were translated into English and repeatedly submitted to both models on different days and in varying orders. The resulting texts were analyzed using multiple readability tests, including the Automated Readability Index, Flesch Reading Ease, Flesch–Kincaid Grade Level, SMOG Index, and Linsear Write Formula. On most indices, Gemini systematically produced more complex texts requiring a higher literacy level, whereas ChatGPT yielded more easily understandable content on scales closely tied to health literacy, such as Flesch Reading Ease and SMOG. Overall, both models show potential to support access to health information among migrant populations; however, the complexity of their outputs remains above recommended public health literacy levels, underscoring the need for further optimization and text simplification, particularly for groups with low health literacy.
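The readability indices named in the abstract have standard published formulas. As an illustration only (not the authors' actual analysis pipeline, which is not described here), the sketch below computes three of them — Flesch Reading Ease, Flesch–Kincaid Grade Level, and SMOG — using naive sentence, word, and syllable segmentation; the `count_syllables` heuristic is an assumption for demonstration, as production tools use dictionary-based syllabification.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of vowels; every word gets at least one syllable."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    """Compute Flesch Reading Ease, Flesch-Kincaid Grade Level, and SMOG
    for a plain-text passage, using naive sentence/word segmentation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return {
        # Flesch Reading Ease: higher = easier (60-70 ~ plain English)
        "flesch_reading_ease": 206.835 - 1.015 * (n_words / n_sent)
                               - 84.6 * (syllables / n_words),
        # Flesch-Kincaid Grade Level: approximate US school grade required
        "flesch_kincaid_grade": 0.39 * (n_words / n_sent)
                                + 11.8 * (syllables / n_words) - 15.59,
        # SMOG: grade estimate from polysyllabic-word density
        "smog": 1.0430 * math.sqrt(polysyllables * 30 / n_sent) + 3.1291,
    }
```

A short, monosyllabic passage such as "The cat sat on the mat. The dog ran fast." scores well above 100 on Flesch Reading Ease, which is why health-communication guidance comparing LLM outputs against a sixth-grade target relies on these sentence-length and syllable-density terms.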
References
1. International Organization for Migration. Glossary on Migration. (trans. ed. Çiçekli B). 2009;22–36. Available from: http://www.goc.gov.tr/files/files/goc_terimleri_sozlugu.pdf [in Turkish]
2. Keleşmehmet H. Utilization of health services provided at migrant health centers by Syrian migrants. Specialty thesis. Marmara University Faculty of Medicine, Department of Family Medicine; 2018. Istanbul, Türkiye. [in Turkish]
3. Aydemir AM. Warfarin Use: A Readability Comparison of Gemini and ChatGPT. Acta Medica Young Doctors. 2025;1(2):59–65. https://doi.org/10.5281/zenodo.17156371
4. Smith EA, Senter RJ. Automated readability index. AMRL TR. 1967 May:1–14. PMID: 5302480.
5. McClure G. Readability formulas: useful or useless (an interview with J. Peter Kincaid). IEEE Transactions on Professional Communication. 1987;30:12–15.
6. Seely J. Chapter 10: Audience. In: Oxford Guide to Effective Writing and Speaking: How to Communicate Clearly. Oxford: Oxford University Press; 2013. p.120–123. ISBN 978-0-19-965270-9.
7. Kincaid JP, Braby R, Mears J. Electronic authoring and delivery of technical information. Journal of Instructional Development. 1988;11(2):8–13. https://doi.org/10.1007/bf02904998
8. Coleman M, Liau TL. A computer readability formula designed for machine scoring. Journal of Applied Psychology. 1975;60:283–284.
9. Ley P, Florio T. The use of readability formulas in health care. Psychology, Health & Medicine. 1996;1(1):7–28. https://doi.org/10.1080/13548509608400003
10. OpenAI. ChatGPT (GPT-5.1) [large language model]. Available from: https://www.openai.com
11. Google DeepMind. Gemini [large language model]. Available from: https://deepmind.google
12. Arslan S, Küçükbezirci GU. Evaluating the accuracy, completeness, and readability of chatbot responses to refractive surgery-related patient questions: a comparative analysis of ChatGPT and Google Gemini. Cureus. 2025. https://doi.org/10.7759/cureus.88980
13. Adithya S, Aggarwal S, Sridhar J, VS K, John V, Singh C. An observational study to evaluate readability and reliability of AI-generated brochures for emergency medical conditions. Cureus. 2024 Aug 31.
14. Aydın FO, Aksoy BK, Ceylan A, Akbaş YB, Ermiş S, Yıldız BK, et al. Readability and appropriateness of responses generated by ChatGPT 3.5, ChatGPT 4.0, Gemini, and Microsoft Copilot for FAQs in refractive surgery. Turkish Journal of Ophthalmology. 2025;54:313–317.
15. Erkan Acar EG, Avan BA. Evaluation of the responses from different chatbots to frequently asked patient questions about impacted canines. Australasian Orthodontic Journal. 2025;41(1):288–300. https://doi.org/10.2478/aoj-2025-0020
16. Kara M, Özduran E, Mercan Kara M, Özbek İC, Hancı V. Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about ankylosing spondylitis. PLOS ONE. 2025;20(6):e0326351. https://doi.org/10.1371/journal.pone.0326351
17. Strzalkowski P, Strzalkowska A, Chhablani J, Heß K, Errera M-H, Roth M, et al. Evaluation of the accuracy and readability of ChatGPT-4 and Google Gemini in providing information on retinal detachment: a multicenter expert comparative study. International Journal of Retina and Vitreous. 2024;10(1). https://doi.org/10.1186/s40942-024-00579-9
18. Özduran E, Hancı V, Erkin Y, Özbek İC, Abdulkerimov V. Assessing the readability, quality and reliability of responses produced by ChatGPT, Gemini, and Perplexity regarding most frequently asked keywords about low back pain. PeerJ. 2025;13:e18847. https://doi.org/10.7717/peerj.18847
19. Tepe M, Emekli E. Assessing the responses of large language models (ChatGPT-4, Gemini, and Microsoft Copilot) to frequently asked questions in breast imaging: a study on readability and accuracy. Cureus. 2024;16. https://doi.org/10.7759/cureus.59960
20. Liang Z, Kuang Y, Liang X, Ge L, Li Z. Research on the application of generative artificial intelligence to evaluate responses related to questions about COVID-19 in terms of their accuracy and readability. Preprints. 2025. https://doi.org/10.20944/preprints202505.1319.v1
21. Mutlucan UO. Evaluation of readability indices of ChatGPT-4 and Google Gemini in cervical disc herniation. Acta Medica Young Doctors. 2025;1(2):66–72. https://doi.org/10.5281/zenodo.17156411
License
Copyright (c) 2025 Utku Dönger, Ahmet Can Doğan

This work is licensed under a Creative Commons Attribution 4.0 International License.