ISSN: 1309 - 3843 E-ISSN: 1307 - 7384
FİZİKSEL TIP VE REHABİLİTASYON
BİLİMLERİ DERGİSİ
www.jpmrs.com


ORIGINAL RESEARCH

Yapay Zekâ Destekli Osteoartrit Bilgilendirme Metinlerinin Değerlendirilmesi: İçerik Kalitesi ve Okunabilirlik Analizi
Evaluation of Artificial Intelligence Supported Osteoarthritis Information Texts: Content Quality and Readability Analysis
Received Date : 19 Apr 2024
Accepted Date : 13 Nov 2024
Available Online : 27 Nov 2024
DOI: 10.31609/jpmrs.2024-103532 - Article Language: EN
Turkiye Klinikleri Journal of Physical Medicine and Rehabilitation Sciences. 2025;28(1):21-9
ABSTRACT
Objective: This study aims to comprehensively evaluate the quality, readability, and understandability of artificial intelligence-supported texts related to osteoarthritis (OA). Material and Methods: The most frequently searched keywords related to osteoarthritis were determined through Google Trends. Additionally, frequently asked questions by patients about osteoarthritis were identified. These keywords and questions were entered into ChatGPT. The Ensuring Quality Information for Patients (EQIP) tool was used to assess the clarity of information and the quality of writing. The Flesch-Kincaid readability tests (Reading Ease [FKRE] and Grade Level [FKGL]) and the Gunning Fog Index (GFI) were used to assess the readability of the texts. The reliability and usefulness of the texts were assessed using the reliability and usefulness scale. Results: The average scores were: EQIP 62.01±6.61, FKRE 31.85±12.44, FKGL 13.26±2.12, GFI 14.52±2.41, reliability 5.10±1.02, and usefulness 4.89±0.76. Our study concludes that ChatGPT's responses on osteoarthritis are generally of “good quality with minor issues”. Additionally, the texts produced were found to be at a complexity level that would require approximately 13 years of education to understand. When the EQIP scores of texts generated from keywords were compared with those of texts generated from questions, a statistically significant difference was observed (p<0.001). However, no statistically significant difference was found between the two groups in FKRE, FKGL, GFI, reliability scale, or usefulness scale scores (p=0.063, p=0.059, p=0.194, p=0.466, and p=0.499, respectively). Conclusion: This study reveals that ChatGPT's texts on OA have certain deficiencies in quality and readability. In conclusion, it emphasizes that online resources and AI tools play an important role in information provision in the field of healthcare, but quality and readability control should be ensured.
In addition to ensuring patients have access to accurate, reliable, and understandable information, this can help them make more informed and effective health decisions by increasing their health literacy.
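The readability indices reported above (FKRE, FKGL, and GFI) follow standard published formulas based on sentence length, syllables per word, and the share of complex (three-or-more-syllable) words. As a minimal illustrative sketch, not the tooling used in this study, the following Python computes all three for an English text; the vowel-group syllable counter is a rough heuristic, whereas dedicated readability software typically uses dictionary-based syllable counts.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllable count via vowel groups (heuristic, not dictionary-based)."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:  # drop a typical silent final 'e'
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float, float]:
    """Return (FKRE, FKGL, GFI) for an English text using the standard formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)

    # Flesch Reading Ease: higher = easier (0-100 scale in practice)
    fkre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Flesch-Kincaid Grade Level: approximate US school grade required
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    # Gunning Fog Index: years of formal education required
    gfi = 0.4 * (words_per_sentence + 100 * complex_words / len(words))
    return fkre, fkgl, gfi
```

On this scale, the study's mean FKGL of 13.26 and GFI of 14.52 correspond to text requiring roughly 13 or more years of education, consistent with the conclusion above.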