Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI)

dc.contributor.author: Cadamuro J.
dc.contributor.author: Cabitza F.
dc.contributor.author: Debeljak Z.
dc.contributor.author: De Bruyne S.
dc.contributor.author: Frans G.
dc.contributor.author: Perez S.M.
dc.contributor.author: Ozdemir H.
dc.contributor.author: Tolios A.
dc.contributor.author: Carobene A.
dc.contributor.author: Padoan A.
dc.date.accessioned: 2024-07-22T08:02:57Z
dc.date.available: 2024-07-22T08:02:57Z
dc.date.issued: 2023
dc.description.abstract: Objectives: ChatGPT, a tool based on natural language processing (NLP), is on everyone's mind, and several potential applications in healthcare have already been proposed. However, since the ability of this tool to interpret laboratory test results has not yet been tested, the EFLM Working Group on Artificial Intelligence (WG-AI) has set itself the task of closing this gap with a systematic approach. Methods: WG-AI members generated 10 simulated laboratory reports of common parameters, which were then passed to ChatGPT for interpretation, according to reference intervals (RI) and units, using an optimized prompt. The results were subsequently evaluated independently by all WG-AI members with respect to relevance, correctness, helpfulness and safety. Results: ChatGPT recognized all laboratory tests, could detect whether they deviated from the RI, and gave a test-by-test as well as an overall interpretation. The interpretations were rather superficial, not always correct, and only in some cases judged coherently. The magnitude of the deviation from the RI seldom played a role in the interpretation of laboratory tests, and the artificial intelligence (AI) did not make any meaningful suggestion regarding follow-up diagnostics or further procedures in general. Conclusions: ChatGPT in its current form, not being specifically trained on medical data or laboratory data in particular, may at best be considered a tool capable of interpreting a laboratory report on a test-by-test basis, but not of interpreting an overall diagnostic picture. Future generations of similar AIs trained on medical ground-truth data may well revolutionize current processes in healthcare, although such an implementation is not yet ready. © 2023 Walter de Gruyter GmbH, Berlin/Boston.
dc.identifier.DOI-ID: 10.1515/cclm-2023-0355
dc.identifier.issn: 14346621
dc.identifier.uri: http://akademikarsiv.cbu.edu.tr:4000/handle/123456789/12068
dc.language.iso: English
dc.publisher: De Gruyter Open Ltd
dc.rights: All Open Access; Bronze Open Access
dc.subject: Artificial Intelligence
dc.subject: Chemistry, Clinical
dc.subject: Humans
dc.subject: Laboratories
dc.subject: alanine aminotransferase
dc.subject: alkaline phosphatase
dc.subject: aspartate aminotransferase
dc.subject: bilirubin
dc.subject: creatinine
dc.subject: ferritin
dc.subject: gamma glutamyltransferase
dc.subject: glucose
dc.subject: hemoglobin A1c
dc.subject: high density lipoprotein cholesterol
dc.subject: low density lipoprotein cholesterol
dc.subject: prostate specific antigen
dc.subject: thyrotropin
dc.subject: activated partial thromboplastin time
dc.subject: Article
dc.subject: artificial intelligence
dc.subject: blood cell count
dc.subject: controlled study
dc.subject: data interpretation
dc.subject: follow up
dc.subject: free thyroxine index
dc.subject: human
dc.subject: laboratory test
dc.subject: prothrombin time
dc.subject: reference value
dc.subject: clinical chemistry
dc.subject: laboratory
dc.title: Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI)
dc.type: Article