The Review of Studies on Explainable Artificial Intelligence in Educational Research

dc.contributor.author: Türkmen, G
dc.date.accessioned: 2025-04-10T10:27:19Z
dc.date.available: 2025-04-10T10:27:19Z
dc.description.abstract: Explainable Artificial Intelligence (XAI) refers to techniques that make AI models more transparent, helping users understand how outputs are generated. XAI algorithms are considered valuable in educational research, supporting outcomes such as student success, trust, and motivation; their potential to enhance transparency and reliability in online education systems is particularly emphasized. This study systematically analyzed educational research employing XAI systems between 2019 and 2024, following the PICOS framework, and reviewed 35 studies. Methods used in these studies, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), explain individual model decisions and enable users to better understand AI models. This transparency is believed to increase trust in AI-based tools, facilitating their adoption by teachers and students.
dc.identifier.e-issn: 1541-4140
dc.identifier.issn: 0735-6331
dc.identifier.uri: http://hdl.handle.net/20.500.14701/34892
dc.language.iso: English
dc.title: The Review of Studies on Explainable Artificial Intelligence in Educational Research
dc.type: Article

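For readers unfamiliar with the SHAP and LIME methods mentioned in the abstract, the following is a minimal, illustrative sketch of how SHAP feature attributions might be computed for a hypothetical student-success classifier. It is not drawn from any of the reviewed studies; the feature names, synthetic data, and model choice are assumptions made purely for illustration.

    # Minimal sketch: SHAP explanations for a hypothetical student-success model.
    # Feature names and data below are invented for illustration only.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    features = ["logins_per_week", "quiz_avg", "forum_posts", "video_minutes"]
    X = rng.random((200, len(features)))
    # Synthetic pass/fail label loosely tied to quiz average and login frequency.
    y = (X[:, 1] + 0.3 * X[:, 0] > 0.8).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes per-feature contributions to each prediction,
    # i.e. a per-student explanation of why the model predicted pass or fail.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5])
    print(shap_values)

In this kind of setup, each SHAP value indicates how much a given feature pushed an individual prediction up or down, which is the form of per-decision transparency the reviewed studies attribute to SHAP and LIME.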