The Review of Studies on Explainable Artificial Intelligence in Educational Research
dc.contributor.author | Türkmen G.
dc.date.accessioned | 2025-04-10T11:02:43Z
dc.date.available | 2025-04-10T11:02:43Z
dc.date.issued | 2024
dc.description.abstract | Explainable Artificial Intelligence (XAI) refers to systems that make AI models more transparent, helping users understand how outputs are generated. XAI algorithms are considered valuable in educational research, supporting outcomes such as student success, trust, and motivation; their potential to enhance transparency and reliability in online education systems is particularly emphasized. Following the PICOS framework, this study systematically reviewed 35 educational studies, published between 2019 and 2024, that used XAI systems. Methods employed in these studies, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), explain individual model decisions and thereby help users understand AI models. This transparency is believed to increase trust in AI-based tools, facilitating their adoption by teachers and students. © The Author(s) 2024.
dc.identifier.DOI-ID | 10.1177/07356331241310915
dc.identifier.uri | http://hdl.handle.net/20.500.14701/44224
dc.publisher | SAGE Publications Inc.
dc.title | The Review of Studies on Explainable Artificial Intelligence in Educational Research
dc.type | Article |
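
The abstract above names SHAP and LIME as the explanation methods most commonly used in the reviewed studies. As an illustration only, the sketch below shows how SHAP is typically applied to a tree-based classifier; the feature names, data, and model are invented for the example and are not drawn from any of the 35 reviewed studies.

```python
# A minimal, hypothetical sketch: SHAP explanations for a made-up
# student-success classifier. Features and data are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invented features: hours of study, forum posts, quiz average (all scaled 0-1)
X = rng.random((200, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(200) > 0.45).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: each feature's additive contribution
# to an individual prediction relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # contributions for the first 5 students
print(shap_values)
```

Because SHAP values are additive, the contributions for each student sum (together with the explainer's expected value) to that student's prediction, which is what allows a teacher or student to see which factors pushed an individual prediction up or down.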