Browsing by Author "Türkmen, G"
Now showing 1 - 2 of 2
Item: Exploring the artificial intelligence anxiety and machine learning attitudes of teacher candidates
Hopcan, S; Türkmen, G; Polat, E
With the advancement of artificial intelligence (AI) and machine learning (ML) techniques, attitudes towards these two fields have begun to gain importance across professions. One of the affected professions is undoubtedly teaching. Understanding levels of AI anxiety and attitudes towards ML has become important for adapting to the technologies that teachers will likely use. The purpose of this study is to examine AI anxiety and ML attitudes among teacher candidates of different ages, genders, and fields. The study investigates the relationships between sub-dimensions of AI anxiety and ML attitudes, and identifies differences in these sub-dimensions across gender, age, and department. The findings suggest that although teacher candidates from different disciplines, ages, and genders report no anxiety about learning AI, they do express anxiety about its impact on employment and social life. The results can inform the long-term design of AI-focused instructional programs that account for age, personal experience, gender, and field-specific factors.

Item: The Review of Studies on Explainable Artificial Intelligence in Educational Research
Türkmen, G
Explainable Artificial Intelligence (XAI) refers to systems that make AI models more transparent, helping users understand how outputs are generated. XAI algorithms are considered valuable in educational research, supporting outcomes such as student success, trust, and motivation. Their potential to enhance transparency and reliability in online education systems is particularly emphasized. This study systematically analyzed educational research using XAI systems from 2019 to 2024, following the PICOS framework, and reviewed 35 studies. Methods used in these studies, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), explain model decisions, enabling users to better understand AI models. This transparency is believed to increase trust in AI-based tools, facilitating their adoption by teachers and students.
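
To illustrate how an explanation method such as SHAP attributes a model's prediction to its input features, a minimal sketch follows. The model, dataset, and plotting choices here are illustrative assumptions, not drawn from any of the reviewed studies.

    # Minimal SHAP sketch; the model and data are illustrative
    # stand-ins, not taken from the reviewed studies.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values for tree ensembles,
    # attributing each prediction to individual input features.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:10])

    # Summarize which features drove the model's predictions.
    shap.summary_plot(shap_values, X.iloc[:10])

In this sketch, each Shapley value quantifies how much a feature pushed one prediction away from the model's average output, which is the kind of per-decision transparency the reviewed studies attribute to SHAP and LIME.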