INTERNATIONAL JOURNAL OF SCIENCE EDUCATION AND ENVIRONMENTAL RESEARCH

Framework for Explaining Black-Box Models Using Explainable AI (XAI)

Awodele S. O., Fayemi T. A., Ojuawo O. O., Olorunyomi O. B., Mustapha M. M., Faruna J. O., Chukwulobe I.
February 24, 2026

Abstract

The rapid advance of Artificial Intelligence (AI), and in particular of complex systems such as Deep Learning (DL) and Large Language Models (LLMs), has led to their widespread application in essential fields of human activity, including healthcare, finance, and education. The non-linear, complex architectures that give these models their high performance, however, also make them black boxes whose inner workings and decision-making processes are hard to interpret. This opacity raises serious concerns about trust, accountability, and ethical governance. This paper evaluates how Explainable Artificial Intelligence (XAI) can alleviate this problem by rendering black-box models more transparent, understandable, and interpretable to end-users and stakeholders. XAI addresses the difficulty of interpreting complex algorithms by offering human-friendly descriptions of how input data are processed and decisions are formed. The importance of XAI is reinforced by regulatory requirements, such as the General Data Protection Regulation (GDPR), which demands accountability for automated decision-making, and by the need for fairness, including the detection and mitigation of biases hidden within models. XAI methods fall into three primary approaches: (1) Model-Agnostic Post-Hoc Interpreters (MAPHI), techniques applied after a model is trained, such as LIME and SHAP, that explain predictions locally or globally; (2) Intrinsically Interpretable Models (IIMs), models that are interpretable by design, such as decision trees, though they may offer less predictive power than LLMs; and (3) Overarching Frameworks and Auditing (OFA), governance frameworks such as Responsible AI (RAI) that enact principles such as fairness. The challenges of XAI, including the inherent trade-off between model accuracy and interpretability and the threat of explanation hacking, are also addressed.
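The model-agnostic post-hoc idea above can be sketched in a few lines. The following is a minimal illustration in the spirit of LIME and SHAP, though far simpler than either: perturb the inputs of a black-box function and attribute the change in output to each feature. The model, feature names, and baseline values here are hypothetical assumptions for illustration, not content from the paper.

```python
# Minimal sketch of a model-agnostic post-hoc explanation:
# occlusion-style attribution for a hypothetical black-box model.
# The model and all numeric values below are illustrative assumptions.

def black_box_model(features):
    """Stand-in for an opaque model: a risk score from three inputs."""
    age, income, debt_ratio = features
    return 0.02 * age - 0.00001 * income + 0.5 * debt_ratio

def occlusion_attributions(model, instance, baseline):
    """Attribute a prediction by replacing one feature at a time with
    its baseline value and measuring the change in the model's output."""
    full_output = model(instance)
    attributions = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]
        attributions.append(full_output - model(perturbed))
    return attributions

instance = [45, 30000, 0.8]   # one individual's (hypothetical) features
baseline = [0, 0, 0.0]        # reference point: an all-zeros baseline
print(occlusion_attributions(black_box_model, instance, baseline))
```

Real methods such as SHAP average over many such perturbations with principled weighting; this single-ablation sketch only conveys the local, per-feature attribution idea.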
To address these challenges, frameworks such as OpenXAI are being studied to standardize the technical evaluation of explanation methods against key measures such as faithfulness, stability, and fairness. Ultimately, XAI is not merely a technical requirement but an ethical foundation for successful AI implementation: it makes systems more human-centred and transparent, builds trust, and enables responsible AI development.
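The faithfulness criterion mentioned above can be made concrete: an explanation is faithful to the extent that the features it ranks as most important are the ones whose removal actually changes the model's output the most. A toy sketch follows; the model, the attribution vectors, and this pairwise rank-agreement metric are illustrative assumptions, not OpenXAI's actual implementation.

```python
# Toy faithfulness check: do the features an explanation ranks as most
# important actually drive the prediction? (Illustrative sketch only;
# not the OpenXAI benchmark's implementation.)

def black_box(features):
    """Hypothetical opaque scoring model."""
    x0, x1, x2 = features
    return 3.0 * x0 + 0.5 * x1 + 0.1 * x2

def prediction_drop(model, instance, feature_index, baseline=0.0):
    """Output change when one feature is ablated to a baseline value."""
    ablated = list(instance)
    ablated[feature_index] = baseline
    return model(instance) - model(ablated)

def faithfulness_rank_agreement(model, instance, attributions):
    """Fraction of feature pairs where the explanation's importance
    ordering agrees with the true ablation-effect ordering:
    1.0 for a perfectly faithful ranking, 0.0 for a reversed one."""
    n = len(instance)
    drops = [prediction_drop(model, instance, i) for i in range(n)]
    agree = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if (attributions[i] - attributions[j]) * (drops[i] - drops[j]) > 0:
                agree += 1
    return agree / total

instance = [1.0, 1.0, 1.0]
good_explanation = [0.9, 0.3, 0.05]   # ranks features in the true order
bad_explanation = [0.05, 0.3, 0.9]    # ranks them backwards
print(faithfulness_rank_agreement(black_box, instance, good_explanation))  # 1.0
print(faithfulness_rank_agreement(black_box, instance, bad_explanation))   # 0.0
```

Stability can be probed analogously, by checking whether small perturbations of the instance leave the attribution ranking unchanged.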



ISSN: 8343-6971

