Salem Journal of Science, Information & Communication Technology

Framework for Explaining Black-Box Models Using Explainable AI (XAI)

Awodele S. O., Fayemi T. A., Ojuawo O. O., Olorunyomi O. B., Mustapha M. M., Faruna J. O., Chukwulobe I.
February 24, 2026

Abstract

The continued advance of Artificial Intelligence (AI), particularly in complex systems such as Deep Learning (DL) and Large Language Models (LLMs), has led to their widespread adoption in essential areas of human activity, including healthcare, finance, and education. The complex, non-linear architectures of these high-performance models, however, make them black boxes whose inner workings and decision-making processes are difficult to interpret. This has raised serious concerns about trust, accountability, and ethical governance. This paper evaluates how Explainable Artificial Intelligence (XAI) can alleviate this problem by rendering black-box models more transparent, understandable, and interpretable to end-users and stakeholders. XAI addresses the challenge of interpreting complex algorithms by offering human-friendly descriptions of how input data is processed and how decisions are formed. The importance of XAI is reinforced by regulatory requirements, such as the General Data Protection Regulation (GDPR), which demands accountability for automated decision-making, and by the need to ensure fairness by detecting and mitigating biases hidden in models. Three primary categories of XAI methods are discussed: Model-Agnostic Post-Hoc Interpreters (MAPHI), techniques applied after a model is trained, such as LIME and SHAP, that explain predictions locally or globally; Intrinsically Interpretable Models (IIMs), models that are interpretable by design, such as decision trees, though they may offer less predictive power than LLMs; and Overarching Frameworks and Auditing (OFA), governance frameworks such as Responsible AI (RAI) that enact principles such as fairness. The challenges of XAI, including the inherent trade-off between model accuracy and interpretability and the threat of explanation hacking, are also addressed.
To address these challenges, frameworks such as OpenXAI are being studied to standardize the technical evaluation of explanation methods against key measures such as faithfulness, stability, and fairness. Ultimately, XAI is not merely a technical requirement but an ethical foundation for successful AI implementation: it makes systems more human-centred and transparent, builds trust, and enables responsible AI development.
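To illustrate the model-agnostic post-hoc idea described in the abstract, the following is a minimal, purely illustrative sketch (not taken from the paper, and much simpler than LIME or SHAP): it treats a model as an opaque function, perturbs one feature at a time around a given instance, and measures the average change in the prediction as a rough local importance score. The `black_box` function and all parameter values are hypothetical stand-ins for a trained model.

```python
import random

# Hypothetical black-box model: a nonlinear scoring function whose
# internals we pretend not to know (a stand-in for a trained DL model).
def black_box(x):
    return 3.0 * x[0] - 2.0 * x[1] + 0.1 * x[2] ** 2

def local_importance(model, instance, n_samples=500, scale=0.1, seed=0):
    """Estimate each feature's local influence by perturbing it around
    `instance` and averaging the absolute change in the model's output
    (a simplified, perturbation-based post-hoc explanation)."""
    rng = random.Random(seed)
    base = model(instance)
    importances = []
    for i in range(len(instance)):
        total = 0.0
        for _ in range(n_samples):
            x = list(instance)
            x[i] += rng.gauss(0.0, scale)  # perturb only feature i
            total += abs(model(x) - base)
        importances.append(total / n_samples)
    return importances

# Explain the model's behaviour around one instance.
imp = local_importance(black_box, [1.0, 1.0, 1.0])
# Locally, feature 0 (coefficient 3) should dominate, then feature 1.
```

Production-grade methods such as LIME additionally fit a weighted interpretable surrogate model over the perturbed samples, and SHAP grounds the attributions in Shapley values; this sketch only conveys the shared intuition that a black box can be probed from the outside.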


ISSN: 627-4467X
