Framework for Explaining Black-Box Models Using Explainable AI (XAI)
Abstract
The further development of Artificial Intelligence (AI), especially of complex systems such as Deep Learning (DL) and Large Language Models (LLMs), has led to their widespread application in essential fields of human activity such as healthcare, finance, and education. The non-linear, complex designs of these high-performance models, however, make them black boxes whose inner workings and decision-making processes are hard to interpret. This has raised serious concerns about trust, accountability, and ethical governance. This paper evaluates how Explainable Artificial Intelligence (XAI) can alleviate this issue by rendering black-box models more transparent, understandable, and interpretable to end-users and stakeholders. XAI addresses the problem of interpreting complex algorithms and making them human-friendly by offering ways of describing how input data are processed and decisions are formed. The significance of XAI is reinforced by the need to comply with regulation, such as the General Data Protection Regulation (GDPR), which requires accountability in automated decision-making, as well as by the need to ensure fairness and to detect and mitigate biases hidden in models. Three primary families of XAI methods are distinguished: Model-Agnostic Post-Hoc Interpreters (MAPHI), techniques applied after a model is trained, such as LIME and SHAP, which explain predictions locally or globally; Intrinsically Interpretable Models (IIMs), models that are inherently interpretable, such as decision trees, though they can offer less predictive power than LLMs; and Overarching Frameworks and Auditing (OFA), governance frameworks such as Responsible AI (RAI) that enact principles like fairness. The problems of XAI, including the inherent trade-off between model accuracy and interpretability and the threat of explanation hacking, are also addressed.
To address these problems, frameworks such as OpenXAI are being studied to standardize the technical evaluation of explanation methods against important measures such as faithfulness, stability, and fairness. Ultimately, XAI is not merely a technical requirement but an ethical foundation of successful AI implementation: it is needed to make systems more human-centred and transparent, to build trust, and to enable responsible AI development.
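The model-agnostic post-hoc idea behind methods such as LIME and SHAP can be illustrated with a minimal sketch: treat the trained model as an opaque function, perturb the input features of a single instance, and attribute local influence to each feature by how strongly the output responds. The `black_box` model, the `perturbation_importance` helper, and the loan-style feature names below are hypothetical examples invented for illustration; they are not the paper's method, nor the actual LIME or SHAP algorithms, which use weighted local surrogates and Shapley values respectively.

```python
import math
import random

# Hypothetical "black-box" scorer (e.g. a loan-approval model).
# The explainer below treats it as opaque and only queries its output.
def black_box(features):
    income, debt, age = features
    z = 0.05 * income - 0.08 * debt + 0.01 * age - 2.0
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score

def perturbation_importance(model, instance, n_samples=500, noise=1.0, seed=0):
    """Crude local attribution: perturb one feature at a time with
    Gaussian noise and average the absolute change in the model output."""
    rng = random.Random(seed)
    base = model(instance)
    scores = []
    for i in range(len(instance)):
        total = 0.0
        for _ in range(n_samples):
            perturbed = list(instance)
            perturbed[i] += rng.gauss(0.0, noise)
            total += abs(model(perturbed) - base)
        scores.append(total / n_samples)  # mean |Δ output| for feature i
    return scores

applicant = [60.0, 20.0, 35.0]  # income, debt, age (arbitrary units)
scores = perturbation_importance(black_box, applicant)
ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
print(ranked)  # feature indices ordered from most to least locally influential
```

Real post-hoc explainers refine this intuition: LIME fits an interpretable surrogate model on the perturbed samples, while SHAP distributes the output change across features using Shapley values, which gives the consistency guarantees that ad-hoc perturbation lacks.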
Published in Salem Journal of Science, Information & Communication Technology
ISSN: 627-4467X