INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND MATHEMATICAL THEORY (IJCSMT)
E-ISSN 2545-5699
P-ISSN 2695-1924
VOL. 11 NO. 5 2025
DOI: 10.56201/ijcsmt.vol.11.no5.2025.pg63.76
McKelly Tamunotena Pepple, Efiyeseimokumo Sample Ikeremo
Some results produced by Artificial Intelligence models are unclear to their users, and some target audiences may not understand them at all. This paper proposes a system that fuses a dense neural network with Local Interpretable Model-agnostic Explanations (LIME) and evaluates it on the Adult Census Income dataset. The model predicts income with reasonable success from features such as socioeconomic indicators of work class and nationality, and it also clearly reveals features, such as capital gain and marital status, that are associated with higher income. The main advantage of the hybrid approach is that it not only delivers high predictive accuracy but also produces results that can be easily explained, making it useful in fields that require clear interpretation, such as finance and the social sciences. The explanations generated by LIME make it possible to understand why a particular decision was made and which features were significant to it. The hybrid model achieved an accuracy of 87%, a Receiver Operating Characteristic (ROC) score of 91%, and a Precision-Recall score of 76% in accurately predicting whether an individual's income is less than or equal to $50K, with LIME explaining how each feature of the dataset influenced and determined the hybrid model's decision.
Keywords: Explanation, Artificial Intelligence, Local Interpretable, Model-agnostic, income.
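The workflow the abstract describes, a black-box classifier whose individual predictions are explained by a locally fitted linear surrogate, can be sketched in miniature without the `lime` package. In the example below the "black box" is a stand-in nonlinear scorer playing the role of the trained dense network (its weights are illustrative, not the paper's model), and the feature names are hypothetical labels echoing the Adult Census Income features; only the LIME idea itself (perturb around an instance, weight samples by proximity, fit a weighted linear model) is taken as given.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "black box": a fixed nonlinear scorer used in place of the paper's
# trained dense network. Weights and feature order are illustrative only.
W = np.array([1.5, -0.8, 2.0, 0.3])


def black_box(X):
    """Return a probability-like score for income > 50K (hypothetical)."""
    z = np.tanh(X @ W)                     # nonlinearity: no single global linear fit
    return 1.0 / (1.0 + np.exp(-3.0 * z))


def lime_explain(x, n_samples=5000, kernel_width=0.75):
    """LIME-style local surrogate: perturb around x, query the black box,
    weight samples by proximity to x, and fit a weighted linear model."""
    X_pert = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # local perturbations
    y = black_box(X_pert)                                         # black-box queries
    d = np.linalg.norm(X_pert - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)                     # exponential kernel
    # Weighted least squares for the local linear coefficients (plus intercept).
    A = np.hstack([X_pert, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return coef[:-1, 0]  # per-feature local importance; intercept dropped


features = ["capital_gain", "work_class", "marital_status", "age"]  # hypothetical
x = np.array([0.9, -0.2, 0.7, 0.1])  # one individual to explain
importance = lime_explain(x)
for name, c in sorted(zip(features, importance), key=lambda t: -abs(t[1])):
    print(f"{name:>15s}: {c:+.3f}")
```

The signed coefficients are read the same way as a LIME tabular explanation: features with large positive weights push this individual's prediction toward the higher-income class, negative weights push it away, and the ranking by magnitude gives the per-instance feature importance the paper reports.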