The Role of Explainable Artificial Intelligence: A Systematic Review of Emerging Trends and Research Lines
Abstract
This bibliometric analysis of explainable artificial intelligence (XAI) reveals significant growth in scientific output, led by countries such as India, China, and the United States, with strong institutional collaboration among Asian and Middle Eastern universities. The most frequent keywords reflect a consolidated focus on machine learning, deep learning, interpretability, and explanation methods such as SHAP and LIME. The main research lines include explainability in critical systems, interpretable neural networks, and algorithmic transparency. The international collaboration network highlights the centrality of countries such as the United States, China, South Korea, and Saudi Arabia. Taken together, the findings underscore that XAI is not only an expanding field but is also essential to the development of responsible, trustworthy, and human-centered AI systems.
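To make the two post-hoc explanation methods named above concrete, the following is a minimal, illustrative sketch of how SHAP and LIME are typically applied to a tabular model. The dataset and model are placeholders chosen for this example, not taken from the reviewed studies, and the sketch assumes the shap, lime, and scikit-learn packages are installed.

```python
# Illustrative sketch only: dataset and model are assumptions, not from the reviewed literature.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a black-box model on a public tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP: attributes each prediction to features using Shapley values.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X.iloc[:100])
shap.summary_plot(shap_values, X.iloc[:100])  # global feature-importance view

# LIME: fits a local surrogate model around a single prediction.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="regression"
)
explanation = lime_explainer.explain_instance(X.values[0], model.predict, num_features=5)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

In this sketch SHAP provides a global view of which features drive the model's outputs, while LIME explains a single prediction locally; both are model-agnostic in spirit, which is one reason they appear so frequently as keywords in the XAI literature surveyed here.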
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.