Explainable AI for Industry 5.0: Shedding light on the black box
The rapid development of artificial intelligence (AI) has been accompanied by increasing computational
complexity and decreasing model transparency, which significantly limits its adoption in critical
domains that require a high level of trust, interpretability, and justification of decisions. In this
context, the field of Explainable Artificial Intelligence (XAI) has gained particular importance, as it
focuses on approaches and technologies that make the logic of AI systems understandable and their
outputs interpretable. This article examines the timely topic of implementing XAI in the context of Industry
5.0. Special attention is given to practical application scenarios: the authors present concrete industrial
cases from IBM, Siemens, and other companies demonstrating how XAI contributes to enhancing
the reliability, safety, efficiency, and trustworthiness of AI systems. The study includes a systematic
search and analysis of the literature in this domain and proposes well-grounded key criteria for
comparing existing XAI approaches. The article also outlines the advantages, current limitations, and
promising directions for the development of XAI, highlighting the opportunities it opens for improving
effectiveness, transparency, and trust in business applications.