From Obscurity to Transparency: A Survey of Explainable AI (XAI) Technologies
With the rapid advancement of artificial intelligence, and deep learning in particular, models have emerged that are capable of delivering highly accurate predictions. However, the internal logic of such models remains difficult to interpret—an issue of critical importance in domains where the correctness of an algorithm directly affects high-stakes decision-making. One promising avenue for addressing this challenge is Explainable Artificial Intelligence (XAI), which focuses on developing approaches that clarify model behavior and provide transparent reasoning behind the results obtained. This work examines the theoretical foundations of XAI, with particular attention to the classification of its methods and the challenges posed by the "black box" nature of machine learning models. The review highlights the necessity of advancing new XAI techniques, outlines potential ways to reconcile high predictive accuracy with sufficient interpretability, and lays the groundwork for further research in this field.