Explainable artificial intelligence for smart and ethical healthcare
SmartHealth technologies are evolving rapidly, and the emerging Medicine 5.0 paradigm highlights the need for artificial intelligence that pairs high performance with explainability, transparency, and ethical soundness. However, many neural-network approaches remain “black boxes,” limiting their uptake in clinical practice, where justification and trust are essential. This article reviews applications of explainable artificial intelligence in diagnosis, monitoring of chronic conditions, and clinical decision support, with particular attention to semantic and ontological interoperability, user-centered explanations, and the ethics of personalization. We pair this critical review with a proposed hybrid framework for trustworthy explainable artificial intelligence in healthcare that integrates neural representations with logical rules and delivers role-adaptive, interactive explanations for clinicians and patients.