Interpretable Machine Learning in Social Sciences: Use Cases and Limitations
The increasing use of intelligent technologies and the growing deployment of machine learning systems across many spheres of life create a need to explain the decisions such systems make. This need for interpretation has driven the development of new methods for interpreting machine learning models and their more intensive use in real-world systems. This paper reviews existing studies that apply interpretable machine learning (IML) methods in the social sciences and summarizes the results using bibliometric analysis. In total, seven research topics were identified based on 210 papers. The paper also discusses the opportunities, limitations, and challenges of the interpretable machine learning approach in social science research.