AI Systems and the Possibility of Moral Decision-making in Smart Cities
Technologies and algorithms combine quantitative data about people with deep insights into the causes of human action, enabling the creation of new artificial agents. The article raises the challenges of implementing normativity in urban spaces equipped with technical systems based on artificial intelligence. Improved urban life comes through access to technology, as does the emergence of smart cities, but not everyone welcomes the predictability and transparency they bring. The artificial agents being introduced influence the existing system of normativity in the city: abolishing some norms, transforming others, and generating new ones. Such influence cannot always be predicted accurately, so cultural and religious organizations, individual scientists, and government institutions cannot be expected to provide a definitive list of moral norms and social rules to be implemented in technical systems. It is worth noting that where existing norms and regulations are insufficient, citizens develop their own models of interaction. Beyond rational and effective decision-making, a technical system needs to earn the trust of the citizens whose daily lives it regulates. The article outlines three caveats for morally correct decision-making mechanisms. First, there should be ongoing measures against distortion. Second, “social blindness” on the part of developers must be prevented. Third, the use of techniques that attract users’ attention while being addictive or otherwise harmful to users must be controlled. These three caveats can serve as a basis for engineers, social researchers, and urban communities to begin working together conceptually on morally correct smart city solutions.