Neurocomputational Theories of Homeostatic Control
Homeostasis is a problem for all living agents. It entails predictively regulating internal states within the bounds compatible with survival in order to maximise fitness. This can be achieved physiologically, through complex hierarchies of autonomic regulation, but it must also be achieved via behavioural control, both reactive and proactive. Here we briefly review some of the major theories of homeostatic control and their historical cognates, addressing how they tackle the optimisation of both physiological and behavioural homeostasis. We start with optimal control approaches, setting up key concepts and exploring their strengths and limitations. We then concentrate on contemporary neurocomputational approaches to homeostatic control, focusing primarily on a branch of reinforcement learning known as homeostatic reinforcement learning (HRL). A central premise of HRL is that reward optimisation is directly coupled to homeostatic control. Its core construct is the drive function, which maps from homeostatic state to motivational drive, with reductions in drive operationally defined as reward values. We explain HRL's main advantages, empirical applications, and conceptual insights. Notably, we show how simple constraints on the drive function can yield a normative account of predictive control, as well as account for phenomena such as satiety, risk aversion, and interactions between competing homeostatic needs. We illustrate how HRL agents can learn to avoid hazardous states without any need to experience them, and how HRL can be applied in clinical domains. Finally, we outline several challenges to HRL and show how survival constraints and active inference models could circumvent them.
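The drive-function idea described above can be made concrete with a minimal sketch. The code below assumes the standard HRL formulation in which drive is a convex distance between the current homeostatic state and a setpoint (the exponents `m` and `n` and the example setpoint values are illustrative assumptions, not taken from this review); reward is then defined as the reduction in drive produced by a state transition.

```python
import numpy as np

def drive(h, h_star, m=3.0, n=4.0):
    """Drive: convex distance of homeostatic state h from setpoint h_star.

    Uses the commonly cited form d(h) = (sum_i |h*_i - h_i|^n)^(m/n).
    With m, n > 1 the function is convex, which yields risk aversion
    (a gamble over states is worse than their mean state).
    """
    return float(np.sum(np.abs(h_star - h) ** n) ** (m / n))

def reward(h_before, h_after, h_star):
    """Reward = drive reduction caused by moving from h_before to h_after."""
    return drive(h_before, h_star) - drive(h_after, h_star)

# Illustrative single-dimension example (e.g. normalised glucose, setpoint 1.0):
h_star = np.array([1.0])

# A depleted agent (0.5) that moves toward the setpoint gains positive reward.
r_feed = reward(np.array([0.5]), np.array([0.8]), h_star)

# An agent near the setpoint (0.9) that overshoots (1.3) incurs negative
# reward: the same consummatory act loses value once the need is met (satiety).
r_overshoot = reward(np.array([0.9]), np.array([1.3]), h_star)
```

Under this sketch, `r_feed` is positive while `r_overshoot` is negative, illustrating how a single drive function can produce both appetitive reward and satiety without any extra machinery.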