Deep active inference in control tasks
Modern robotic control tasks are usually solved with reinforcement learning techniques. In this paper we show that deep active inference can be used to create agents for large and complex environments, and we show on OpenAI benchmark tasks that the deep active inference approach achieves results comparable to or better than modern reinforcement learning algorithms. Active inference is a framework, based on the free energy principle, for action and planning in an environment by minimizing variational free energy. The idea is that the agent wants to remain alive and reduce uncertainty, which means it should avoid surprising or unpreferred states and observations. Active inference has been proposed as a unifying brain theory, but its implementations have been unable to handle complex environments. The deep active inference algorithm uses deep neural networks to approximate key densities, scaling active inference to much larger and more complex environments.