Advanced Planning of Home Appliances with Consumer’s Preference Learning
Dynamic real-time pricing schemes are now typical in modern energy markets, including for residential customers. Such schemes are expected to stimulate rational energy consumption by end customers, provide peak shaving, and improve overall energy efficiency. Under dynamic pricing, however, planning a household's energy consumption becomes complicated, so automated scheduling of household appliances is a promising feature for smart home environments. Such planning should adapt to an individual user's habits and preferences regarding the comfort-to-cost balance. We propose a novel approach based on learning customer preferences expressed by a utility function. In this paper, an algorithm based on the inverse reinforcement learning (IRL) framework is used to infer the user's hidden utility. We compare the IRL-based approach to several state-of-the-art machine learning techniques and to the previously proposed parametric Bayesian learning algorithm. The training and test datasets are generated by simulating the user's behavior under different price volatility settings. The goal of the algorithms is to predict the user's behavior from the available history. The IRL and Bayesian approaches showed similar performance, and both outperform modern machine learning algorithms such as XGBoost and random forests. In particular, the preference learning algorithms generalize significantly better to data generated with parameters different from those of the training sample. The experiments showed that the preference learning approach can be especially useful for smart home automation problems where future situations may differ from those available for training.
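To make the idea of inferring a hidden utility from observed choices concrete, here is a minimal, illustrative sketch (not the paper's actual algorithm). It assumes a linear utility over two hypothetical features per scheduling option, electricity cost and delay-induced discomfort, and a softmax (maximum-entropy) choice model; the weights are recovered by matching observed and model feature expectations, the one-step analogue of maximum-entropy IRL. All names and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: at each decision point the simulated user picks one of
# K candidate start times for an appliance. Each option has two features:
# [electricity cost, delay from the preferred time]. The hidden utility is
# assumed linear, u(x) = w . x, with w unknown and to be inferred.
true_w = np.array([-1.0, -2.0])  # dislikes cost; dislikes delay twice as much

def simulate_choices(n_episodes=500, k=5):
    """Generate option features and softmax (max-entropy) user choices."""
    X = rng.uniform(0.0, 1.0, size=(n_episodes, k, 2))  # [cost, delay] per option
    logits = X @ true_w
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    choices = np.array([rng.choice(k, p=p) for p in probs])
    return X, choices

def fit_utility(X, choices, lr=0.5, steps=2000):
    """Infer w by maximizing the log-likelihood of observed choices under a
    softmax choice model. The gradient is the gap between observed and
    model-expected feature vectors (feature-expectation matching)."""
    w = np.zeros(2)
    n = len(choices)
    for _ in range(steps):
        logits = X @ w
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        observed = X[np.arange(n), choices].mean(axis=0)
        expected = (probs[:, :, None] * X).sum(axis=1).mean(axis=0)
        w += lr * (observed - expected)
    return w

X, choices = simulate_choices()
w_hat = fit_utility(X, choices)
print("recovered weights:", w_hat)
print("delay/cost weight ratio:", w_hat[1] / w_hat[0])  # roughly 2
```

Once such a utility estimate is available, the planner can score candidate appliance schedules under any future price trajectory, which is consistent with the abstract's observation that preference-based methods generalize to price settings unseen during training.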