Inspired by human learning mechanisms, a novel meta-heuristic algorithm named human learning optimization (HLO) is presented in this paper, in which an individual learning operator, a social learning operator, a random exploration learning operator, and a re-learning operator are developed to generate new solutions and search for optima by mimicking the human learning process. HLO is then applied to the well-known 5.100 and 10.100 multi-dimensional knapsack problems from the OR-library, and its performance is compared with that of other meta-heuristics collected from the recent literature. The experimental results show that HLO achieves the best performance among the compared meta-heuristics, demonstrating that it is a promising optimization tool.
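The learning operators described above can be sketched as a per-bit sampling rule over binary solutions, which suits the knapsack encoding. The following is a minimal illustration, not the paper's implementation: the probabilities `pr` (random exploration learning) and `pi` (individual learning) are assumed parameter names and values, the remaining probability mass goes to social learning, and the re-learning (re-initialization) operator is omitted for brevity.

```python
import random

def hlo_update_bit(ikd_bit, skd_bit, pr=0.1, pi=0.45):
    """Generate one bit of a new candidate solution.

    ikd_bit: bit from the individual's best solution so far
             (individual knowledge).
    skd_bit: bit from the population's best solution
             (social knowledge).
    pr, pi : illustrative operator probabilities (assumptions).
    """
    r = random.random()
    if r < pr:                       # random exploration learning
        return random.randint(0, 1)
    elif r < pr + pi:                # individual learning
        return ikd_bit
    else:                            # social learning
        return skd_bit

def hlo_new_solution(ikd, skd, pr=0.1, pi=0.45):
    """Build a full binary candidate bit by bit."""
    return [hlo_update_bit(i, s, pr, pi) for i, s in zip(ikd, skd)]
```

Setting `pr=0, pi=1` reproduces the individual's best solution exactly, while `pr=0, pi=0` copies the social best; intermediate values blend the three learning modes stochastically.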
Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large for real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks, including Long Short-Term Memory (LSTM) models. We pay particular attention to the high-dimensional output problem caused by the very large vocabulary size. We focus on compression methods that are effective in the context of on-device deployment: pruning, quantization, and matrix decomposition approaches (low-rank factorization and tensor train decomposition, in particular). For each model we investigate the trade-off between its size, inference speed, and perplexity. We propose a general pipeline for applying the most suitable of these methods to compress recurrent neural networks for language modeling. An experimental study on the Penn Treebank (PTB) dataset shows that matrix decomposition techniques give the most efficient results in terms of speed and the compression–perplexity balance.