Article
Wang L., Yang R., Ni H. et al. Applied Soft Computing Journal. 2015. Vol. 34. P. 736-743.

Inspired by human learning mechanisms, a novel meta-heuristic algorithm named human learning optimization (HLO) is presented in this paper, in which an individual learning operator, a social learning operator, a random exploration learning operator and a re-learning operator are developed to generate new solutions and search for optima by mimicking the human learning process. HLO is then applied to the well-known 5.100 and 10.100 multi-dimensional knapsack problems from the OR-Library, and its performance is compared with that of other meta-heuristics collected from the recent literature. The experimental results show that HLO achieves the best performance among the compared meta-heuristics, which demonstrates that HLO is a promising optimization tool.
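The operators named in the abstract can be illustrated with a minimal sketch of the HLO update scheme on a toy 0/1 knapsack instance. The operator probabilities (pr, pi), the repair heuristic, and the instance data below are illustrative assumptions, not taken from the paper, and the re-learning operator (re-initialization of stagnant learners) is omitted for brevity.

```python
import random

# Toy 0/1 knapsack instance; NOT one of the OR-Library 5.100 / 10.100 sets.
WEIGHTS = [12, 7, 11, 8, 9, 6, 14, 5]
VALUES = [24, 13, 23, 15, 16, 11, 28, 9]
CAPACITY = 40

def evaluate(bits):
    """Total value; infeasible solutions are repaired by dropping the
    least value-dense selected items until the weight constraint holds."""
    bits = bits[:]
    while sum(w for w, b in zip(WEIGHTS, bits) if b) > CAPACITY:
        picked = [i for i, b in enumerate(bits) if b]
        bits[min(picked, key=lambda i: VALUES[i] / WEIGHTS[i])] = 0
    return sum(v for v, b in zip(VALUES, bits) if b), bits

def hlo(pop_size=20, iters=200, pr=0.1, pi=0.4, seed=0):
    rng = random.Random(seed)
    n = len(WEIGHTS)
    # Individual knowledge database (each learner's own best solution)
    # and social knowledge database (the population's best solution).
    ikd = [evaluate([rng.randint(0, 1) for _ in range(n)])
           for _ in range(pop_size)]
    skd = max(ikd)
    for _ in range(iters):
        for k in range(pop_size):
            new = []
            for j in range(n):
                u = rng.random()
                if u < pr:                    # random exploration learning
                    new.append(rng.randint(0, 1))
                elif u < pr + pi:             # individual learning
                    new.append(ikd[k][1][j])
                else:                         # social learning
                    new.append(skd[1][j])
            cand = evaluate(new)
            if cand[0] > ikd[k][0]:           # update personal knowledge
                ikd[k] = cand
            if cand[0] > skd[0]:              # update social knowledge
                skd = cand
    return skd

best_value, best_bits = hlo()
print(best_value, best_bits)
```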

Added: Jun 24, 2015
Article
Grachev A., Ignatov D. I., Savchenko A. Applied Soft Computing Journal. 2019. Vol. 79. P. 354-362.

Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large for real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks, including Long Short-Term Memory (LSTM) models. We pay particular attention to the high-dimensional output problem caused by the very large vocabulary size. We focus on compression methods that are effective in the context of on-device deployment: pruning, quantization, and matrix decomposition approaches (low-rank factorization and tensor train decomposition, in particular). For each model we investigate the trade-off between its size, suitability for fast inference, and perplexity. We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling. An experimental study on the Penn Treebank (PTB) dataset shows that the most efficient results in terms of speed and compression-perplexity balance are obtained by matrix decomposition techniques.
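One of the surveyed methods, low-rank factorization of the vocabulary-sized output layer, can be sketched with a truncated SVD. The shapes, rank, and random stand-in matrix below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

h, V, r = 256, 10000, 32             # hidden size, vocabulary size, rank (assumed)
rng = np.random.default_rng(0)
W = rng.standard_normal((h, V))      # stand-in for a trained output weight matrix

# Truncated SVD yields the best rank-r approximation W ~= A @ B.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                 # shape (h, r)
B = Vt[:r, :]                        # shape (r, V)

print(f"parameter reduction: {W.size / (A.size + B.size):.1f}x")

# The output layer now costs two thin matmuls instead of one huge one.
hidden = rng.standard_normal((1, h))
logits = hidden @ A @ B              # approximates hidden @ W
```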

Added: Jun 12, 2019
Article
Pardalos P. M., Aydogan E., Karaoglan I. Applied Soft Computing Journal. 2012. Vol. 12. No. 2. P. 800-806.

The aim of this work is to propose a hybrid heuristic approach (called hGA), based on a genetic algorithm (GA) and an integer-programming formulation (IPF), to solve high-dimensional classification problems in linguistic fuzzy rule-based classification systems. In this algorithm each chromosome represents a rule for a specified class; the GA is used to produce several rules for each class, and the IPF is then used to select rules from the pool of rules obtained by the GA. The proposed algorithm is evaluated experimentally, using non-parametric statistical tests, on seventeen classification benchmark data sets, and results of the comparative study are reported.
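The two-stage structure described above can be sketched as follows: a toy GA evolves interval rules on synthetic one-dimensional data, and an exhaustive search over small rule subsets stands in for the paper's integer-programming selection step. All data, rule encodings, and parameters here are illustrative assumptions.

```python
import random
from itertools import combinations

random.seed(0)
# Synthetic 1-D two-class data: class 0 clusters near 0.3, class 1 near 0.7.
DATA = [(random.gauss(0.3 if c == 0 else 0.7, 0.12), c)
        for c in (0, 1) for _ in range(30)]

def coverage(rule):
    """Score a rule (lo, hi, cls): covered points of cls minus misclassified ones."""
    lo, hi, cls = rule
    hit = [c for x, c in DATA if lo <= x <= hi]
    correct = sum(1 for c in hit if c == cls)
    return correct - (len(hit) - correct)

def evolve_rules(cls, pop=30, gens=40, keep=5):
    """GA stage: evolve (lo, hi) interval rules for one class, keep the best."""
    rules = [tuple(sorted((random.random(), random.random()))) + (cls,)
             for _ in range(pop)]
    for _ in range(gens):
        rules.sort(key=coverage, reverse=True)
        parents = rules[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            lo = min(a[0], b[0]) + random.gauss(0, 0.02)   # crossover + mutation
            hi = max(a[1], b[1]) + random.gauss(0, 0.02)
            children.append((min(lo, hi), max(lo, hi), cls))
        rules = parents + children
    return sorted(rules, key=coverage, reverse=True)[:keep]

pool = evolve_rules(0) + evolve_rules(1)   # candidate rule pool from the GA

def accuracy(rule_set):
    """Classify each point by its best-scoring covering rule."""
    ok = 0
    for x, c in DATA:
        covering = [r for r in rule_set if r[0] <= x <= r[1]]
        if covering:
            ok += max(covering, key=coverage)[2] == c
    return ok / len(DATA)

# Selection stage: the paper formulates this as an integer program; here a
# brute-force search over small subsets of the pool stands in for it.
score, chosen = max((accuracy(s), s) for k in (1, 2, 3)
                    for s in combinations(pool, k))
print(f"train accuracy {score:.2f} using {len(chosen)} rules")
```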
Added: Feb 4, 2013