Structured Sparsification of Gated Recurrent Neural Networks
Lobacheva E., Chirkova N., Markovich A., Vetrov D.
in: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. Ch. 5938. AAAI Press, 2020. P. 4989-4996.
et al., in: Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Curran Associates, Inc., 2020. P. 14650-14662.
Added: February 14, 2021
Scientific Reports 2020 Vol. 10 P. 19134
Computational methods to predict Z-DNA regions are in high demand for understanding the functional role of Z-DNA. The previous state-of-the-art method, Z-Hunt, is based on statistical-mechanical and energy considerations about the B- to Z-DNA transition using sequence information. Z-DNA ChIP-seq experiment results showed little overlap with Z-Hunt predictions, implying that sequence information alone is not ...
Added: December 11, 2020
Applied Soft Computing Journal 2019 Vol. 79 P. 354-362
Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large to be implemented in real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks, including Long Short-Term Memory models. We pay particular attention ...
Added: June 12, 2019
et al., in: Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Curran Associates, Inc., 2020. P. 2375-2385.
Added: October 29, 2020
in: Workshop on Compact Deep Neural Network Representation with Industrial Applications, Thirty-second Conference on Neural Information Processing Systems. Montréal: [s.n.], 2018. P. 1-6.
Bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structural units, e.g. neurons, from the networks. We apply and further develop this approach for gated recurrent architectures. Specifically, in addition to sparsification of individual weights and neurons, we propose to sparsify the preactivations of gates and the information flow in LSTM. ...
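As a rough illustration of the gate-sparsification idea (a sketch only, not the authors' Bayesian procedure), a gate unit whose sigmoid activation barely varies across inputs can be replaced by a constant, so the sigmoid need not be computed at run time. The function name, tolerance, and shapes below are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def freeze_saturated_gates(preacts, tol=1e-2):
    # preacts: (batch, hidden) gate preactivations collected on sample data.
    acts = sigmoid(preacts)
    spread = acts.max(axis=0) - acts.min(axis=0)
    mask = spread < tol                       # units whose gate is near-constant
    constants = np.where(mask, acts.mean(axis=0), np.nan)
    return constants, mask

# Unit 0 is saturated open (large preactivation on every input); unit 1 varies.
preacts = np.array([[10.0, 0.0],
                    [10.0, -2.0]])
consts, mask = freeze_saturated_gates(preacts)
```

Frozen units keep a single stored constant (here, a value near 1.0 for the saturated unit), while varying units are left as ordinary gates.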
Added: December 5, 2018
et al., in: Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019). Issue W19-43. Association for Computational Linguistics, 2019. P. 40-48.
Reduction of the number of parameters is one of the most important goals in deep learning. In this article we propose an adaptation of Doubly Stochastic Variational Inference for Automatic Relevance Determination (DSVI-ARD) for neural network compression. We find this method to be especially useful in language modeling tasks, where the large number of parameters in ...
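A toy sketch of the ARD pruning rule (the numbers are hypothetical and the paper's DSVI training loop is not reproduced): each weight gets its own Gaussian prior precision, and weights whose learned precision grows large are shrunk toward zero and can be removed:

```python
import numpy as np

# Hypothetical posterior weight means and learned per-weight ARD precisions.
weights = np.array([0.8, -1.2, 0.001, 0.5])
alpha = np.array([0.5, 2.0, 1e6, 1e5])    # large precision => irrelevant weight

keep = alpha < 1e4                        # illustrative pruning threshold
sparse_weights = np.where(keep, weights, 0.0)
sparsity = 1.0 - keep.mean()              # fraction of weights removed
```

In an actual run, the precisions would be learned jointly with the variational posterior; here they are hard-coded to show only the thresholding step.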
Added: November 1, 2019
Системы высокой доступности [Highly Available Systems] 2018 Vol. 14 No. 4 P. 20-22
The article studies the use of machine learning algorithms in solving information security problems, namely in the construction of next-generation intrusion detection systems (IDS). The main drawbacks of traditional IDS (based on signature rules) are considered, and methods for addressing them with machine learning algorithms are proposed. The article presents new methods of ...
Added: February 26, 2019
in: Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialogue 2018". Moscow: Conference Proceedings Editorial Board, 2018. P. 85-95.
Morphological segmentation is an important task of natural language processing as it can significantly improve the processing of unfamiliar and rare words in different tasks that involve text data. In this paper we present datasets in English and Russian for learning and evaluating morphological segmentation algorithms, demonstrate the method based on the sequence to sequence ...
Added: October 9, 2020
in: Proceedings of the III International Conference on Information Technologies and Nanotechnologies (ITNT). Samara: Novaya Tekhnika, 2017. P. 649-654.
In this paper, we consider the excessive run-time and memory complexity of contemporary deep convolutional neural networks in the problem of image recognition. A survey of recent compression methods and efficient neural network architectures is provided. The experimental study focuses on the visual emotion recognition problem. We compare the computational speed and ...
Added: September 8, 2017
Continuous Gesture Recognition from sEMG Sensor Data with Recurrent Neural Networks and Adversarial Domain Adaptation
in: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). IEEE, 2018. P. 1436-1441.
Movement control of artificial limbs has made significant advances in recent years. New sensor and control technology has enhanced the functionality and usefulness of artificial limbs to the point that complex movements, such as grasping, can be performed to a limited extent. To date, the most successful results were achieved by applying recurrent neural networks (RNNs), ...
Added: January 18, 2019
in: 17th World Symposium on Applied Machine Intelligence and Informatics (SAMI). IEEE, 2019. Ch. 19. P. 113-116.
In this article, we focus on the isolated voice command recognition for autonomous man-machine and intelligent robotic systems. We propose to create a grammar model for a small testing command set with self-loops for each state to return blank symbols for noise and out-of-vocabulary words. In addition, we use single arc connected beginning and ending ...
Added: October 21, 2019
in: Proceedings of the IEEE International Russian Automation Conference (RusAutoCon 2020). IEEE, 2020. Ch. 110. P. 610-614.
In this paper, we address the problem of detecting small objects in high-quality X-ray images using deep neural networks. We propose a two-stage approach in which, first, the input image is split into partially overlapping blocks to make small objects more discriminative for detection. Second, the small blocks are fed into conventional single-shot detectors. These detectors ...
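The first stage of such a two-stage pipeline can be sketched as plain image tiling with overlap; the block size, overlap, and grid logic below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def split_into_blocks(image, block=512, overlap=64):
    # Yield (y, x, crop) so that per-block detections can later be shifted
    # back by (y, x) into full-image coordinates.
    h, w = image.shape[:2]
    stride = block - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            tiles.append((y, x, image[y:y + block, x:x + block]))
    return tiles

tiles = split_into_blocks(np.zeros((1024, 1536), dtype=np.uint8))
```

Note that tiles on the right and bottom edges may be smaller than the nominal block size; a production pipeline would pad them before feeding a fixed-input detector.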
Added: October 3, 2020
et al., PeerJ Computer Science 2022 Vol. 8 Article e865
Depth estimation has been an essential task for many computer vision applications, especially in autonomous driving, where safety is paramount. Depth can be estimated not only with traditional supervised learning but also via a self-supervised approach that relies on camera motion and does not require ground truth depth maps. Recently, major improvements have been introduced ...
Added: February 1, 2022
Berlin : Springer, 2019
This two-volume set, LNCS 11506 and LNCS 11507, constitutes the refereed proceedings of the 15th International Work-Conference on Artificial Neural Networks, IWANN 2019, held in Gran Canaria, Spain, in June 2019. The 150 revised full papers presented in this two-volume set were carefully reviewed and selected from 210 submissions. The papers are organized in topical sections ...
Added: July 29, 2019
Radio Physics and Radio Astronomy 2017 Vol. 22 No. 4 P. 270-275
Vast amounts of data are collected in the course of astronomical observations. The BSA (Big Scanning Antenna) of LPI, used in the study of impulse phenomena, logs 87.5 GB of data daily (32 TB per year). Experts classified 83,096 individual observations (over the study segment from July 2012 to October 2013). Over 75% of the sample ...
Added: October 15, 2017
et al., in: Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Curran Associates, Inc., 2021. P. 21545-21556.
Added: December 29, 2021
in: 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2021). Association for Computational Linguistics, 2021. P. 2679-2689.
Source code processing heavily relies on the methods widely used in natural language processing (NLP), but involves specifics that need to be taken into account to achieve higher quality. An example of this specificity is that the semantics of a variable is defined not only by its name but also by the contexts in which ...
Added: August 31, 2021
2018.
We propose a new Bayesian sparsification technique for gated recurrent architectures that accounts for their recurrent specifics and gating mechanism. Our method eliminates neurons from the model and makes gates constant, not only compressing the network but also significantly accelerating the forward pass. On discriminative tasks our method compresses the LSTM to such an extent that only ...
Added: October 16, 2018
et al., in: Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). [s.n.], 2018. P. 1-16.
We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an ...
Added: October 29, 2018
in: 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI). IEEE, 2021. P. 413-418.
This paper focuses on fine-tuning acoustic models for speaker adaptation on a given gender. We pretrained the Transformer baseline model on LibriSpeech-960 and conducted experiments with fine-tuning on the gender-specific test subsets. The obtained word error rate (WER) relative to the baseline is up to 5% and 3% lower on male ...
Added: September 26, 2021
Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations
et al., Neural Networks 2023 Vol. 161 P. 242-253
This paper investigates the approximation properties of deep neural networks with piecewise-polynomial activation functions. We derive the required depth, width, and sparsity of a deep neural network to approximate any Hölder smooth function up to a given approximation error in Hölder norms in such a way that all weights of this neural network are bounded ...
Added: July 13, 2022
On the Impact of Word Error Rate on Acoustic-Linguistic Speech Emotion Recognition: An Update for the Deep Learning Era
arXiv.org Computer Science preprint series, Cornell University, 2021.
Text encodings from automatic speech recognition (ASR) transcripts and audio representations have long shown promise in speech emotion recognition (SER). Yet, it is challenging to explain the effect of each information stream on SER systems. Further, more clarification is required when analysing the impact of ASR's word error rate (WER) on linguistic emotion ...
Added: November 17, 2020
et al., in: Workshop of the 6th International Conference on Learning Representations (ICLR). International Conference on Learning Representations, 2018. P. 1-6.
In this work, we investigate the Batch Normalization technique and propose its probabilistic interpretation. We propose a probabilistic model and show that Batch Normalization maximizes the lower bound of its marginalized log-likelihood. Then, according to the new probabilistic model, we design an algorithm which acts consistently during training and testing. However, inference becomes computationally inefficient. To ...
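A standard (non-probabilistic) batch normalization layer illustrates the train/test inconsistency the paper addresses: batch statistics are used during training, but running averages at test time. This is a minimal sketch of the conventional layer, not the authors' proposed algorithm:

```python
import numpy as np

def batch_norm(x, state, gamma, beta, momentum=0.1, eps=1e-5, training=True):
    # state = [running_mean, running_var], updated in place during training.
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        state[0] = (1 - momentum) * state[0] + momentum * mean
        state[1] = (1 - momentum) * state[1] + momentum * var
    else:
        mean, var = state  # different statistics at test time
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0], [3.0, 6.0]])
state = [np.zeros(2), np.ones(2)]
y = batch_norm(x, state, gamma=np.ones(2), beta=np.zeros(2))
```

During training the output is normalized with the current batch's mean and variance, while the running statistics drift toward them; at test time the same input would be normalized with the stored running statistics instead, which is exactly the inconsistency in question.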
Added: October 31, 2018
in: 1st Workshop on Learning to Generate Natural Language, International Conference on Machine Learning. [s.n.], 2017. P. 1-8.
Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights. The recently proposed Sparse Variational Dropout (Molchanov et al., 2017) eliminates the majority of the weights in a feed-forward neural network without significant loss of quality. We apply this technique to sparsify recurrent neural ...
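The pruning rule behind Sparse Variational Dropout can be sketched as thresholding the per-weight dropout rate log α = log(σ²/μ²); the threshold value and array shapes here are illustrative, and the variational training that produces μ and σ² is not shown:

```python
import numpy as np

def sparsify(mu, log_sigma2, threshold=3.0):
    # Weights with large log alpha are dominated by multiplicative noise
    # and are zeroed out after training.
    log_alpha = log_sigma2 - np.log(np.square(mu) + 1e-8)
    mask = log_alpha < threshold
    return np.where(mask, mu, 0.0), mask

mu = np.array([1.0, 1e-3])       # hypothetical posterior means
w, mask = sparsify(mu, np.zeros(2))
```

Here the second weight's mean is tiny relative to its noise, so its log α exceeds the threshold and the weight is removed.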
Added: October 30, 2018