Correcting capabilities of binary irregular LDPC codes under low-complexity iterative decoding algorithms
This paper deals with irregular binary low-density parity-check (LDPC) codes and two low-complexity iterative decoding algorithms: a majority-based error-correcting algorithm and an iterative erasure-correcting algorithm. Lower bounds on the correcting capabilities of irregular LDPC codes under these low-complexity algorithms (the guaranteed correctable error fraction and erasure fraction, respectively) are presented. These bounds are obtained by analyzing the Tanner graph representation of an irregular LDPC code. The numerical results given at the end of the paper show that the proposed lower bounds match the best previously known lower bounds for regular LDPC codes; for irregular LDPC codes such bounds are presented for the first time.
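The abstract does not reproduce the algorithm itself. As a rough illustration of the flavor of low-complexity majority decoding it refers to, here is a minimal Gallager-style bit-flipping sketch; the function name, the toy parity-check matrix, and the stopping rule are illustrative, not the paper's construction:

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Gallager-style bit-flipping (majority) decoding sketch.

    H: binary parity-check matrix, shape (checks, bits)
    y: received hard-decision bit vector
    """
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x  # all parity checks satisfied
        # for each bit, count the unsatisfied checks it participates in
        unsat = H.T @ syndrome
        # flip the bits involved in the maximal number of unsatisfied checks
        x = (x + (unsat == unsat.max())) % 2
    return x

# toy example: (7,4) Hamming-style parity-check matrix, single error in bit 0
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
y = np.array([1, 0, 0, 0, 0, 0, 0])  # all-zero codeword with bit 0 flipped
decoded = bit_flip_decode(H, y)      # recovers the all-zero codeword
```

The per-iteration cost is a handful of sparse matrix-vector products over GF(2), which is what makes decoders of this family attractive as "low-complexity" compared to belief propagation.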
A modification of the q-ary Sum-Product decoding Algorithm (q-SPA) is proposed for nonbinary codes with low check density whose parity-check matrices are built from permutation matrices. The described algorithm has a vector realization and operates on vectors defined over the field GF(q) rather than on individual symbols. For certain code parameters, this approach yields a significant speedup of simulation.
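As a minimal sketch of the vector-realization idea (illustrative only, not the paper's q-SPA): when an edge of the Tanner graph carries a permutation matrix, the corresponding length-q message vector can be transformed in one vector operation instead of symbol by symbol.

```python
import numpy as np

q = 8
rng = np.random.default_rng(0)
msg = rng.random(q)            # a message: beliefs over the q elements of GF(q)
perm = rng.permutation(q)      # permutation defined by the edge's permutation matrix

# symbol-by-symbol processing
out_scalar = np.empty_like(msg)
for i in range(q):
    out_scalar[i] = msg[perm[i]]

# vector realization: one fancy-indexing operation on the whole GF(q)-indexed vector
out_vector = msg[perm]

assert np.array_equal(out_scalar, out_vector)
```

The speedup in simulation comes from replacing many such per-symbol loops with single vectorized operations over GF(q)-indexed arrays.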
Non-orthogonal multiple access schemes are of great interest for next-generation wireless systems, since they reduce the total number of resources (frequencies or time slots) required in comparison to orthogonal transmission (TDMA, FDMA, CDMA). In this paper we consider an iterative LDPC-based joint decoding scheme suggested in . We investigate the most difficult and important setting, in which all users have the same power constraint and the same rate. For the case of 2 users we use a known scheme and analyze it by means of simulations, finding the optimal ratio between the numbers of inner and outer iterations. We then extend the scheme to any number of users and investigate the cases of 3 and 4 users by simulation. Finally, we show that the considered non-orthogonal transmission scheme is more efficient (for 2 and 3 users) than orthogonal transmission.
Fast algorithms for decoding linear block codes.
This proceedings publication is a compilation of selected contributions from the “Third International Conference on the Dynamics of Information Systems,” which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own areas of study.
Error commission leads to adaptive adjustments in a number of brain networks that subserve goal-directed behavior, resulting in either enhanced stimulus processing or an increased motor threshold, depending on the nature of the errors committed. Here, we studied these adjustments by analyzing post-error modulations of alpha and theta band activity in the auditory version of the two-choice condensation task, which is highly demanding of sustained attention while involving no inhibition of prepotent responses. Errors were followed by increased frontal midline theta (FMT) activity, as well as by enhanced alpha band suppression in the parietal and left central regions; parietal alpha suppression correlated with task performance, left central alpha suppression correlated with post-error slowing, and the FMT increase correlated with both behavioral measures. On post-error correct trials, left-central alpha band suppression started earlier before the response, and the response was followed by weaker FMT activity, as well as by enhanced alpha band suppression distributed over the entire scalp. These findings indicate that several separate neuronal networks are involved in post-error adjustments, including the midfrontal performance monitoring network, the parietal attentional network, and the sensorimotor network. Presumably, activity within these networks is rapidly modulated after errors, optimizing their functional state on subsequent trials, with corresponding changes in behavioral measures.
The Corpus of Russian Student Texts (CoRST) is a computational and research project started in 2013 at the Linguistic Laboratory for Corpora Research Technologies at HSE. It comprises a collection of Russian texts written by students from various Russian universities. Its main research goal is to examine language deviations viewed as markers of language change. CoRST is supplied with metalinguistic, morphological, and error annotation that makes it possible to build custom subcorpora and to search by various error types. Its error annotation is based on a modular classification (lexis, grammar, and discourse), within which the most frequent error phenomena are further distinguished. In total, the error classification encompasses 39 error tags (20 higher-level and 19 lower-level). A crucial characteristic of CoRST is that the error annotation is multi-layered: since an erroneous segment can typically be corrected in several ways, it is annotated with several error tags accordingly. Moreover, the corpus supports search by two possible explanatory factors, typo and construction blending. The prospects for CoRST development have both computational and research aspects, including qualitative and statistical comparative analysis of language phenomena in CoRST and the NRC.
An expression for an unbiased estimate of the coefficient of determination of a linear regression model is obtained. The estimate is computed from a sample drawn from a multivariate normal distribution and is proposed as an alternative criterion for the choice of regression factors.
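The abstract does not reproduce the paper's exact estimator. For orientation only, a common degrees-of-freedom correction of the naive coefficient of determination (the adjusted R², which reduces but does not fully remove the upward bias) can be sketched as follows; the paper's unbiased estimator under multivariate normality differs from this standard correction:

```python
import numpy as np

def r_squared(y, y_hat):
    """Naive coefficient of determination (biased upward as regressors are added)."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n, p):
    """Degrees-of-freedom correction for n samples and p regressors.

    Note: this is the common adjusted R^2, not the paper's exact unbiased
    estimator for the multivariate normal case.
    """
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

Because the naive R² never decreases when a factor is added, a bias-corrected estimate of this kind is what makes R²-style criteria usable for choosing regression factors, which is the motivation stated in the abstract.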