Detecting single-photon loss events in quantum signals
We design a scheme for detecting the loss of a single photon from multi-mode quantum signals transmitted via fiber or in free space. The scheme consists of a special type of unitary coding transformation, the parity-controlled squeezing, applied prior to transmission to a signal composed of information and ancilla modes. At the receiver, the inverse unitary transformation (decoding) is applied, and the ancilla mode is measured via photon detection. The outcome reveals whether a photon loss has occurred. Distortion of the information part of the signal caused by the loss of an ancilla photon can be corrected via a unitary transformation, while the loss of a photon from the information part of the signal can be detected with probability exponentially close to unity but cannot be corrected. In contrast to decoherence-free-subspace schemes and quantum error correction protocols, this method in principle allows one to make use of the entire dimensionality of the Hilbert space. We discuss possible ways of synthesizing the required encoding-decoding transformation.
In the paper, a new construction of the theory of partitions of integers is proposed. The author defines entropy as the natural logarithm of the number of partitions of a number M into natural summands, with repetitions allowed, p(M), and with repetitions forbidden, q(M). The passage from ln p(M) to ln q(M) through the mesoscopic values M → 0 is studied. The topological transition from the mesoscopic lower levels of the Bohr–Kalckar construction to the macroscopic levels corresponding to the critical number of neutrons, as implied by Einstein's inequality M ≤ cN_c, where c is determined for the particles of the given atomic nucleus, is also studied. The role of quantum mechanics in establishing the new world outlook in physics is analyzed. It is pointed out that the main equations of thermodynamics in the volume "Statistical Physics" of the Landau–Lifshitz treatise are obtained without appealing to the so-called "three main principles of thermodynamics". It is also pointed out that Niels Bohr's liquid model of the nucleus does not involve any interaction of particles in the form of attraction and is based on the presence of a common potential trough for all elements of the nucleus. The author constructs a new approach to thermodynamics, using quantum mechanics and the Earth's gravitational attraction as a common potential trough.
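The two counting functions underlying the entropy definition above are standard combinatorial objects and can be computed by dynamic programming. The sketch below (our illustration, not the paper's construction) computes p(M), the number of partitions of M with repetitions allowed, and q(M), with repetitions forbidden (distinct parts), along with their natural logarithms:

```python
from math import log

def partitions_with_repetition(M: int) -> int:
    # dp[m] = number of partitions of m using parts considered so far;
    # a forward inner loop lets each part be reused any number of times.
    dp = [1] + [0] * M
    for part in range(1, M + 1):
        for m in range(part, M + 1):
            dp[m] += dp[m - part]
    return dp[M]

def partitions_distinct(M: int) -> int:
    # Same scheme, but a backward inner loop restricts each part
    # to at most one use (repetitions forbidden).
    dp = [1] + [0] * M
    for part in range(1, M + 1):
        for m in range(M, part - 1, -1):
            dp[m] += dp[m - part]
    return dp[M]

M = 100
p, q = partitions_with_repetition(M), partitions_distinct(M)
print(p, q)                 # p(100) = 190569292
print(log(p), log(q))       # the "entropies" ln p(M) and ln q(M)
```

For small M the values are easy to check by hand: p(5) = 7 and q(5) = 3 (5, 4+1, 3+2).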
The volume contains the proceedings of the XIII International Symposium on Problems of Redundancy in Information and Control Systems.
This paper concerns a new and effective method of joint data coding/modulation that may improve the energy efficiency and energy savings of modern wireless transmission systems. The method requires a priori knowledge of the probability distribution of the input data in order to map the data to modulation symbols in the most efficient way.
The key idea of the proposed method of Statistical Modulation is to map the most frequent input values to the modulation symbols with the lowest energy. To estimate the benefit, we apply the approach to the well-known Quadrature Amplitude Modulation (QAM): the most frequent input symbols are mapped to the lowest-energy QAM constellation points. As a result, the average energy needed for data transmission is much smaller, which allows the distance between QAM constellation points to be increased for the same average energy. A better Bit Error Rate (BER) is therefore achievable at the same Signal-to-Noise Ratio (SNR) than with standard QAM, which does not utilize the probabilities of the input symbols.
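The mapping rule can be sketched in a few lines. The example below (our illustration, with a hypothetical geometric input distribution standing in for a skewed source) pairs input symbols sorted by descending probability with 16-QAM constellation points sorted by ascending energy, and compares the resulting average transmit energy with the uniform-use average of standard QAM:

```python
import numpy as np

# 16-QAM constellation on the grid {-3, -1, 1, 3} x {-3, -1, 1, 3}
levels = np.array([-3, -1, 1, 3])
points = np.array([complex(i, q) for i in levels for q in levels])
energies = np.abs(points) ** 2          # per-point energy |I|^2 + |Q|^2

# Hypothetical skewed input distribution (geometric, 16 symbols)
probs = 0.5 ** np.arange(1, 17)
probs /= probs.sum()

# Statistical Modulation: most probable symbol -> lowest-energy point
avg_energy_sm = np.sum(np.sort(probs)[::-1] * np.sort(energies))

# Standard QAM baseline: all constellation points used uniformly
avg_energy_qam = energies.mean()

print(avg_energy_sm, avg_energy_qam)    # SM average is much smaller
```

With this input distribution the average energy drops from 10 to about 2.5, which is the headroom that can be traded for a larger minimum distance at the same average power.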
In our research, we compared the new Statistical QAM (SQAM) with traditional QAM (which does not utilize the probabilities of the input symbols) for the case of an exponential distribution of input symbols. Our experiments and theoretical calculations show that SQAM with exponentially distributed input provides up to a 3 dB gain in the BER-SNR characteristic.
This method may be applied to improve the BER-SNR characteristic and to reduce the power consumption of the whole transmission system. Potential application areas include M2M communications, IIoT, mobile networks, and other scenarios in which power consumption, battery life, and latency are critical.
Background Reliable and comparable data on causes of death are crucial for public health analysis, but the usefulness of these data can be markedly diminished when the approach to coding is not standardized across territories and/or over time. Because the Russian system of producing information on causes of death is highly decentralized, there may be discrepancies in the coding practices employed across the country. In this study, we evaluate the uniformity of cause-of-death coding practices across Russian regions using an indirect method.
Methods Based on 2002–2012 mortality data, we estimate the prevalence of the major causes of death (70 causes) in the mortality structures of 52 Russian regions. For each region-cause combination, we measure the degree to which the share of a certain cause in the mortality structure of a certain region deviates from the respective inter-regional average share. We use heat map visualization and a regression model to determine whether there is regularity in the causes and the regions that are more likely to deviate from the average level across all regions. In addition to analyzing the comparability of cause-specific mortality structures in a spatial dimension, we examine the regional cause-of-death time series to identify the causes with temporal trends that vary greatly across regions.
Results A high level of consistency was found both across regions and over time for transport accidents, most of the neoplasms, congenital malformations, and perinatal conditions. However, a high degree of inconsistency was found for mental and behavioral disorders, diseases of the nervous system, endocrine disorders, ill-defined causes of death, and certain cardiovascular diseases. This finding suggests that the coding practices for these causes of death are not uniform across regions. The level of consistency improves when causes of death can be grouped into broader diagnostic categories.
Conclusion This systematic analysis allows us to present a broader picture of the quality of cause-of-death coding at the regional level. For some causes of death, there is a high degree of variance across regions in the likelihood that these causes will be chosen as the underlying cause. In addition, for some causes of death the mortality statistics reflect the coding practices rather than the real epidemiological situation.
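The deviation measure described in the Methods section can be sketched as follows. The example uses randomly generated illustrative counts (not the study's data) with the study's dimensions of 52 regions and 70 causes: within-region cause shares are compared against the inter-regional average share of each cause, and the resulting matrix is what a heat map would render:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_causes = 52, 70

# Illustrative death counts per region-cause cell (synthetic data)
deaths = rng.integers(1, 1000, size=(n_regions, n_causes)).astype(float)

# Share of each cause within each region's mortality structure
shares = deaths / deaths.sum(axis=1, keepdims=True)

# Inter-regional average share per cause, and relative deviation per cell
avg_share = shares.mean(axis=0)
deviation = (shares - avg_share) / avg_share

# Cells with large |deviation| flag region-cause combinations where coding
# practice may diverge; the full matrix is rendered as a regions x causes
# heat map in this kind of analysis.
print(deviation.shape)      # (52, 70)
```

By construction the deviations of each cause average to zero across regions, so the heat map highlights which regions sit above or below the common level for each cause.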
The dynamics of a two-component Davydov-Scott (DS) soliton with a small mismatch of the initial location or velocity of the high-frequency (HF) component was investigated within the framework of a Zakharov-type system of two coupled equations for the HF and low-frequency (LF) fields. In this system, the HF field is described by the linear Schrödinger equation with a potential generated by the LF component varying in time and space. The LF component is described by the Korteweg-de Vries equation with a term accounting for the quadratic influence of the HF field on the LF field. The oscillation frequency of the DS soliton's components was found analytically using the balance equation. The perturbed DS soliton was shown to be stable. The analytical results were confirmed by numerical simulations.
Radiation conditions are described for various space regions, radiation-induced effects in spacecraft materials and equipment components are considered, and information on theoretical, computational, and experimental methods for studying radiation effects is presented. The peculiarities of radiation effects on nanostructures and some problems related to the modeling and radiation testing of such structures are also considered.