Advances in Intelligent Systems and Computing book series Vol. 1127. Advances in Intelligent Systems, Computer Science and Digital Economics
This book comprises high-quality, refereed research papers presented at the 2019 International Symposium on Computer Science, Digital Economy and Intelligent Systems (CSDEIS2019). The symposium, held in Moscow, Russia, on 4–6 October 2019, was organized jointly by Moscow State Technical University and the International Research Association of Modern Education and Computer Science. The book discusses the state of the art in areas such as computer science and its technological applications; intelligent systems and intellectual approaches; and digital economics and methodological approaches. It is an excellent reference resource for researchers, undergraduate and graduate students, engineers, and management practitioners interested in computer science and its applications in engineering and management.
We study the problem of designing an optimal two-dimensional circularly symmetric convolution kernel (or point spread function (PSF)) with a circular support of a chosen radius R. Such a function is optimal for estimating an unknown signal (image) from an observation obtained through a convolution-type distortion with additive random noise. The technique is then generalized to the case of an imprecisely known or random PSF of the measurement distortion. It is shown that the construction of the optimal convolution kernel reduces to a one-dimensional Fredholm equation of the first or second kind on the interval [0, R]. If the reconstruction PSF is sought in a finite-dimensional class of functions, the problem naturally reduces to a finite-dimensional optimization problem, or even to a system of linear equations. We also analyze how reconstruction quality depends on the radius of the convolution kernel, which allows finding a good balance between computational complexity and the quality of the image reconstruction.
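The reduction to a one-dimensional Fredholm equation can be illustrated numerically. The following is a minimal sketch, not the paper's method: it assumes a hypothetical kernel K(r, s) and right-hand side g(r), discretizes a Fredholm equation of the second kind, f(r) + ∫₀ᴿ K(r, s) f(s) ds = g(r), with the trapezoidal rule, and solves the resulting linear system.

```python
import numpy as np

R, n = 1.0, 200
r = np.linspace(0.0, R, n)
h = r[1] - r[0]

# Trapezoidal quadrature weights on [0, R]
w = np.full(n, h)
w[0] = w[-1] = h / 2

# Illustrative (hypothetical) kernel and right-hand side
K = np.exp(-np.abs(r[:, None] - r[None, :]))
g = np.ones(n)

# Discretized equation (I + K·diag(w)) f = g
A = np.eye(n) + K * w[None, :]
f = np.linalg.solve(A, g)
```

Restricting f to a finite-dimensional class of functions would replace the n×n quadrature system with a smaller system in the expansion coefficients, as the abstract notes.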
Procedures for the sequential updating of information are important for processing "big data streams" because they avoid accumulating and storing large data sets. As a model of information accumulation, we study the Bayesian updating procedure for linear experiments. Analysis and gradual transformation of the original processing scheme in order to increase its efficiency lead to certain mathematical structures: information spaces. We show that processing can be simplified by introducing a special intermediate form of information representation. Thanks to the rich algebraic properties of the corresponding information space, this representation unifies and increases the efficiency of information updating. It also leads to various parallelization options for the inherently sequential Bayesian procedure, which are suited for distributed data-processing platforms such as MapReduce. Besides, we will see how a certain formalization of the concept of information and its algebraic properties can arise simply from adapting data processing to big-data demands. The approaches and concepts developed in the paper increase the efficiency and uniformity of data processing and present a systematic approach to transforming sequential processing into parallel processing.
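The parallelization idea can be sketched with a standard information-form Gaussian update, which is one common concrete instance of such an additive intermediate representation (the paper's own formalization may differ). Assuming unit-variance observation noise and a standard-normal prior, each batch of a linear experiment y = Hx + ε is summarized by the pair (HᵀH, Hᵀy); because these summaries add, the sequential fold and an order-free (MapReduce-style) reduction yield the same posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
# Simulated batches of a linear experiment y_i = H_i x + noise
batches = [(rng.normal(size=(5, d)), rng.normal(size=5)) for _ in range(4)]

def info(H, y):
    # Intermediate information-form summary of one batch
    return H.T @ H, H.T @ y

# Sequential Bayesian updating: fold batches one at a time
Lam, eta = np.eye(d), np.zeros(d)  # prior precision and information vector
for H, y in batches:
    dL, de = info(H, y)
    Lam, eta = Lam + dL, eta + de
x_seq = np.linalg.solve(Lam, eta)

# Parallel reduce: summaries are additive, so batch order is irrelevant
parts = [info(H, y) for H, y in batches]  # the "map" step
Lam_p = np.eye(d) + sum(p[0] for p in parts)  # the "reduce" step
eta_p = sum(p[1] for p in parts)
x_par = np.linalg.solve(Lam_p, eta_p)
# x_seq and x_par coincide (up to floating-point roundoff)
```

The additivity (commutativity and associativity) of the summaries is exactly what lets the inherently sequential procedure be distributed across workers.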