### Book chapter

## Introduction to the Theory of Randomized Machine Learning

We propose a new machine learning concept called Randomized Machine Learning, in which model parameters are assumed random and data are assumed to contain random errors. This approach differs from “classical” machine learning in that optimal estimation deals with the probability density functions of the random parameters and the “worst” probability density of the random data errors. As the optimality criterion of estimation, Randomized Machine Learning employs generalized information entropy maximized on a set described by a system of empirical balances. We apply this approach to text classification and dynamic regression problems. The results illustrate the capabilities of the approach.
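The abstract does not spell out the estimation procedure, but the core principle — entropy maximization on a set cut out by an empirical balance constraint — can be illustrated on a toy discrete problem. The sketch below is not the Randomized Machine Learning algorithm itself: the support, the single mean-balance constraint, and the function name are invented for illustration. It uses the standard fact that the entropy maximizer under a linear constraint has exponential form, and finds the Lagrange multiplier by bisection.

```python
import math

def maxent_distribution(support, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution on `support` whose mean equals
    `target_mean` (a single 'empirical balance'), found by bisection
    on the Lagrange multiplier: the maximizer is p_i ~ exp(lam * x_i)."""
    def mean_for(lam):
        weights = [math.exp(lam * x) for x in support]
        z = sum(weights)
        return sum(w * x for w, x in zip(weights, support)) / z

    # mean_for is increasing in lam, so bisection applies.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(lam * x) for x in support]
    z = sum(weights)
    return [w / z for w in weights]

# Toy example: most "uncertain" distribution on {0, 1, 2} with mean 1.3.
p = maxent_distribution([0, 1, 2], 1.3)
```

When the balance constraint is inactive (here, `target_mean = 1.0` on a symmetric support), the method recovers the uniform distribution, as entropy maximization should.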

Properties of the Erdős measure and the invariant Erdős measure for the golden ratio and all values of the Bernoulli parameter are studied. It is proved that the shift on the two-sided Fibonacci compact set with the invariant Erdős measure is isomorphic to the integral automorphism over a Bernoulli shift with a countable alphabet. An effective algorithm for calculating the entropy of the invariant Erdős measure is proposed. It is shown that, for certain values of the Bernoulli parameter, the algorithm gives the Hausdorff dimension of the Erdős measure to 15 decimal places.
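The abstract's algorithm for the entropy of the Erdős measure is not reproduced here. As background context only, the sketch below computes a much simpler related quantity: the topological entropy of the underlying Fibonacci subshift (0/1 sequences with no two consecutive 1s), which equals the logarithm of the golden ratio. Admissible words are counted by the standard two-state recursion; the function name is illustrative.

```python
import math

def count_admissible(n):
    """Number of 0/1 words of length n with no factor '11'
    (the admissible words of the Fibonacci subshift)."""
    # end0 / end1: admissible words of the current length ending in 0 / 1.
    end0, end1 = 1, 1  # length-1 words: "0" and "1"
    for _ in range(n - 1):
        # appending 0 is always allowed; appending 1 only after a 0
        end0, end1 = end0 + end1, end0
    return end0 + end1

# log(number of words)/n converges to log of the golden ratio.
phi = (1 + math.sqrt(5)) / 2
h_top = math.log(count_admissible(60)) / 60  # close to log(phi) ~ 0.4812
```

The counts are Fibonacci numbers (2, 3, 5, 8, ...), so the growth rate, and hence the topological entropy, is log φ; the measure-theoretic entropy of the Erdős measure treated in the paper is strictly smaller and requires the dedicated algorithm.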

A new approach to network decomposition problems (and hence to classification problems presented in network form) is suggested. In contrast to the conventional approach, which constructs a single “most correct” decomposition (classification), the suggested approach focuses on constructing a family of classifications. Based on this family, two numerical indices are introduced and calculated. These indices describe the complexity of the initial classification problem as a whole. The expedience and applicability of the elaborated approach are illustrated by two well-known and important cases: a political voting body and a stock market. In both cases the presented results cannot be obtained by other known methods, which confirms the promise of the suggested approach.

*Proceedings* (ISSN 2504-3900) publishes papers resulting from conferences, workshops, and similar events.

In this paper, we consider a large class of hierarchical congestion population games. One can show that the equilibrium in a game of this type can be described as a minimum point of a properly constructed multi-level convex optimization problem. We propose a fast primal-dual composite gradient method and apply it to the problem dual to the one describing the equilibrium in the considered class of games. We prove that this method makes it possible to find an approximate solution of the initial problem without increasing the complexity.
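The abstract does not detail the fast primal-dual method, so the sketch below shows only the basic building block of composite gradient schemes: a proximal-gradient (ISTA-style) iteration on a toy problem, minimizing a smooth quadratic plus an l1 term. The diagonal quadratic, the step size, and all names are invented for illustration; the paper's method is a different, accelerated primal-dual variant.

```python
def soft_threshold(v, t):
    """Prox operator of t * ||x||_1, applied componentwise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def ista(diag, a, lam, step, iters=500):
    """Proximal-gradient iterations for the composite problem
    min_x 0.5 * sum_i diag[i] * (x[i] - a[i])**2 + lam * ||x||_1:
    a gradient step on the smooth part, then the prox of the l1 part."""
    x = [0.0] * len(a)
    for _ in range(iters):
        grad = [d * (xi - ai) for d, xi, ai in zip(diag, x, a)]
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)
    return x

# Toy run: the minimizer is x_i = soft_threshold(a_i, lam / diag_i).
x = ista([1.0, 2.0], [3.0, -1.0], lam=0.5, step=0.5)
```

For this separable problem the fixed point can be checked in closed form, which makes the sketch easy to validate; accelerated and primal-dual variants change the update schedule but keep the same gradient-plus-prox structure.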

A formula for calculating the entropy and the Hausdorff dimension of the invariant Erdős measure for the pseudogolden ratio and all values of the Bernoulli parameter is obtained. This formula makes it possible to calculate the entropy and the Hausdorff dimension with high accuracy.