### Book chapter

## Algorithmic Statistics Revisited

A survey of the main results in algorithmic statistics


In algorithmic statistics, the quality of a statistical hypothesis (a model) P for data x is measured by two parameters: the Kolmogorov complexity of the hypothesis and the probability P(x). A class of models S_{ij} that are optimal in this sense was discovered. However, these models are too abstract. To restrict the class of hypotheses for a data string, Vereshchagin introduced the notion of a strong model for it. An object is called normal if it can be explained using strong models no worse than without this restriction. In this paper we show that there are “many types” of normal strings. Our second result states that there is a normal object x such that no model S_{ij} is strong for x. Our last result states that every best-fit strong model for a normal object is again a normal object.
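The tradeoff between the two parameters can be illustrated with finite-set models: describing x by a set S containing it costs roughly C(S) + log2|S| bits (first the set, then the index of x within it). A minimal sketch in Python, where the model complexities are hand-picked stand-in values, since true Kolmogorov complexity is incomputable:

```python
import math

# Toy two-part description lengths for finite-set models of a string x.
# For a model S containing x the cost is roughly C(S) + log2|S| bits:
# describe the set, then the index of x inside it.  The "complexity"
# values below are illustrative stand-ins, not real Kolmogorov
# complexities (which are incomputable).

x = "0101" * 8  # a 32-bit string with an obvious regularity

models = {
    # model name: (stand-in complexity of S in bits, |S|)
    "all 32-bit strings": (5, 2 ** 32),           # simple model, huge set
    "32-bit repetitions of a 4-bit block": (15, 2 ** 4),
    "the singleton {x}": (32, 1),                 # overfit: C(S) is about C(x)
}

for name, (c_model, size) in models.items():
    total = c_model + math.log2(size)
    print(f"{name}: {c_model} + log2({size}) = {total:g} bits")
```

The middle model minimizes the total: it is simple yet captures the string's regularity, which is exactly the balance between the two parameters that algorithmic statistics studies.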

Antistochastic strings are those strings that do not have any reasonable statistical explanation. We establish the following property of such strings: every antistochastic string *x* is “holographic” in the sense that it can be restored by a short program from any of its parts whose length equals the Kolmogorov complexity of *x*. We further show how this property can be used for list decoding from erasures, and prove that symmetry of information fails for total conditional complexity.

Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. Once this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If so, how often do such bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes from algorithmic information theory. In this theory, a notion of the complexity of a finite object (= the amount of information in it) is introduced; it assigns to every object a number, called its algorithmic (or Kolmogorov) complexity. Algorithmic statistics provides a more fine-grained classification: for each finite object a curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve. In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information (Kolmogorov complexity) theory.
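Since Kolmogorov complexity is incomputable, in practice it can only be bounded from above, for instance by a real compressor. A minimal sketch of this idea (zlib here is just a stand-in for any compressor; the bound it gives may be far from the true complexity):

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length in bits of a zlib-compressed encoding of `data`.
    Any compressor yields only an upper bound on the (incomputable)
    Kolmogorov complexity of `data`."""
    return 8 * len(zlib.compress(data, 9))

regular = b"ab" * 500            # 1000 bytes with an obvious pattern
random_like = os.urandom(1000)   # incompressible with high probability

print(complexity_upper_bound(regular))      # far below 8000 bits
print(complexity_upper_bound(random_like))  # around 8000 bits or more
```

The regular string compresses to a tiny fraction of its raw 8000 bits, while random bytes do not compress at all, mirroring the low- and high-complexity extremes that the complexity measure distinguishes.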

Algorithmic statistics looks for models of observed data that are good in the following sense: a model is simple (i.e., has small Kolmogorov complexity) and captures all the algorithmically discoverable regularities in the data. However, this idea cannot be used in practice as is, because Kolmogorov complexity is not computable. In this paper we develop a version of algorithmic statistics that uses space-bounded Kolmogorov complexity. We prove a space-bounded version of a basic result from “classical” algorithmic statistics, the connection between optimality and randomness deficiencies. The main tool is the Nisan–Wigderson pseudo-random generator. An extended abstract of this paper was presented at the 12th International Computer Science Symposium in Russia (Milovanov 10).

The notion of a strong sufficient statistic was introduced in [N.~Vereshchagin, Algorithmic Minimal Sufficient Statistic Revisited. \emph{Proc. 5th Conference on Computability in Europe}, CiE 2009, LNCS 5635, pp. 478-487]. In this paper, we give a survey of useful properties of strong sufficient statistics and show that there are strings for which the complexity of every strong sufficient statistic is much larger than the complexity of the minimal sufficient statistic.

We introduce the notion of a strong sufficient statistic for a given data string. We show that strong sufficient statistics have better properties than sufficient statistics in general. We prove that there are “strange” data strings whose minimal strong sufficient statistic has much larger complexity than their minimal sufficient statistic.