Some Properties of Antistochastic Strings
Algorithmic statistics is a part of algorithmic information theory (Kolmogorov complexity theory) that studies the following task: given a finite object x (say, a binary string), find an `explanation' for it, i.e., a simple finite set that contains x and in which x is a `typical element'. Both notions (`simple' and `typical') are defined in terms of Kolmogorov complexity. It is known that this cannot always be achieved: there are ``non-stochastic'' objects that do not have good explanations. In this paper we study the properties of maximally non-stochastic objects, which we call ``antistochastic''. We demonstrate that antistochastic strings have the following property: if an antistochastic string x has complexity k, then any k bits of information about x are enough to reconstruct x (with logarithmic advice). In particular, if we erase all but k bits of such an antistochastic string, the erased bits can be restored from the remaining ones (with logarithmic advice). As a corollary we obtain the existence of good list-decoding codes for erasures (or other ways of deleting part of the information). Antistochastic strings can also be used as a source of counterexamples in algorithmic information theory: we show that the symmetry of information property fails for total conditional complexity on antistochastic strings.
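For context, the notions `simple' and `typical' invoked above are usually made precise via (α, β)-stochasticity. The following is a sketch in the standard notation of algorithmic statistics; the exact constants and conditional forms vary between papers and are an assumption here, not spelled out in the abstract:

```latex
% (\alpha,\beta)-stochasticity (standard definition, assumed notation):
% a string x is (\alpha,\beta)-stochastic if there exists a finite set A
% containing x such that
\[
  C(A) \le \alpha
  \qquad\text{and}\qquad
  \log_2 |A| - C(x \mid A) \le \beta .
\]
% The left inequality says the model A is simple (complexity at most \alpha);
% the right one says x is typical in A (its randomness deficiency in A is at
% most \beta). Antistochastic strings of length n are those that fail to be
% (\alpha,\beta)-stochastic unless \alpha + \beta is close to n.
```

On this reading, the paper's reconstruction property says that for an antistochastic x of complexity k, every "k-bit piece" of information about x already determines x up to O(log n) advice bits.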
In algorithmic statistics, the quality of a statistical hypothesis (a model) P for data x is measured by two parameters: the Kolmogorov complexity of the hypothesis and the probability P(x). A class of models $S_{ij}$ that are optimal from this point of view was discovered earlier. However, these models are too abstract. To restrict the class of hypotheses for a given data string, Vereshchagin introduced the notion of a strong model for it. An object is called normal if it can be explained by strong models at least as well as without this restriction. In this paper we show that there are ``many types'' of normal strings. Our second result states that there is a normal object x such that none of the models $S_{ij}$ is strong for x. Our last result states that every best-fit strong model for a normal object is again a normal object.
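The two-parameter quality measure mentioned at the start of this abstract can be written out as follows; the formulation below is the standard one for probabilistic and finite-set models and is an assumption on my part, not quoted from the abstract:

```latex
% Quality of a model P for data x (standard two-parameter formulation):
\[
  C(P) \quad\text{(complexity of the hypothesis)}
  \qquad\text{and}\qquad
  -\log_2 P(x) \quad\text{(code length of } x \text{ under } P\text{)} .
\]
% For a finite-set model, i.e., the uniform distribution on a set A \ni x,
% the second parameter becomes \log_2 |A|, matching the set models of the
% first abstract. A model is a good explanation when both parameters are
% small simultaneously (their sum is close to C(x)).
```

The models $S_{ij}$ referred to in the abstract are, in this framework, the canonical models that realize the optimal trade-off between these two parameters.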