A combinatorial formula for affine Hall–Littlewood functions via a weighted Brion theorem
We present a new combinatorial formula for Hall–Littlewood functions associated with the affine root system of type (Formula presented.), i.e., corresponding to the affine Lie algebra (Formula presented.). Our formula is a sum over the elements of a basis constructed by Feigin, Jimbo, Loktev, Miwa and Mukhin in the corresponding irreducible representation, and it can be viewed as a weighted sum of exponentials of integer points in a certain infinite-dimensional convex polyhedron. To prove the formula, we derive a weighted version of Brion's theorem and apply it to our polyhedron. © 2016 Springer International Publishing
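For orientation (our illustration, not taken from the paper), the unweighted Brion theorem in its simplest finite-dimensional case, the lattice segment [0, n], already shows how vertex contributions assemble the full exponential sum over lattice points:

```latex
% Brion's theorem for the segment P = [0,n]: each vertex contributes
% the lattice-point generating function of its tangent cone.
\sum_{k=0}^{n} e^{kx}
  = \underbrace{\frac{1}{1-e^{x}}}_{\text{vertex }0}
  + \underbrace{\frac{e^{nx}}{1-e^{-x}}}_{\text{vertex }n}
  = \frac{e^{(n+1)x}-1}{e^{x}-1}.
```

The weighted version proved in the paper extends this mechanism, with weights attached to the integer points, to the infinite-dimensional polyhedron described above.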
Algorithmic statistics is a branch of algorithmic information theory (Kolmogorov complexity theory) that studies the following task: given a finite object x (say, a binary string), find an "explanation" for it, i.e., a simple finite set that contains x and in which x is a "typical element". Both notions ("simple" and "typical") are defined in terms of Kolmogorov complexity. It is known that this cannot be achieved for some objects: there are "non-stochastic" objects that do not have good explanations. In this paper we study the properties of maximally non-stochastic objects, which we call "antistochastic". We demonstrate that antistochastic strings have the following property: if an antistochastic string x has complexity k, then any k bits of information about x are enough to reconstruct x (with logarithmic advice). In particular, if we erase all but k bits of such an antistochastic string, the erased bits can be restored from the remaining ones (with logarithmic advice). As a corollary we obtain the existence of good list-decodable codes for erasures (or other ways of deleting part of the information). Antistochastic strings can also serve as a source of counterexamples in algorithmic information theory: we show that the symmetry of information property fails for total conditional complexity on antistochastic strings.
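The two-part description idea behind "a simple set containing x" can be made concrete in a toy sketch. Kolmogorov complexity is uncomputable, so the model description lengths below are stipulated by hand, and the helper `two_part_cost` is our own illustrative name, not something from the papers:

```python
import math

def two_part_cost(model_description_bits, model_size):
    """Bits to name x via a model A: describe the set A itself, then
    give x's index inside A (about log2(|A|) extra bits)."""
    return model_description_bits + math.ceil(math.log2(model_size))

x = "1011001110001111"  # hypothetical 16-bit data string

# Model 1: the singleton {x}; the set is as hard to describe as x
# itself, but the index inside it is free.
singleton_cost = two_part_cost(model_description_bits=len(x), model_size=1)

# Model 2: all 2**16 strings of length 16; the set is very simple
# (say, 8 bits to describe), but indexing into it costs 16 bits.
uniform_cost = two_part_cost(model_description_bits=8, model_size=2**16)

print(singleton_cost, uniform_cost)  # -> 16 24
```

Non-stochastic objects are, informally, those for which no simple model brings this two-part cost close to the complexity of x.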
Antistochastic strings are those strings that have no reasonable statistical explanation. We establish the following property of such strings: every antistochastic string x is "holographic" in the sense that it can be restored by a short program from any part of it whose length equals the Kolmogorov complexity of x. We then show how this property can be used for list decoding from erasures, and we prove that symmetry of information fails for total conditional complexity.
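As a loose, hypothetical analogy only (a single XOR parity bit, not the construction from the paper), the following sketch shows the flavor of "holographic" restoration: an erased position is recovered from the bits that survive.

```python
def encode(bits):
    """Append one XOR parity bit, making any single erasure recoverable."""
    return bits + [sum(bits) % 2]

def restore(received):
    """received: codeword with exactly one erased position marked None."""
    i = received.index(None)
    # The full codeword has even parity, so the erased bit equals the
    # parity (XOR) of all the bits we can still see.
    out = received[:]
    out[i] = sum(b for b in received if b is not None) % 2
    return out

word = encode([1, 0, 1, 1])         # -> [1, 0, 1, 1, 1]
print(restore([1, None, 1, 1, 1]))  # -> [1, 0, 1, 1, 1]
```

An antistochastic string of complexity k is far stronger than this toy: any k surviving bits (plus logarithmic advice) suffice, whatever the erasure pattern.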
In algorithmic statistics, the quality of a statistical hypothesis (a model) P for data x is measured by two parameters: the Kolmogorov complexity of the hypothesis and the probability P(x). A class of models S_ij that are optimal from this point of view was discovered. However, these models are too abstract. To restrict the class of hypotheses for given data, Vereshchagin introduced the notion of a strong model for it. An object is called normal if it can be explained using strong models no worse than without this restriction. In this paper we show that there are "many types" of normal strings. Our second result states that there is a normal object x such that no model S_ij is strong for x. Our last result states that every best-fit strong model for a normal object is again a normal object.
The paper [Harry Buhrman, Michal Koucký, Nikolay Vereshchagin. Randomized Individual Communication Complexity. IEEE Conference on Computational Complexity 2008: 321–331] considered the communication complexity of the following problem. Alice has a binary string x and Bob a binary string y, both of length n, and they want to compute or approximate the Kolmogorov complexity C(x|y) of x conditional to y. It is easy to show that the deterministic communication complexity of approximating C(x|y) with precision α is at least n − 2α − O(1). The above-referenced paper asks what the randomized communication complexity of this problem is and shows that for r-round randomized protocols its communication complexity is at least Ω((n/α)^(1/r)). In this paper we show, for some positive ε, the lower bound 0.99n on the (worst-case) communication length of any randomized protocol that, with probability at least 0.01, approximates C(x|y) with precision εn for all input pairs.