Total publications in this section: 10
Article
Gushchin A. Mathematical Methods of Statistics. 2015. Vol. 24. No. 2. P. 110-121.

We consider the problem of testing two composite hypotheses in the minimax setting. To find maximin tests, we propose a new dual optimization problem, which has a solution under a mild additional assumption. This allows us to characterize maximin tests in considerable generality. We give a simple example in which the null hypothesis and the alternative are strictly separated and yet a maximin test is purely randomized.
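
To fix notation, a maximin test in this setting is usually understood as a solution of the following worst-case optimization problem (generic formulation for orientation only; the paper's dual problem is not reproduced here):

```latex
% Generic maximin testing problem for composite null \mathcal{P} and alternative
% \mathcal{Q}, over randomized tests 0 \le \varphi \le 1 at level \alpha:
\max_{0 \le \varphi \le 1} \; \inf_{Q \in \mathcal{Q}} \mathbb{E}_Q[\varphi]
\qquad \text{subject to} \qquad
\sup_{P \in \mathcal{P}} \mathbb{E}_P[\varphi] \le \alpha .
```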

Added: Jun 18, 2015
Article
Gribkova N. Mathematical Methods of Statistics. 2016. Vol. 25. No. 4. P. 313-322.

We establish Cramér-type moderate deviation results for heavily trimmed L-statistics. We obtain our results under a very mild smoothness condition on the inverse F^{-1} (where F is the underlying distribution function of the i.i.d. observations) near the two points where trimming occurs; we also assume some smoothness of the weights of the L-statistic. Our results complement previous work on Cramér-type large deviations for trimmed L-statistics in [8] and [5].
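
For background, a trimmed L-statistic is typically a weighted sum of the central order statistics; the sketch below uses generic notation and is not necessarily the paper's exact definition:

```latex
% A trimmed L-statistic built from the order statistics X_{1:n} \le \dots \le X_{n:n},
% with trimming proportions 0 \le \alpha \le \beta \le 1 and weights c_{i,n}:
L_n \;=\; \frac{1}{n} \sum_{i=\lfloor \alpha n \rfloor + 1}^{\lfloor \beta n \rfloor} c_{i,n}\, X_{i:n} .
```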

Added: Feb 28, 2020
Article
Nikitin Y. Y., Volkova K. Y. Mathematical Methods of Statistics. 2016. Vol. 25. No. 1. P. 54-66.

We build new tests of the composite hypothesis of exponentiality which are functionals of U-empirical measures and which are closely related to, and inspired by, a special property of the exponential law. We study the limiting distributions, large deviations, and asymptotic efficiency of the new tests, and we describe the most favorable alternatives. Finally, using our test we reject the hypothesis of exponentiality of the lengths of the reigns of Roman emperors, which has been actively discussed in recent years.
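
The abstract does not state which property of the exponential law is used; as a reminder only, the best-known characterizing property is lack of memory:

```latex
% Lack-of-memory characterization of the exponential law (background fact;
% not necessarily the property the tests are built on):
P(X > s + t \mid X > s) = P(X > t) \quad \text{for all } s, t \ge 0
\;\Longleftrightarrow\;
P(X > t) = e^{-\lambda t} \text{ for some } \lambda > 0 .
```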

Added: Mar 23, 2016
Article
Molchanov S., Zhang Z., Zheng L. Mathematical Methods of Statistics. 2018. Vol. 27. No. 1. P. 60-70.

Modern information theory is largely developed in connection with random elements residing in large, complex, and discrete data spaces, or alphabets. Lacking natural metrization and hence moments, the associated probability and statistics theory must rely on information measures in the form of various entropies, for example, Shannon’s entropy, mutual information, and Kullback–Leibler divergence, which are functions of an entropic basis in the form of a sequence of entropic moments of varying order. The entropic moments collectively characterize the underlying probability distribution on the alphabet, and hence provide an opportunity to develop statistical procedures for their estimation. As such statistical development becomes an increasingly important line of research in modern data science, the relationship between the underlying distribution and the asymptotic behavior of the entropic moments, as the order increases, becomes a technical issue of fundamental importance. This paper offers a general methodology to capture the relationship between the rates of divergence of the entropic moments and the types of underlying distributions, for a special class of distributions. As an application of the established results, it is demonstrated that the asymptotic normality of Turing’s remarkable formula for missing probabilities holds under distributions with much thinner tails than those previously known.
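
As a small illustration of the object in the last sentence, here is a minimal sketch of Turing's formula for the missing probability mass, i.e. the total probability of alphabet letters not seen in the sample (a standard estimator; the asymptotic analysis in the paper is not reproduced here):

```python
from collections import Counter

def turing_missing_mass(sample):
    """Turing's (Good-Turing) estimate of the missing probability mass:
    N1 / n, where N1 is the number of letters seen exactly once and n is
    the sample size. Background illustration only."""
    n = len(sample)
    if n == 0:
        return 0.0
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

# Example: two letters ('c' and 'd') appear exactly once among 11 draws.
print(turing_missing_mass(list("abracadabra")))  # 2 / 11, about 0.18
```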

Added: Nov 15, 2019
Article
Chen L., Davydov Y., Gribkova N. et al. Mathematical Methods of Statistics. 2018. Vol. 27. P. 83-102.

We introduce and explore an empirical index of increase that works in both deterministic and random environments, thus allowing one to assess the monotonicity of functions that are prone to random measurement errors. We prove consistency of the index and show how its rate of convergence is influenced by the deterministic and random parts of the data. In particular, the obtained results suggest a frequency at which observations should be taken in order to reach any pre-specified level of estimation precision. We illustrate the index using data arising from purely deterministic and error-contaminated functions, which may or may not be monotonic.
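
Below is a minimal sketch of one natural empirical index of this kind, assuming (for illustration only) that the index is the share of upward movement in the total movement of the observed sequence; the paper's definition may differ:

```python
import numpy as np

def index_of_increase(y):
    """Illustrative empirical index of increase: the sum of positive
    increments divided by the sum of absolute increments. Equals 1 for a
    strictly increasing sequence and 0 for a strictly decreasing one.
    Assumed definition, not necessarily the one studied in the paper."""
    d = np.diff(np.asarray(y, dtype=float))
    total = np.abs(d).sum()
    return np.nan if total == 0 else d[d > 0].sum() / total

# A monotone signal observed with small measurement errors.
x = np.linspace(0.0, 1.0, 200)
noisy = x**2 + np.random.normal(scale=0.01, size=x.size)
print(index_of_increase(noisy))  # close to 1 when the noise is small
```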

Added: Feb 28, 2020
Article
Gushchin A. A., Küchler U. Mathematical Methods of Statistics. 2003. Vol. 12. No. 1. P. 31-61.
Added: Oct 7, 2013
Article
Kleptsyna M., Veretennikov A. Mathematical Methods of Statistics. 2016. Vol. 25. No. 3. P. 207-218.

A new result on the stability of an optimal nonlinear filter for a Markov chain with respect to small perturbations at every step is established. Exponential recurrence of the signal is assumed.
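
For orientation, the optimal nonlinear filter referred to here is, in the standard hidden-Markov-chain setting, given by the recursion below (generic notation; the perturbation analysis of the paper is not reproduced):

```latex
% Standard optimal filter recursion for a hidden Markov chain with transition
% kernel Q, observation density g, and observations y_n (generic notation):
\pi_n(x) \;=\;
\frac{g(y_n, x) \sum_{x'} Q(x', x)\, \pi_{n-1}(x')}
     {\sum_{z} g(y_n, z) \sum_{x'} Q(x', z)\, \pi_{n-1}(x')} .
```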

Added: Oct 16, 2016
Article
Esaulov D. Mathematical Methods of Statistics. 2013. Vol. 22. No. 4. P. 333-349.

In this paper, a new nonparametric generalized M-test for hypotheses about the order of a linear autoregression AR(p) is constructed. We also establish the robustness of this test in a model of data contamination by independent additive outliers with intensity O(n^{-1/2}). Robustness is formulated in terms of limiting power equicontinuity. The test statistics are constructed with the help of residual empirical processes, and we establish the asymptotic uniform linearity of these processes under the specified contamination model.
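
For concreteness, additive-outlier contamination of an AR(p) series is commonly written as follows (generic notation, assumed here for illustration; the paper's exact scheme may differ):

```latex
% AR(p) series X_i contaminated by independent additive outliers \xi_i that
% occur with probability \gamma_n = O(n^{-1/2}); Y_i is what is observed.
X_i = \sum_{j=1}^{p} \beta_j X_{i-j} + \varepsilon_i, \qquad
Y_i = X_i + z_i \xi_i, \qquad
z_i \sim \mathrm{Bernoulli}(\gamma_n), \quad \gamma_n = O(n^{-1/2}) .
```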

Added: Oct 19, 2016
Article
Gribkova N., Zitikis R. Mathematical Methods of Statistics. 2017. Vol. 26. P. 267-281.

The ‘beta’ is one of the key quantities in the capital asset pricing model (CAPM). In statistical language, the beta can be viewed as the slope of the regression line fitted to financial returns on the market against the returns on the asset under consideration. The insurance counterpart of CAPM, called the weighted insurance pricing model (WIPM), gives rise to the so-called weighted-Gini beta. The aforementioned two betas may or may not coincide, depending on the form of the underlying regression function, and this has profound implications when designing portfolios and allocating risk capital. To facilitate these tasks, in this paper we develop large-sample statistical inference results that, in a straightforward fashion, imply confidence intervals for, and hypothesis tests about, the equality of the two betas.
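
For reference, in its most common textbook form the CAPM beta is the least-squares slope of the asset return R regressed on the market return R_M (background formula only; the weighted-Gini beta of the WIPM is not reproduced here):

```latex
% Textbook CAPM beta: least-squares regression slope of asset returns R
% on market returns R_M.
\beta \;=\; \frac{\operatorname{Cov}(R, R_M)}{\operatorname{Var}(R_M)} .
```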

Added: Feb 28, 2020
Article
Gushchin A. A. Mathematical Methods of Statistics. 2016. Vol. 25. No. 4. P. 304-312.

Let (P_i, Q_i), i = 0, 1, be two pairs of probability measures defined on measurable spaces (Ω_i, F_i), respectively. Assume that the pair (P_1, Q_1) is more informative than (P_0, Q_0) for testing problems. This amounts to saying that I_f(P_1, Q_1) ≥ I_f(P_0, Q_0), where I_f(·, ·) is an arbitrary f-divergence. We find a precise lower bound for the increment of f-divergences I_f(P_1, Q_1) − I_f(P_0, Q_0), provided that the total variation distances ||Q_1 − P_1|| and ||Q_0 − P_0|| are given. This optimization problem can be reduced to the case where P_1 and Q_1 are defined on a space consisting of four points, and P_0 and Q_0 are obtained from P_1 and Q_1, respectively, by merging two of these four points. The result includes the well-known lower and upper bounds for I_f(P, Q) given ||Q − P||.
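
For reference, the notation above is standard: for a convex function f with f(1) = 0 and for P, Q with densities p, q with respect to a common dominating measure μ,

```latex
% Standard f-divergence and total variation norm (reference definitions):
I_f(P, Q) \;=\; \int f\!\left(\frac{q}{p}\right) p \, d\mu ,
\qquad
\|Q - P\| \;=\; \int |q - p| \, d\mu .
```
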
Added: Dec 29, 2016