We consider the problem of testing two composite hypotheses in the minimax setting. To find maximin tests, we propose a new dual optimization problem that has a solution under a mild additional assumption. This allows us to characterize maximin tests in considerable generality. We give a simple example in which the null hypothesis and the alternative are strictly separated, yet a maximin test is purely randomized.

We establish Cramér-type moderate deviation results for heavily trimmed *L*-statistics. We obtain our results under a very mild smoothness condition on the inverse *F*−1 (*F* is the underlying distribution function of the i.i.d. observations) near the two points where the trimming occurs; we also assume some smoothness of the weights of the *L*-statistic. Our results complement previous work on Cramér-type large deviations for trimmed *L*-statistics [8] and [5].
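For concreteness, a heavily trimmed *L*-statistic has the general form of a weighted sum of the central order statistics, with the extreme lower and upper fractions of the sample discarded. The following is a minimal sketch of that general form; the weight function `J`, the trimming fractions, and the Cauchy sample are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def trimmed_L_statistic(x, J, alpha=0.1, beta=0.1):
    """Heavily trimmed L-statistic: (1/n) * sum of J(i/n) * X_(i) over the
    central order statistics, after discarding the lower alpha- and upper
    beta-fractions of the sample.  J is a smooth weight function."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lo = int(np.floor(n * alpha))
    hi = n - int(np.floor(n * beta))
    idx = np.arange(lo + 1, hi + 1)          # ranks of the retained order statistics
    weights = J(idx / n)
    return np.sum(weights * x[lo:hi]) / n

rng = np.random.default_rng(0)
sample = rng.standard_cauchy(10_000)         # heavy tails: trimming is essential here
stat = trimmed_L_statistic(sample, J=lambda u: np.ones_like(u),
                           alpha=0.25, beta=0.25)
# with constant weights this is proportional to the 25%-trimmed mean
```

With a constant weight function the statistic reduces to a scaled trimmed mean; smooth non-constant choices of `J` give the general class.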

We build new tests of the composite hypothesis of exponentiality which are functionals of U-empirical measures and which are closely related to, and inspired by, a special property of the exponential law. We study the limiting distributions, large deviations, and asymptotic efficiency of the new tests, and describe the most favorable alternatives. Finally, using our tests, we reject the hypothesis of exponentiality of the lengths of the reigns of Roman emperors, which has been actively discussed in recent years.
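The abstract does not specify which property of the exponential law is used. One classical example of such a characterization is Desu's: *X* is exponential if and only if 2 min(*X*₁, *X*₂) has the same distribution as *X*₁. The sketch below builds a Kolmogorov-type distance between the empirical distribution function and the U-empirical distribution function of the pairwise minima in that spirit; it is an illustration of the general idea, not the authors' statistic.

```python
import numpy as np
from itertools import combinations

def desu_type_statistic(x):
    """Kolmogorov-type distance between the empirical df of the sample and
    the U-empirical df of 2*min(X_i, X_j) over all pairs.  Under
    exponentiality, 2*min(X_i, X_j) is distributed as X_1 (Desu's
    characterization), so the distance should be small."""
    x = np.asarray(x, dtype=float)
    pairs = np.array([2.0 * min(a, b) for a, b in combinations(x, 2)])
    grid = np.sort(x)
    F = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    G = np.searchsorted(np.sort(pairs), grid, side="right") / len(pairs)
    return np.max(np.abs(F - G))

rng = np.random.default_rng(1)
d_exp = desu_type_statistic(rng.exponential(1.0, 300))   # small under H0
d_alt = desu_type_statistic(rng.uniform(0.0, 1.0, 300))  # larger under the alternative
```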

Modern information theory is largely developed in connection with random elements residing in large, complex, and discrete data spaces, or alphabets. Lacking natural metrization and hence moments, the associated probability and statistics theory must rely on information measures in the form of various entropies, for example, Shannon’s entropy, mutual information and Kullback–Leibler divergence, which are functions of an entropic basis in the form of a sequence of entropic moments of varying order. The entropic moments collectively characterize the underlying probability distribution on the alphabet, and hence provide an opportunity to develop statistical procedures for their estimation. As such statistical development becomes an increasingly important line of research in modern data science, the relationship between the underlying distribution and the asymptotic behavior of the entropic moments, as the order increases, becomes a technical issue of fundamental importance. This paper offers a general methodology to capture the relationship between the rates of divergence of the entropic moments and the types of underlying distributions, for a special class of distributions. As an application of the established results, it is demonstrated that the asymptotic normality of the remarkable Turing’s formula for missing probabilities holds under distributions with much thinner tails than those previously known.
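Turing's formula (often called the Good–Turing estimator) estimates the total probability mass of the letters of the alphabet not seen in the sample by the proportion of singletons. A minimal sketch, with an illustrative word sample not taken from the paper:

```python
from collections import Counter

def turing_formula(sample):
    """Turing's formula: estimate the missing probability mass (total
    probability of unseen letters) by N1/n, where N1 is the number of
    letters observed exactly once and n is the sample size."""
    counts = Counter(sample)
    n = len(sample)
    n1 = sum(1 for c in counts.values() if c == 1)  # number of singletons
    return n1 / n

words = "the quick brown fox jumps over the lazy dog the end".split()
estimate = turing_formula(words)   # 8 singletons out of 11 words
```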

We introduce and explore an empirical index of increase that works in both deterministic and random environments, thus allowing us to assess the monotonicity of functions that are prone to random measurement errors. We prove consistency of the index and show how its rate of convergence is influenced by the deterministic and random parts of the data. In particular, the obtained results suggest a frequency at which observations should be taken in order to reach any pre-specified level of estimation precision. We illustrate the index using data arising from purely deterministic and error-contaminated functions, which may or may not be monotonic.
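The paper's exact definition is not reproduced here; one natural candidate for such an index, assumed for illustration, is the share of the total variation of the observed values contributed by the upward movements:

```python
import numpy as np

def index_of_increase(y):
    """Empirical index of increase of a sequence of (possibly noisy)
    function values: the sum of positive increments divided by the sum
    of absolute increments.  Equals 1 for a strictly increasing
    sequence and 0 for a strictly decreasing one."""
    d = np.diff(np.asarray(y, dtype=float))
    total = np.sum(np.abs(d))
    if total == 0:
        return np.nan            # constant data: the index is undefined
    return np.sum(d[d > 0]) / total

x = np.linspace(0.0, 1.0, 201)
i_mono = index_of_increase(x ** 2)          # 1.0: monotone increasing
i_wave = index_of_increase(np.sin(6 * x))   # strictly between 0 and 1
```

For error-contaminated data the index measures how close the observed trajectory is to monotone, which is the setting the abstract describes.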

A new result is established on the stability of an optimal nonlinear filter for a Markov chain with respect to small perturbations at every step. Exponential recurrence of the signal is assumed.

In this paper, a new nonparametric generalized M-test for hypotheses about the order of a linear autoregression AR(p) is constructed. We also establish the robustness of this test in a model of data contamination by independent additive outliers with intensity *O*(*n*−1/2). Robustness is formulated in terms of limiting power equicontinuity. The test statistics are constructed with the help of residual empirical processes, and we establish the asymptotic uniform linearity of these processes in the contamination model defined.

The ‘beta’ is one of the key quantities in the capital asset pricing model (CAPM). In statistical language, the beta can be viewed as the slope of the regression line fitted to the returns on the asset under consideration against the returns on the market. The insurance counterpart of the CAPM, called the weighted insurance pricing model (WIPM), gives rise to the so-called weighted-Gini beta. The two betas may or may not coincide, depending on the form of the underlying regression function, and this has profound implications when designing portfolios and allocating risk capital. To facilitate these tasks, in this paper we develop large-sample statistical inference results that, in a straightforward fashion, yield confidence intervals for, and hypothesis tests about, the equality of the two betas.
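The weighted-Gini beta is specific to the paper, but the ordinary CAPM beta admits a simple empirical sketch: the OLS slope of asset returns on market returns, i.e. their sample covariance divided by the sample variance of the market. The simulated returns below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def capm_beta(asset_returns, market_returns):
    """CAPM beta: the OLS slope of asset returns regressed on market
    returns, equal to Cov(asset, market) / Var(market)."""
    a = np.asarray(asset_returns, dtype=float)
    m = np.asarray(market_returns, dtype=float)
    return np.cov(a, m, ddof=1)[0, 1] / np.var(m, ddof=1)

rng = np.random.default_rng(42)
market = rng.normal(0.0005, 0.01, size=2500)          # ~10 years of daily returns
asset = 1.3 * market + rng.normal(0.0, 0.008, 2500)   # true beta = 1.3
beta_hat = capm_beta(asset, market)                   # close to 1.3
```

When the underlying regression function is linear, this slope and the weighted-Gini beta coincide; the inference results of the paper address testing exactly that equality.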