Statistical testing can be framed as a repetitive game between two players, Forecaster and Sceptic. On each round, Forecaster sets prices for various gambles, and Sceptic chooses which gambles to make. If Sceptic multiplies the capital he puts at risk by a large factor, he has evidence against Forecaster’s ability. His capital at the end of each round is a measure of his evidence against Forecaster so far. This capital can go up and then back down. If you report the maximum so far instead of the current value, you are exaggerating the evidence against Forecaster. In this article, we show how to remove the exaggeration. Removing it means systematically reducing the maximum in such a way that a rival to Sceptic can always play so as to obtain current evidence as good as Sceptic’s reduced maximum. We characterize the functions that can achieve such reductions. Because these functions may impose only modest reductions, we think of our result as a method of insuring against loss of evidence. In the context of an actual market, it is a method of insuring against the loss of what an investor has gained so far.
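The exaggeration the abstract describes is easy to see in a toy simulation. The sketch below is our own illustration, not the article's construction: Sceptic bets at even odds on a fair coin priced by Forecaster, his capital is a nonnegative martingale, and a simple "set aside a fraction of every new-maximum increment" rule (our choice, not the authors' characterization of admissible reductions) locks in a floor on retained evidence even when the current capital falls back down. All function names are illustrative.

```python
import random

random.seed(0)

def sceptic_capital(n_rounds, stake_frac=0.5):
    """Simulate Sceptic's capital in a fair-coin game where Forecaster
    prices heads at 1/2 and Sceptic always stakes a fixed fraction on heads."""
    capital = 1.0
    path = [capital]
    for _ in range(n_rounds):
        stake = stake_frac * capital
        # Even-odds bet: win the stake on heads, lose it on tails.
        capital += stake if random.random() < 0.5 else -stake
        path.append(capital)
    return path

def insured_evidence(path, set_aside=0.5):
    """Lock in a fraction of every new-maximum increment.  The locked
    amount never shrinks, so it is a floor on retained evidence.
    Simplification: we track the set-aside against Sceptic's own path,
    ignoring that withdrawing it would change his subsequent bets."""
    best, locked = path[0], 0.0
    for c in path:
        if c > best:
            locked += set_aside * (c - best)
            best = c
    return locked, best

path = sceptic_capital(200)
locked, running_max = insured_evidence(path)
print(f"final capital {path[-1]:.3f}, max {running_max:.3f}, locked {locked:.3f}")
```

Because new-maximum increments telescope, the locked amount here is exactly half the total rise of the running maximum: a systematically reduced maximum, but one that can never be given back.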
The authors consider the Itô stochastic delay differential equation

dX(t) = (aX(t) + f(X(t−r))) dt + σX(t) dW(t), t ≥ 0,

with scalar Brownian motion W and a locally bounded measurable function f. By expressing the solution X in terms of classical geometric Brownian motion, it can be proved that for a positive initial segment (X(s), −r ≤ s ≤ 0) and non-negative f, the process X remains positive a.s. On the other hand, the authors establish a condition on a, σ and f under which the solution with positive initial condition attains zero in finite time a.s. This condition is satisfied, for instance, if f is non-increasing with at least linear growth, while a and σ are arbitrary.
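The dichotomy can be explored numerically. The sketch below is our own and assumes the equation has the linear delay form dX(t) = (aX(t) + f(X(t−r))) dt + σX(t) dW(t) with constant initial segment, consistent with the geometric-Brownian-motion representation; the scheme, parameters and function names are illustrative choices, not taken from the paper.

```python
import math
import random

def euler_delay_sde(f, a=-1.0, sigma=0.5, r=1.0, x0=1.0, T=5.0, n=5000, seed=1):
    """Euler-Maruyama scheme for the (assumed) delay SDE
        dX(t) = (a*X(t) + f(X(t-r))) dt + sigma*X(t) dW(t),
    with constant initial segment X(s) = x0 for -r <= s <= 0.
    Returns the path from time 0 on, truncated at the first step
    where the discretised solution reaches or crosses zero."""
    dt = T / n
    lag = round(r / dt)            # number of steps covering the delay r
    x = [x0] * (lag + 1)           # initial segment on [-r, 0]
    rng = random.Random(seed)
    for _ in range(n):
        cur, delayed = x[-1], x[-1 - lag]
        dW = rng.gauss(0.0, math.sqrt(dt))
        nxt = cur + (a * cur + f(delayed)) * dt + sigma * cur * dW
        x.append(nxt)
        if nxt <= 0.0:             # zero attained (in the discretisation)
            break
    return x[lag:]                 # drop the pre-history on [-r, 0)
```

With a non-negative f such as f ≡ 0.1 the simulated path stays positive, while a strongly decreasing f of linear growth (e.g. f(u) = −2000u at this step size) drives the discretised solution to zero at once. Since the Euler scheme can overshoot, such runs only illustrate the a.s. positivity and zero-attainment results; they do not prove them.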
Given a Brownian motion B, we consider the so-called statistical Skorohod embedding problem: recover the distribution of an independent random time T from an i.i.d. sample of B_T. We propose a consistent estimator for the density of T, derive its convergence rates and prove their optimality.
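The data-generating mechanism of this problem is simple to simulate: since B is independent of T, conditionally on T = t the observation B_T is N(0, t), so the sample is a normal variance mixture. The sketch below (our own; the estimator itself is the authors' and is not reproduced here, and all names are illustrative) generates such a sample and checks a moment identity, Var(B_T) = E[T].

```python
import math
import random

def sample_B_T(n, sample_T, seed=0):
    """Draw n observations of B_T, where T ~ sample_T(rng) independently
    of the Brownian motion B; conditionally on T = t, B_T ~ N(0, t)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, math.sqrt(sample_T(rng))) for _ in range(n)]

# Illustrative choice: T exponential with mean 1.  Then B_T is a normal
# variance mixture with Var(B_T) = E[T] = 1 (in fact a Laplace law,
# though below we only check the variance empirically).
data = sample_B_T(20000, lambda rng: rng.expovariate(1.0))
var = sum(x * x for x in data) / len(data)
```

Recovering the density of T from such a sample is a deconvolution-type inverse problem: only the mixture of the normal distributions is observed, which is what makes the rate analysis in the paper nontrivial.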