Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau
Modern imaging methods rely strongly on Bayesian inference techniques to solve challenging imaging problems. Currently, the predominant Bayesian computation approach is convex optimization, which scales very efficiently to high-dimensional image models and delivers accurate point estimation results. However, in order to perform more complex analyses, for example, image uncertainty quantification or model selection, it is necessary to use more computationally intensive Bayesian computation techniques such as Markov chain Monte Carlo methods. This paper presents a new and highly efficient Markov chain Monte Carlo methodology to perform Bayesian computation for high-dimensional models that are log-concave and nonsmooth, a class of models that is central to imaging sciences. The methodology is based on a regularized unadjusted Langevin algorithm that exploits tools from convex analysis, namely, Moreau--Yosida envelopes and proximal operators, to construct Markov chains with favorable convergence properties. In addition to scaling efficiently to high dimensions, the method is straightforward to apply to models that are currently solved by using proximal optimization algorithms. We provide a detailed theoretical analysis of the proposed methodology, including asymptotic and nonasymptotic convergence results with easily verifiable conditions, and explicit bounds on the convergence rates. The proposed methodology is demonstrated with four experiments related to image deconvolution and tomographic reconstruction with total-variation and $\ell_1$ priors, where we conduct a range of challenging Bayesian analyses related to uncertainty quantification, hypothesis testing, and model selection in the absence of ground truth.
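To make the idea concrete, below is a minimal sketch of a Moreau--Yosida regularized unadjusted Langevin iteration, assuming a target density $\pi(x) \propto \exp\{-f(x)-g(x)\}$ with $f$ smooth and $g$ convex but possibly nonsmooth. The Moreau--Yosida envelope $g^{\lambda}(x) = \min_{u}\{g(u) + \|x-u\|^2/(2\lambda)\}$ is smooth with gradient $\nabla g^{\lambda}(x) = (x - \operatorname{prox}_g^{\lambda}(x))/\lambda$, so the nonsmooth term can be handled through its proximal operator. The function names, the $\ell_1$ toy example at the end, and the values of the step size `gamma` and regularization parameter `lam` are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def myula(grad_f, prox_g, x0, gamma, lam, n_iter, rng=None):
    """Moreau-Yosida regularized unadjusted Langevin sketch.

    Draws approximate samples from pi(x) ~ exp(-f(x) - g(x)), where f is
    smooth with gradient grad_f and g is convex, possibly nonsmooth, with
    proximal operator prox_g(x, lam) = argmin_u g(u) + ||u - x||^2 / (2*lam).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_iter,) + x.shape)
    for k in range(n_iter):
        # Gradient of the lambda-Moreau-Yosida envelope of g at x.
        grad_envelope = (x - prox_g(x, lam)) / lam
        noise = np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
        x = x - gamma * (grad_f(x) + grad_envelope) + noise
        samples[k] = x
    return samples

# Illustrative toy model (hypothetical parameters): y = A x + noise,
# Gaussian likelihood with variance sigma2, and an l1 prior of weight alpha.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
y = A @ x_true + 0.1 * rng.standard_normal(50)
sigma2, alpha = 0.01, 1.0
grad_f = lambda x: A.T @ (A @ x - y) / sigma2          # gradient of the negative log-likelihood
prox_g = lambda x, lam: soft_threshold(x, lam * alpha)  # prox of lam * alpha * ||x||_1
chain = myula(grad_f, prox_g, np.zeros(100), gamma=1e-5, lam=1e-4, n_iter=1000)
```

Replacing the (sub)gradient of $g$ by the gradient of its Moreau--Yosida envelope is what makes the Langevin iteration well defined for nonsmooth priors such as total variation or $\ell_1$; the paper itself should be consulted for the admissible ranges of $\gamma$ and $\lambda$ and the convergence guarantees.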