On the Posterior Computation Under the Dirichlet-Laplace Prior
Modern applications routinely collect high-dimensional data, leading to statistical models with more parameters than available samples. A common solution is to impose sparsity on the parameter estimates, often via penalized optimization. Bayesian approaches instead rely on shrinkage priors, providing a probabilistic framework that formally quantifies uncertainty. Among these, the Dirichlet-Laplace prior has gained prominence for its theoretical guarantees and wide applicability. This article identifies a critical yet overlooked issue in the implementation of Gibbs sampling algorithms for such priors. We demonstrate that ambiguities in the presentation of key algorithmic steps, while mathematically coherent, have led to widespread implementation errors that fail to target the intended posterior distribution -- a target endowed with rigorous asymptotic guarantees. Using the normal-means problem and high-dimensional linear regression as canonical examples, we clarify these implementation pitfalls, illustrate their practical consequences, and propose corrected and more efficient sampling procedures.
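As an illustrative aside, the Dirichlet-Laplace prior DL_a mentioned in the abstract can be sketched as a simple generative hierarchy: local weights on the simplex, a global scale, and conditionally Laplace coefficients. The sketch below only draws from the *prior* under this standard presentation; it is not the corrected posterior Gibbs sampler the article proposes, and the exact parametrization should be checked against the paper.

```python
# Minimal sketch of a draw from the DL_a prior on R^p (assumed hierarchy):
#   phi ~ Dirichlet(a, ..., a)          local weights, sum to 1
#   tau ~ Gamma(shape = p*a, rate = 1/2)  global scale (rate 1/2 <=> scale 2)
#   theta_j | phi, tau ~ Laplace(scale = phi_j * tau)
import numpy as np

def draw_dl_prior(p, a, rng):
    """Draw one coefficient vector theta from the DL_a prior on R^p."""
    phi = rng.dirichlet(np.full(p, a))       # local shrinkage weights
    tau = rng.gamma(shape=p * a, scale=2.0)  # global scale
    theta = rng.laplace(loc=0.0, scale=phi * tau)
    return theta, phi, tau

rng = np.random.default_rng(0)
theta, phi, tau = draw_dl_prior(p=1000, a=1 / 1000, rng=rng)
# With a small concentration a (e.g. a = 1/p), most phi_j are near zero,
# so most theta_j are shrunk toward zero while a few remain large.
```

The choice a = 1/p is one setting under which the prior's near-minimax posterior contraction has been studied; larger a yields less aggressive shrinkage.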
Paolo Onorati, David B. Dunson, Antonio Canale
Paolo Onorati, David B. Dunson, Antonio Canale. On the Posterior Computation Under the Dirichlet-Laplace Prior [EB/OL]. (2025-07-07) [2025-07-21]. https://arxiv.org/abs/2507.05214.