
SUP: An Inferable Private Multiple Testing Framework with Super Uniformity

Published Dec 3, 2025 · Version 1 · arXiv:2512.03859

Authors

Kehan Wang, Wenxuan Song, Wangli Xu, Linglong Kong

Categories

stat.ME

Abstract

Multiple testing is widely applied across scientific fields, particularly in genomic and health data analysis, where protecting sensitive personal information is imperative. However, developing private multiple testing algorithms for super uniform $p$-values remains an open question, as privacy mechanisms introduce intricate dependence among the peeled $p$-values and disrupt their super uniformity, complicating post-selection inference. To address this, we introduce a general Super Uniform Private (SUP) multiple testing framework with three key components. First, we develop a novel $p$-value transformation that is compatible with diverse privacy regimes while retaining super uniformity. Next, a reversed peeling algorithm is designed to reduce the privacy budget while facilitating inference. Then, we provide a range of rejection thresholds that are privacy-parameter-free and tailored to different Type-I error criteria, including the family-wise error rate (FWER) and the false discovery rate (FDR). Building upon these, we develop adaptive techniques to determine the peeling number and boost the thresholds. Theoretically, we propose a technique that overcomes the post-selection obstacle to Type-I error control, quantify the privacy-induced power loss of SUP relative to its non-private counterpart, and show that SUP surpasses existing private methods in power. Extensive simulations and a real data application validate our theory.
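
Illustrative note (not from the paper): the SUP $p$-value transformation, reversed peeling algorithm, and threshold rules are defined in the paper itself and are not reproduced here. As a minimal Python sketch of the setting the abstract describes, the snippet below perturbs test statistics with the Laplace mechanism (a standard differential-privacy tool) and then applies the ordinary Benjamini-Hochberg step-up rule for FDR control. Note that these naive noisy $p$-values need not remain super uniform under the null, which is precisely the difficulty the SUP framework is designed to resolve; all function names and parameters here are illustrative assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def noisy_pvalues(z, epsilon, sensitivity=1.0):
    # Laplace mechanism: perturb each z-statistic with noise of scale
    # sensitivity / epsilon, then convert to one-sided p-values.
    # Caution: the added noise can break super uniformity under the null,
    # which is the issue the paper's p-value transformation addresses.
    scale = sensitivity / epsilon
    z_noisy = z + rng.laplace(0.0, scale, size=z.shape)
    return norm.sf(z_noisy)

def benjamini_hochberg(pvals, alpha=0.1):
    # Standard BH step-up rule (valid for independent super-uniform p-values).
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Toy example: 1000 one-sided tests, the first 50 carrying a true signal.
z = rng.normal(size=1000)
z[:50] += 3.5
rej = benjamini_hochberg(noisy_pvalues(z, epsilon=2.0), alpha=0.1)
print("rejections:", rej.sum(), "true positives:", rej[:50].sum())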
