Sanghee Kim: Fast Pathwise Coordinate Descent Algorithm for Penalized Quantile Regression
We develop a new optimization algorithm for the high-dimensional LASSO-penalized quantile regression problem. Existing algorithms are slow and yield only approximate solutions; we address this by deriving exact coordinate descent updates, utilizing pathwise techniques, and applying KKT condition checks to speed up computation. We also aim to expand the framework to the nonconvex SCAD- and MCP-penalized quantile regression problems. The R package for this algorithm (QCD) is available at https://github.com/sangheekim96/QCD.
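To illustrate the exact coordinate update: the one-dimensional lasso-penalized quantile objective is piecewise linear and convex, so a minimizer always lies at a breakpoint (zero, or a point where some residual changes sign). Below is a minimal Python sketch of this idea; the function names and the brute-force breakpoint search are ours for illustration, not the QCD implementation.

import numpy as np

def check_loss(r, tau):
    # quantile (check) loss: rho_tau(r) = r * (tau - 1{r < 0})
    return np.sum(r * (tau - (r < 0)))

def exact_cd_update(y, X, beta, j, tau, lam):
    # The objective in beta_j alone is piecewise linear and convex,
    # so an exact minimizer lies at a breakpoint: either 0 or a point
    # where some residual crosses zero.
    r = y - X @ beta + X[:, j] * beta[j]   # partial residuals, coordinate j removed
    xj = X[:, j]
    nz = xj != 0
    candidates = np.append(r[nz] / xj[nz], 0.0)
    objs = [check_loss(r - xj * b, tau) + lam * abs(b) for b in candidates]
    return candidates[int(np.argmin(objs))]

A pathwise implementation would reuse warm starts across a decreasing grid of $\lambda$ values and skip coordinates ruled out by the KKT checks; the brute-force search above is only meant to show why the update is exact.
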
Autotune: fast, efficient, and automatic tuning parameter selection
Tuning parameter selection for penalized regression methods such as LASSO is an important issue in practice, albeit less explored in the statistical methodology literature. The most common choices are cross-validation (CV), which is computationally expensive, and information criteria such as AIC or BIC, which are known to perform poorly in high-dimensional scenarios. Guided by the asymptotic theory of LASSO that connects the choice of tuning parameter $\lambda$ to estimation of the error standard deviation $\sigma$, we propose \texttt{autotune}, a procedure that alternately maximizes a penalized log-likelihood over the regression coefficients $\beta$ and the nuisance parameter $\sigma$, resulting in an automatic tuning algorithm. The core insight behind \texttt{autotune} is that under exact or approximate sparsity conditions, estimation of the scalar nuisance parameter $\sigma$ is often statistically and computationally easier than estimation of the high-dimensional regression parameter $\beta$, leading to a gain in efficiency. Using simulated and real data sets, we show that \texttt{autotune} is faster than, and provides estimation, variable selection, and prediction performance superior to, existing tuning strategies for LASSO as well as alternatives such as the scaled LASSO. Our approach also provides a diagnostic for the sparsity assumption, informing users whether LASSO is suitable for the data at hand. R package: https://github.com/Tathagata-S/Autotune
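The alternating scheme can be mocked up as follows. This Python sketch is illustrative only: the base rate $\sqrt{2\log p/n}$, the scikit-learn lasso solver, and the residual-based update of $\sigma$ are our choices, not necessarily those of \texttt{autotune}.

import numpy as np
from sklearn.linear_model import Lasso

def autotune_sketch(X, y, lam0=None, n_iter=20, tol=1e-6):
    # Illustrative alternation, not the authors' algorithm: lambda is tied
    # to the current noise-level estimate sigma, and sigma is re-estimated
    # from the lasso residuals, iterating toward a joint fixed point.
    n, p = X.shape
    lam0 = lam0 or np.sqrt(2 * np.log(p) / n)   # theory-guided base rate
    sigma = np.std(y)                            # crude initial noise estimate
    for _ in range(n_iter):
        fit = Lasso(alpha=sigma * lam0).fit(X, y)
        resid = y - fit.predict(X)
        sigma_new = np.sqrt(np.mean(resid ** 2))
        if abs(sigma_new - sigma) < tol:
            break
        sigma = sigma_new
    return fit.coef_, sigma

Because only the scalar $\sigma$ is re-estimated between lasso fits, each outer iteration is cheap relative to a full cross-validation sweep over a $\lambda$ grid.
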

Steve Broll: Network-Penalized Variable Selection and Inference for High-Dimensional Longitudinal Omics
Collecting large sets of longitudinal omics variables in small clinical trials is increasingly practicable, necessitating methods for biomarker selection with few subjects and time points. When paired with a time-varying clinical outcome, the problem of biomarker selection becomes one of high-dimensional longitudinal modeling rather than differential expression. We provide a method, PROLONG, that combines group lasso and empirical graph Laplacian penalties on first-differenced data, increasing power by utilizing the variation across time and between omics features, as in the sketch below. We extend this model to multiple treatment groups by debiasing a sparse group lasso plus Laplacian model and performing inference on the debiased estimator.
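To fix ideas, a penalty of this type can be written as the following objective; the display is illustrative, and the exact PROLONG formulation may differ:

\[
\min_{\beta}\; \tfrac{1}{2}\,\|\Delta y - \Delta X \beta\|_2^2 \;+\; \lambda_1 \sum_{g} \|\beta_g\|_2 \;+\; \lambda_2\, \beta^{\top} L\, \beta,
\]

where $\Delta$ denotes first differencing across time, $g$ indexes groups of coefficients, and $L$ is an empirical graph Laplacian over the omics features.
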
Navonil Deb: Regularized estimation of sparse spectral precision matrices
We study the problem of estimating the spectral precision matrix, a key object for understanding dependencies in high-dimensional time series that is commonly used to study functional connectivity in neuroscience. Challenges arise from the non-smooth optimization over complex matrices and the lack of scalable algorithms that fully exploit sparsity in high dimensions. We develop fast pathwise coordinate descent-based algorithms for the complex lasso (CLASSO) and complex graphical lasso (CGLASSO), based on a novel realification via a real-complex ring isomorphism. We further introduce CAGLASSO, a scale-adaptive estimator that improves accuracy in heterogeneous settings. Our methods are theoretically grounded with novel high-dimensional consistency results, and we demonstrate strong empirical performance on both simulated data and real fMRI data.
Pre-print: https://arxiv.org/abs/2401.11128
Software: https://github.com/yk748/cxreg/tree/main
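The realification rests on the standard ring isomorphism that represents a complex matrix $Z = A + iB$ by a real block matrix, so that complex sums and products carry over to real ones. A minimal Python sketch (our own illustration, not the cxreg code):

import numpy as np

def realify(Z):
    # Represent Z = A + iB by the real block matrix [[A, -B], [B, A]].
    # This map preserves addition and multiplication, so an optimization
    # over complex matrices can be rewritten over real ones.
    A, B = Z.real, Z.imag
    return np.block([[A, -B], [B, A]])

# Sanity check that products are preserved:
rng = np.random.default_rng(0)
Z1 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Z2 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
assert np.allclose(realify(Z1 @ Z2), realify(Z1) @ realify(Z2))
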

Navonil Deb: Counterfactual forecasting for panel data
We address the challenge of forecasting counterfactual outcomes in panel data characterized by missing observations and latent factor structures with temporal dependencies. Such scenarios are common in causal inference, where estimating unobserved potential outcomes is essential. Our method, FOCUS, extends traditional matrix completion methods by integrating time series dynamics into the latent factors, enhancing the accuracy of counterfactual predictions. Building upon the estimator proposed by Xiong and Pelger [2023], we accommodate both stochastic and deterministic components within the factors, providing a flexible framework for various applications. In the special case of a stationary autoregressive model for the factors, we derive error bounds at the level of individual units and forecast horizons, and also provide confidence intervals for the forecast values. Empirical evaluations demonstrate that FOCUS outperforms existing techniques, such as multivariate singular spectrum analysis [Agarwal et al., 2020a], particularly when latent factors exhibit autoregressive behavior. We apply FOCUS to the HeartSteps V1 mHealth study, illustrating its effectiveness in forecasting step counts for users receiving activity prompts by leveraging temporal patterns in user behavior.
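A stylized version of such a pipeline is sketched below, assuming an AR(1) factor model and a crude mean-imputation-plus-SVD factor step; these simplifications are ours for illustration and do not reproduce the FOCUS estimator.

import numpy as np

def focus_style_forecast(Y_obs, mask, r, h):
    # Illustrative pipeline: (1) estimate factors by SVD on a mean-imputed
    # panel, (2) fit AR(1) dynamics to the factors, (3) iterate the fitted
    # dynamics h steps ahead and reconstruct the outcomes.
    # mask: boolean (N x T), True where the entry is observed.
    Y = np.where(mask, Y_obs, np.nanmean(Y_obs))
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Lam = U[:, :r] * s[:r]          # loadings (N x r)
    F = Vt[:r].T                    # factors  (T x r)
    A, *_ = np.linalg.lstsq(F[:-1], F[1:], rcond=None)  # AR(1) transition
    f = F[-1]
    for _ in range(h):
        f = f @ A                   # h-step-ahead factor forecast
    return Lam @ f                  # forecasted (counterfactual) outcomes
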

Ha Nguyen: Tuning-free Estimation of Graphical Models
We are developing AutotuneGLASSO, a novel method for automatic tuning of regularization parameters that improves the estimation and graph selection of GLASSO by using node-specific penalties. Given i.i.d. observations of multivariate normal random vectors, GLASSO estimates the precision matrix of a Gaussian graphical model (GGM) by maximizing the $\ell_1$-penalized log-likelihood over the space of positive semi-definite matrices; the tuning parameter controls the sparsity of the estimate. Unlike standard GLASSO, which relies on a single global penalty, AutotuneGLASSO adaptively learns a set of node-specific penalties. It does so by augmenting the nodewise lasso regression step to jointly estimate both the regression coefficients and the error variances, allowing more flexible and data-driven regularization across nodes. The R package for AutotuneGLASSO is available at https://github.com/hanguyen97/ATTglasso.
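A rough Python sketch of the node-specific tuning idea, assuming an autotune-style alternation between each nodewise lasso fit and its error standard deviation; this is illustrative only, and the ATTglasso implementation may differ.

import numpy as np
from sklearn.linear_model import Lasso

def nodewise_autotune(X, lam0=None, n_iter=10):
    # For each node j, alternate between a lasso regression of X_j on the
    # remaining columns and re-estimation of its error s.d., so each node
    # ends up with its own data-driven penalty sigma_j * lam0.
    n, p = X.shape
    lam0 = lam0 or np.sqrt(2 * np.log(p) / n)
    penalties = np.empty(p)
    for j in range(p):
        Xj, yj = np.delete(X, j, axis=1), X[:, j]
        sigma = np.std(yj)          # crude initial noise estimate for node j
        for _ in range(n_iter):
            fit = Lasso(alpha=sigma * lam0).fit(Xj, yj)
            sigma = np.std(yj - fit.predict(Xj))
        penalties[j] = sigma * lam0
    return penalties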
