Highlighted Research Contributions

Recovery of Bayesian Posteriors with Generative AI

Under this theme, we leverage generative models (normalizing flows in particular) to approximate posteriors arising from complex statistical models. We provide theoretical guarantees on the coverage of the resulting credible sets while ensuring a scalable implementation through variational inference.
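To fix ideas, here is a minimal sketch of flow-style variational inference: a one-layer affine transformation (the simplest invertible map, not the neural autoregressive flows of the papers below) fit to a toy one-dimensional Gaussian posterior with reparameterization gradients. The target, sample sizes, and step size are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior": N(2.0, 0.5^2), known only up to a normalizing constant.
def log_target(z):
    return -0.5 * ((z - 2.0) / 0.5) ** 2

# One-layer affine flow: z = mu + exp(log_sigma) * eps, with eps ~ N(0, 1).
mu, log_sigma = 0.0, 0.0
lr = 0.05
for step in range(2000):
    eps = rng.standard_normal(256)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps
    # ELBO = E[log_target(z)] + entropy; the entropy term contributes
    # log_sigma + const, hence the +1.0 in the log_sigma gradient below.
    dlogp_dz = -(z - 2.0) / 0.25                 # gradient of log_target at z
    grad_mu = dlogp_dz.mean()
    grad_log_sigma = (dlogp_dz * eps * sigma).mean() + 1.0
    mu += lr * grad_mu                           # stochastic gradient ascent
    log_sigma += lr * grad_log_sigma

print(mu, np.exp(log_sigma))   # settles near the true mean 2.0 and sd 0.5
```

Richer flows stack many such invertible layers, which is what makes the family expressive enough to recover non-Gaussian posteriors.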

Key Papers

Jiefu Zhou and Shrijita Bhattacharya, Neural Autoregressive Flows based Variational Bayes Model Averaging, 2026, The American Statistician, accepted [Code].

Sumegha Premchandar, Shrijita Bhattacharya, and Tapabrata Maiti, Normalizing Flows Aided Variational Inference: A Useful Alternative to MCMC?, 2023, Notices of the American Mathematical Society, 70(07) [Code].

Representation Learning with Model Ensembling

Under this theme, we leverage Bayesian model ensembling to learn the intrinsic dimension of images. We use highest posterior density regions to select the hidden dimensionality of an object, and we implement a variational Bayes approximation to achieve parallelization across models. This yielded a 10% compression on the MNIST image data and an 18% compression on Fashion-MNIST.
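The model-averaging idea can be sketched in a few lines: fit one model per candidate dimension, score each with an evidence proxy, and turn the scores into posterior model weights. The sketch below uses a probabilistic-PCA log-likelihood with a BIC-style penalty as a stand-in for the per-model ELBO (an assumption for illustration, not the papers' neural-network ensembles).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with intrinsic dimension 2 embedded in 10 ambient dimensions.
n, d, k_true = 500, 10, 2
latent = rng.standard_normal((n, k_true))
A = rng.standard_normal((k_true, d))
X = latent @ A + 0.1 * rng.standard_normal((n, d))

def ppca_loglik(X, k):
    # Probabilistic-PCA log-likelihood at the MLE (Tipping-Bishop form).
    n, d = X.shape
    evals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    sigma2 = evals[k:].mean()                    # noise variance estimate
    return -0.5 * n * (d * np.log(2 * np.pi) + np.sum(np.log(evals[:k]))
                       + (d - k) * np.log(sigma2) + d)

# BIC-style evidence proxy per candidate dimension, then BMA weights
# w_k proportional to exp(score_k).
scores = np.array([ppca_loglik(X, k) - 0.5 * (d * k + 1) * np.log(n)
                   for k in range(1, d)])
weights = np.exp(scores - scores.max())
weights /= weights.sum()
best_k = 1 + int(np.argmax(weights))
print(best_k)   # recovers the intrinsic dimension, 2
```

In the papers the weights come from variational approximations to each model's posterior, which is what makes the ensemble embarrassingly parallel across candidate dimensions.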

Key Papers

Zihuan Liu, Shrijita Bhattacharya, and Tapabrata Maiti, Variational Bayes Ensemble Learning Neural Networks With Compressed Feature Space, 2022, IEEE Transactions on Neural Networks and Learning Systems, 1-7 [Code].

Vojtech Kejzlar, Shrijita Bhattacharya, Mookyong Son, Tapabrata Maiti, Black box variational Bayesian model averaging, 2022, The American Statistician 0(0) 1-12 [Code].

Low-Latency DNNs with Spike-and-Slab Priors

Under this theme, we use spike-and-slab shrinkage priors with Gaussian, Lasso, and Horseshoe slabs to select useful neurons in a data-adaptive manner. We contrast the three priors in terms of asymptotic performance and the time complexity of their implementation. This reduced the deployment time of a classification model by 20% on the Fashion-MNIST dataset and by 10% on CIFAR-10.
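The selection mechanism is easiest to see in the textbook special case of sparse linear regression with an orthonormal design, where the posterior inclusion probabilities under a point-mass spike and a Gaussian slab are available in closed form. This is a hedged stand-in for the papers' structured BNN priors; the data, prior settings, and 0.5 threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse regression: 10 coefficients, only the first 3 nonzero.
n, p = 200, 10
beta = np.zeros(p)
beta[:3] = [4.0, -3.0, 3.0]
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns
sigma = 0.5
y = X @ beta + sigma * rng.standard_normal(n)

# Spike-and-slab: point mass at 0 (spike) vs N(0, tau2) (slab), prior
# inclusion probability pi. With orthonormal X, bhat_j ~ N(beta_j, sigma^2)
# and the posterior odds of inclusion reduce to a ratio of two normal pdfs.
pi, tau2 = 0.5, 4.0
bhat = X.T @ y

def normal_pdf(x, var):
    return np.exp(-0.5 * x ** 2 / var) / np.sqrt(2 * np.pi * var)

odds = (pi / (1 - pi)) * normal_pdf(bhat, tau2 + sigma ** 2) / normal_pdf(bhat, sigma ** 2)
incl_prob = odds / (1 + odds)
selected = np.where(incl_prob > 0.5)[0]
print(selected)   # includes the three true signals 0, 1, 2
```

In the neural-network setting the same Bernoulli inclusion variables sit on whole neurons (or layers), so that low-probability nodes can be pruned before deployment.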

Key Papers

Sanket Jantre, Shrijita Bhattacharya, Tapabrata Maiti, Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks, 2024, IEEE Transactions on Neural Networks and Learning Systems 36(6), 11176-11188 [Code].

Sanket Jantre, Shrijita Bhattacharya, Tapabrata Maiti, Layer Adaptive Node Selection in Bayesian Neural Networks: Statistical Guarantees and Implementation Details, 2021, Neural Networks 167, 309-330 [Code].

Binary Graphical Models

Under this theme, we exploit Ising distributions to capture the graphical structure among binary variables. This enables high-dimensional variable selection, especially when the predictors share a network structure. We developed a variational Bayes algorithm that delivers a 200x speedup over Gibbs sampling.

Key Papers

Siddhartha Nandy, Minwoo Kim, Shrijita Bhattacharya, Tapabrata Maiti, Variational inference aided variable selection for spatially structured high dimensional covariates, 2025, Journal of Computational and Graphical Statistics 35(1), 482-493 [Code].

Minwoo Kim, Shrijita Bhattacharya, Tapabrata Maiti, Statistically valid variational Bayes algorithm for Ising model parameter estimation, 2024, Journal of Computational and Graphical Statistics 33(1), 75-84 [Code].

Scalable Computer Models

Under this theme, we developed variational Bayes and empirical Bayes implementations of the Kennedy-O'Hagan approach to calibrating computer models. A 60x speedup was observed on a simple two-parameter regression problem.
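The workhorse inside the Kennedy-O'Hagan framework is a Gaussian-process emulator trained on a handful of simulator runs. Below is a minimal GP-regression sketch of that building block, with a cheap analytic function standing in for an expensive simulator; the kernel, lengthscale, and design are illustrative, not the papers' variational or empirical Bayes scheme.

```python
import numpy as np

# "Expensive" simulator stood in for by a cheap analytic function.
def simulator(x):
    return np.sin(3 * x)

X_train = np.linspace(0.0, 2.0, 12)    # a small design of simulator runs
y_train = simulator(X_train)

def rbf(a, b, length=0.5):
    # Squared-exponential kernel between two 1-D input sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# GP posterior mean: K^{-1} y once, then a kernel product per prediction.
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

X_test = np.array([0.25, 1.0, 1.75])
y_pred = rbf(X_test, X_train) @ alpha
print(np.round(y_pred, 3))   # close to simulator(X_test)
```

Full calibration then places a prior on the simulator's tuning parameters and adds a discrepancy term; the variational and empirical Bayes treatments in the papers are what make that joint fit scale.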

Key Papers

Mookyong Son, Shrijita Bhattacharya, Vojtech Kejzlar, Tapabrata Maiti, Statistical Foundation of Variational Bayes Computer Models, under revision at the Journal of Nonparametric Statistics. [Code].

Vojtech Kejzlar, Mookyong Son, Shrijita Bhattacharya, Tapabrata Maiti, A fast, scalable, and calibrated computer model emulator: an empirical Bayes approach, 2021, Statistics and Computing 31(49), 1-49 [Code].

Outlier Detection with Extreme Value Theory

Under this theme, we developed tail-adjusted boxplots to identify outliers in data with tails heavier or lighter than Gaussian. We also implemented a weighted sequential testing algorithm that detects the onset of an outlier regime, which is especially useful for insurance claims and Internet traffic data.
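The quantity everything rests on is the tail index, and the classical Hill estimator (which the trimming and outlier-detection work builds on) is a one-liner on the upper order statistics. A minimal sketch on exact Pareto data, with sample size and choice of k purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Heavy-tailed sample: classical Pareto with tail index alpha = 2.
alpha_true = 2.0
x = rng.pareto(alpha_true, 5000) + 1.0   # numpy's pareto is Lomax; +1 gives x >= 1

# Hill estimator from the k largest order statistics:
# gamma_hat = (1/k) * sum_i [log X_(i) - log X_(k+1)],  alpha_hat = 1/gamma_hat.
x_desc = np.sort(x)[::-1]
k = 500
gamma_hat = np.mean(np.log(x_desc[:k])) - np.log(x_desc[k])
alpha_hat = 1.0 / gamma_hat
print(round(alpha_hat, 2))   # close to the true index 2
```

In practice k must be chosen and the largest observations may themselves be outliers, which is precisely the gap the data-adaptive trimming of the Hill estimator addresses.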

Key Papers

Shrijita Bhattacharya, Francois Kamper, Jan Beirlant, Outlier detection based on extreme value theory and applications, 2023, Scandinavian Journal of Statistics 50(3), 1466-1502 [Code].

Shrijita Bhattacharya, Michael Kallitsis, Stilian Stoev, Data-adaptive trimming of the Hill estimator and detection of outliers in the extremes of heavy-tailed data, 2019, Electronic Journal of Statistics 13(1), 1872-1925 [Code].

Grants