Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/149090
Title: Fast Bayesian Inference of Sparse Networks with Automatic Sparsity Determination
Authors: Yu, Hang
Wu, Songwei
Xin, Luyin
Dauwels, Justin
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2020
Source: Yu, H., Wu, S., Xin, L. & Dauwels, J. (2020). Fast Bayesian Inference of Sparse Networks with Automatic Sparsity Determination. Journal of Machine Learning Research, 21.
Project: 2017-T2-2-126
Journal: Journal of Machine Learning Research
Abstract: Structure learning of Gaussian graphical models typically involves careful tuning of penalty parameters, which balance the tradeoff between data fidelity and graph sparsity. Unfortunately, this tuning is often a “black art” requiring expert experience or brute-force search. It is therefore tempting to develop tuning-free algorithms that determine the sparsity of the graph automatically from the observed data. In this paper, we propose a novel approach, named BISN (Bayesian Inference of Sparse Networks), for automatic Gaussian graphical model selection. Specifically, we regard the off-diagonal entries in the precision matrix as random variables and impose sparsity-promoting horseshoe priors on them, resulting in automatic sparsity determination. With the help of stochastic gradients, an efficient variational Bayes algorithm is derived to learn the model. We further propose a decaying recursive stochastic gradient (DRSG) method to reduce the variance of the stochastic gradients and to accelerate convergence. Our theoretical analysis shows that the time complexity of BISN scales only quadratically with the dimension, whereas the theoretical time complexity of state-of-the-art methods for automatic graphical model selection is typically a third-order function of the dimension. Furthermore, numerical results show that BISN achieves comparable or better performance than the state-of-the-art methods in terms of structure recovery, yet its computational time is several orders of magnitude shorter, especially for large dimensions.
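To make the prior construction concrete, the following minimal Python sketch (not the authors' BISN implementation; the dimension p, the global scale tau, and the diagonal-loading step are illustrative assumptions) draws a symmetric matrix whose off-diagonal entries follow a horseshoe prior, showing how the prior pulls most entries toward zero while its heavy Cauchy tails still allow a few large values:

# Minimal sketch (not the authors' BISN implementation): draw a symmetric
# "precision-like" matrix whose off-diagonal entries follow a horseshoe prior.
import numpy as np

rng = np.random.default_rng(0)
p, tau = 10, 0.1   # illustrative dimension and global shrinkage scale (assumed values)

# Horseshoe prior on each off-diagonal entry:
#   local scale lambda_ij ~ Half-Cauchy(0, 1)
#   entry k_ij ~ Normal(0, (lambda_ij * tau)^2)
lam = np.abs(rng.standard_cauchy(size=(p, p)))
off = np.triu(rng.normal(0.0, lam * tau), k=1)
K = off + off.T    # symmetric, zero diagonal for now

# Diagonal loading so that this particular draw is positive definite
# (an illustrative shortcut used only for this sketch).
K += (abs(np.linalg.eigvalsh(K).min()) + 1.0) * np.eye(p)

offdiag = np.abs(K[np.triu_indices(p, 1)])
print("smallest eigenvalue  :", np.linalg.eigvalsh(K).min())
print("median |off-diagonal|:", np.median(offdiag))
print("max |off-diagonal|   :", offdiag.max())

In a typical draw, the off-diagonal entries cluster tightly around zero while the heavy tails occasionally produce large values; this concentration-plus-heavy-tails behavior is what allows the sparsity of the graph to be inferred from the data without a hand-tuned penalty parameter.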
URI: https://jmlr.org/papers/v21/18-514.html
https://hdl.handle.net/10356/149090
ISSN: 1532-4435
Schools: School of Electrical and Electronic Engineering 
School of Physical and Mathematical Sciences 
Rights: © 2020 Hang Yu, Songwei Wu, Luyin Xin, and Justin Dauwels. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v21/18-514.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Journal Articles

Files in This Item:
File: 18-514.pdf (1.07 MB, Adobe PDF)
