ePrints@IISc

Energy Efficient Sparse Bayesian Learning using Learned Approximate Message Passing

Thomas, CK and Mundlamuri, R and Murthy, CR and Kountouris, M (2021) Energy Efficient Sparse Bayesian Learning using Learned Approximate Message Passing. In: 22nd IEEE International Workshop on Signal Processing Advances in Wireless Communications, 27-30 Sep 2021, Lucca, pp. 271-275.

Full text: IEEE_SPAWC_2021.pdf - Published Version (PDF, 1MB; restricted to registered users only)
Official URL: https://doi.org/10.1109/SPAWC51858.2021.9593220

Abstract

Sparse Bayesian learning (SBL) is a well-studied framework for sparse signal recovery, with numerous applications in wireless communications, including wideband (millimeter wave) channel estimation and user activity detection. SBL is known to be more sparsity-inducing than other priors (e.g., the Laplacian prior) and better able to handle ill-conditioned measurement matrices, hence providing superior sparse recovery performance. However, the complexity of SBL does not scale well with the dimensionality of the problem, owing to the matrix inversion step in the EM iterations. A computationally efficient version of SBL can be obtained by exploiting approximate message passing (AMP) for the inversion, coined AMP-SBL. However, this algorithm still requires a large number of iterations and careful hand-tuning to guarantee convergence for arbitrary measurement matrices. In this work, we revisit AMP-SBL from an energy-efficiency perspective. We propose a fast version of AMP-SBL leveraging deep neural networks (DNNs). The main idea is to use deep learning to unfold the iterations of the AMP-SBL algorithm into very few (no more than 10) neural network layers. The sparse vector estimation is performed using the DNN, and the hyperparameters are learned using the EM algorithm, making the approach robust to different measurement matrix models. Our results show a reduction in energy consumption, primarily due to the lower complexity and faster convergence rate. Moreover, training the neural network is simple, since the number of parameters to be learned is relatively small. © 2021 IEEE.
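
For context, the following is a minimal NumPy sketch of the classical SBL EM recursion referred to in the abstract, not the paper's learned AMP-SBL method; the function name, default parameters, and iteration count are illustrative. It makes explicit the N x N matrix inversion in each E-step, which is the scaling bottleneck that AMP-SBL and its deep-unfolded variant are designed to avoid.

import numpy as np

def sbl_em(A, y, sigma2=1e-2, num_iters=50):
    # Classical SBL via EM for y = A x + n with per-coefficient prior
    # variances gamma (illustrative sketch only, not the paper's algorithm).
    M, N = A.shape
    gamma = np.ones(N)                      # hyperparameters (prior variances)
    mu = np.zeros(N)
    for _ in range(num_iters):
        # E-step: Gaussian posterior over x given the current hyperparameters.
        # The N x N inverse below is the O(N^3) step that AMP-based and
        # unfolded (learned) variants replace with cheaper message passing.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ (A.T @ y) / sigma2
        # M-step: update hyperparameters from the posterior moments.
        gamma = mu**2 + np.diag(Sigma)
    return mu, gamma

In the paper's learned approach, the per-iteration inference is instead carried out by a small number of unfolded neural network layers, while the hyperparameters are still updated via EM.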

Item Type: Conference Paper
Publication: IEEE Workshop on Signal Processing Advances in Wireless Communications, SPAWC
Publisher: Institute of Electrical and Electronics Engineers Inc.
Additional Information: The copyright for this article belongs to Institute of Electrical and Electronics Engineers Inc.
Keywords: Complex networks; Deep neural networks; Energy efficiency; Energy utilization; Matrix algebra; Millimeter waves; Multilayer neural networks; Network layers; Signal reconstruction; Approximate message passing; Bayesian learning; Deep unfolding; Energy efficient; Measurement matrix; Message passing; Neural networks; Sparse Bayesian learning; Unfolding
Department/Centre: Division of Electrical Sciences > Electrical Communication Engineering
Date Deposited: 01 Feb 2022 12:40
Last Modified: 01 Feb 2022 12:40
URI: http://eprints.iisc.ac.in/id/eprint/71211
