
Training Sparse Neural Networks

Srinivas, Suraj and Subramanya, Akshayvarun and Babu, R Venkatesh (2017) Training Sparse Neural Networks. In: 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017, 21-26 July 2017, Honolulu, HI, USA, pp. 455-462.

PDF (Published Version): IEEE-CVPRW 2017_2017_455-462_2017 .pdf (674 kB)
Official URL: https://doi.org/10.1109/CVPRW.2017.61

Abstract

The emergence of deep neural networks has seen human-level performance on large-scale computer vision tasks such as image classification. However, these deep networks typically contain a large number of parameters due to dense matrix multiplications and convolutions. As a result, these architectures are highly memory-intensive, making them less suitable for embedded vision applications. Sparse computations are known to be much more memory-efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks, resulting in highly sparse neural network models.
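The record itself contains no code, but the gating idea described in the abstract can be sketched in a few lines. The following is a minimal, illustrative PyTorch example, not the authors' implementation: each weight gets a multiplicative gate, binarised in the forward pass with a straight-through estimator so the soft gates still receive gradients. The class and parameter names (GatedLinear, gate_logits), the 0.5 threshold, and the initialisation are all assumptions; the paper's actual spike-and-slab formulation and training procedure are in the linked PDF.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedLinear(nn.Module):
    """Linear layer with one gate variable per weight (illustrative sketch,
    not the paper's exact parameterisation)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # sigmoid(gate_logits) acts as a soft gate in [0, 1] per weight;
        # initialised slightly positive so all gates start open.
        self.gate_logits = nn.Parameter(torch.full((out_features, in_features), 0.1))

    def forward(self, x):
        soft = torch.sigmoid(self.gate_logits)
        hard = (soft > 0.5).float()
        # Straight-through estimator: binary gates in the forward pass,
        # gradients flow through the soft gates in the backward pass.
        gate = hard + soft - soft.detach()
        return F.linear(x, self.weight * gate, self.bias)

layer = GatedLinear(784, 256)
out = layer(torch.randn(32, 784))
# During training, a sparsity-inducing penalty on the gates (e.g. an L1 term,
# standing in for the paper's spike-and-slab prior) would be added to the
# task loss to push gates towards zero.
off = (torch.sigmoid(layer.gate_logits) <= 0.5).float().mean()
print(f"fraction of weights gated off: {off.item():.2%}")

At test time, the zeroed gates can be folded into the weight matrix, yielding a sparse model that skips the corresponding multiplications entirely.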

Item Type: Conference Paper
Publisher: IEEE Computer Society
Additional Information: The copyright for this article belongs to the Authors.
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 09 Jun 2022 05:10
Last Modified: 09 Jun 2022 05:10
URI: https://eprints.iisc.ac.in/id/eprint/73196
