
Neuromorphic Time-Multiplexed Reservoir Computing With On-the-Fly Weight Generation for Edge Devices

Gupta, S and Chakraborty, S and Thakur, CS (2022) Neuromorphic Time-Multiplexed Reservoir Computing With On-the-Fly Weight Generation for Edge Devices. In: IEEE Transactions on Neural Networks and Learning Systems, 33 (6). pp. 2676-2685.

PDF (Published Version, restricted to registered users): IEEE_tra_neu_net_lea_sys_33-6_2676-2685_2022.pdf
Official URL: https://doi.org/10.1109/TNNLS.2021.3085165

Abstract

The human brain has evolved to perform complex and computationally expensive cognitive tasks, such as audio-visual perception and object detection, with ease. For instance, the brain can recognize speech in different dialects and perform other cognitive tasks, such as attention, memory, and motor control, with just 20 W of power consumption. Taking inspiration from neural systems, we propose a low-power neuromorphic hardware architecture to perform classification on temporal data at the edge. The proposed architecture uses a neuromorphic cochlea model for feature extraction and a reservoir computing (RC) framework as the classifier. In the proposed hardware architecture, the RC framework is modified for on-the-fly generation of reservoir connectivity, along with binary feedforward and reservoir weights. In addition, a large reservoir is split into multiple small reservoirs for efficient use of hardware resources. These modifications reduce the computational and memory resources required, thereby resulting in a lower power budget. The proposed classifier is validated on speech and human activity recognition (HAR) tasks. We have prototyped our hardware architecture on an Intel Cyclone 10 LP series field-programmable gate array (FPGA), consuming only 4790 logic elements (LEs) and 34.9 kB of memory, making it a strong candidate for edge computing applications. Moreover, we have implemented a complete system for speech recognition with the feature extraction block (cochlea model) and the proposed classifier, utilizing 15,532 LEs and 38.4 kB of memory. By using the proposed idea of multiple small reservoirs along with on-the-fly generation of binary reservoir weights, our architecture can reduce the power consumption and memory requirement by an order of magnitude compared to existing FPGA models for speech recognition tasks of similar complexity.
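The abstract's two central ideas, regenerating binary weights on the fly from a seed instead of storing them, and replacing one large reservoir with several small ones whose states are concatenated, can be illustrated with a minimal software sketch. The Python below is not the authors' FPGA implementation: the LFSR tap positions, reservoir sizes, seeds, leak rate, and function names are illustrative assumptions chosen only to show the technique.

import numpy as np

def lfsr_bits(seed, n_bits, taps=(16, 14, 13, 11)):
    """Pseudo-random bit stream from a 16-bit Fibonacci LFSR (illustrative taps)."""
    state = seed & 0xFFFF
    bits = np.empty(n_bits, dtype=np.int8)
    for i in range(n_bits):
        bits[i] = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << 15)
    return bits

def run_small_reservoir(u, seed, n_neurons=25, leak=0.5):
    """Leaky reservoir whose +/-1 input and recurrent weights are regenerated
    from the seed at every run instead of being stored in memory."""
    n_in = u.shape[1]
    # On-the-fly binary weights: LFSR bit 0 -> -1, bit 1 -> +1
    w_in = 2 * lfsr_bits(seed, n_neurons * n_in).reshape(n_neurons, n_in) - 1
    w_res = 2 * lfsr_bits(seed ^ 0xBEEF, n_neurons * n_neurons).reshape(n_neurons, n_neurons) - 1
    w_res = w_res / np.sqrt(n_neurons)   # crude scaling of the recurrent weights
    x = np.zeros(n_neurons)
    states = []
    for t in range(u.shape[0]):
        pre = w_in @ u[t] + w_res @ x
        x = (1 - leak) * x + leak * np.tanh(pre)
        states.append(x.copy())
    return np.array(states)

# Several small reservoirs in place of one large one; their states are
# concatenated and fed to a single linear readout (trained separately).
u = np.random.randn(100, 8)              # toy feature stream (e.g. cochlea channel outputs)
seeds = [0xACE1, 0x1D0F, 0x7331, 0x0F0F]
features = np.hstack([run_small_reservoir(u, s) for s in seeds])
print(features.shape)                    # (100, 4 * 25)

In a hardware setting of the kind the abstract describes, only the seeds and the trained readout weights would need to be stored, since the binary connectivity can be regenerated on demand, which is consistent with the memory savings the paper reports.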

Item Type: Journal Article
Publication: IEEE Transactions on Neural Networks and Learning Systems
Publisher: Institute of Electrical and Electronics Engineers Inc.
Additional Information: The copyright for this article belongs to the Institute of Electrical and Electronics Engineers Inc.
Keywords: Budget control; Electric power utilization; Extraction; Feature extraction; Field programmable gate arrays (FPGA); Memory architecture; Object detection; Storms; Time division multiplexing, Audio-visual perceptions; Computing applications; Hardware architecture; Human activity recognition; Neuromorphic hardwares; On-the-fly generation; Proposed architectures; Reservoir connectivities, Speech recognition, brain; computer, Brain; Computers; Neural Networks, Computer
Department/Centre: Division of Electrical Sciences > Electronic Systems Engineering (Formerly Centre for Electronic Design & Technology)
Date Deposited: 29 Sep 2022 11:57
Last Modified: 29 Sep 2022 11:57
URI: https://eprints.iisc.ac.in/id/eprint/76850
