
Event-LSTM: An Unsupervised and Asynchronous Learning-Based Representation for Event-Based Data

Annamalai, L and Ramanathan, V and Thakur, CS (2022) Event-LSTM: An Unsupervised and Asynchronous Learning-Based Representation for Event-Based Data. In: IEEE Robotics and Automation Letters, 7 (2). pp. 4678-4685.

PDF: IEEE_rob_aut_let_7-2_4678-4685_2022.pdf - Published Version (1MB)
Restricted to Registered users only
Official URL: https://doi.org/10.1109/LRA.2022.3151426

Abstract

Event cameras are activity-driven, bio-inspired vision sensors that respond asynchronously to intensity changes, producing sparse data known as events. They offer potential advantages over conventional cameras, such as high temporal resolution, low latency, and low power consumption. Given the sparse and asynchronous spatio-temporal nature of the data, event processing is predominantly solved by transforming events into a 2D spatial grid representation and applying standard vision pipelines. In this work, we propose an auto-encoder architecture named Event-LSTM to generate the 2D spatial grid representation. Our approach has two main advantages: 1) unsupervised, task-agnostic learning of the 2D spatial grid, which is ideally suited to the event domain, where task-specific labelled data is scarce; 2) asynchronous sampling of the event 2D spatial grid, which leads to a speed-invariant and energy-efficient representation. Evaluations on appearance-based and motion-based tasks demonstrate that our approach yields improvements over state-of-the-art techniques while providing the flexibility to learn the spatial grid representation from unlabelled data.
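The abstract outlines the core idea: an LSTM-based model that summarizes the asynchronous event stream at each pixel into one cell of a 2D spatial grid, which can then be sampled at an arbitrary reference time and passed to standard vision pipelines. The sketch below illustrates only that encoding step and is not the authors' implementation; the class name EventGridLSTM, the per-event features (time since event, polarity), and all layer sizes are assumptions made for the example, and the unsupervised auto-encoder training described in the paper is omitted.

# Minimal sketch (illustrative, not the published Event-LSTM code): an LSTM that
# maps each pixel's event sequence to one value of a 2D spatial grid. Events are
# assumed to be (x, y, t, polarity) tuples; feature choices and sizes are guesses.
import torch
import torch.nn as nn

class EventGridLSTM(nn.Module):
    def __init__(self, height, width, hidden_size=16):
        super().__init__()
        self.height, self.width = height, width
        # Each event is encoded as a 2-D feature: (time before the sampling instant, polarity).
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.project = nn.Linear(hidden_size, 1)  # final hidden state -> one grid cell

    def forward(self, events, t_ref):
        """events: (N, 4) tensor of (x, y, t, polarity); t_ref: sampling time."""
        grid = torch.zeros(self.height, self.width)
        xs, ys = events[:, 0].long(), events[:, 1].long()
        for y in range(self.height):
            for x in range(self.width):
                mask = (xs == x) & (ys == y)
                if not mask.any():
                    continue  # pixels with no events keep a zero entry (sparse data)
                feats = torch.stack([t_ref - events[mask, 2],   # time since each event
                                     events[mask, 3]], dim=-1)  # polarity of each event
                _, (h_n, _) = self.lstm(feats.unsqueeze(0))     # encode the pixel's sequence
                grid[y, x] = self.project(h_n[-1]).squeeze()
        return grid  # 2D spatial grid, consumable by standard vision pipelines

In this reading, sampling the representation at any timestamp t_ref simply means calling the model on the events received so far, which is what makes the grid generation asynchronous rather than tied to fixed frame intervals.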

Item Type: Journal Article
Publication: IEEE Robotics and Automation Letters
Publisher: Institute of Electrical and Electronics Engineers Inc.
Additional Information: The copyright for this article belongs to the Institute of Electrical and Electronics Engineers Inc.
Keywords: Cameras; Data handling; Energy efficiency; Job analysis; Long short-term memory; Deep learning; Deep learning for visual perception; Deep learning method; Event camera; Features extraction; Learning methods; LSTM; Representation learning; Spatial resolution; Task analysis; Visual perception; Metadata
Department/Centre: Division of Electrical Sciences > Electronic Systems Engineering (Formerly Centre for Electronic Design & Technology)
Date Deposited: 24 Jun 2022 12:09
Last Modified: 24 Jun 2022 12:09
URI: https://eprints.iisc.ac.in/id/eprint/73713
