ePrints@IISc

DSLR: Dynamic to Static LiDAR Scan Reconstruction Using Adversarially Trained Autoencoder

Kumar, P and Sahoo, S and Shah, V and Kondameedi, V and Jain, A and Verma, A and Bhattacharyya, C and Viswanathan, V (2021) DSLR: Dynamic to Static LiDAR Scan Reconstruction Using Adversarially Trained Autoencoder. In: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2 February 2021 through 9 February 2021, Virtual, pp. 1836-1844.

PDF: 2105.12774.pdf (17MB)
Official URL: https://arxiv.org/abs/2105.12774

Abstract

Accurate reconstruction of static environments from LiDAR scans of scenes containing dynamic objects, which we refer to as Dynamic to Static Translation (DST), is an important area of research in Autonomous Navigation. This problem has recently been explored for visual SLAM, but to the best of our knowledge no prior work addresses DST for LiDAR scans. The problem is of critical importance due to the widespread adoption of LiDAR in Autonomous Vehicles. We show that state-of-the-art methods developed for the visual domain perform poorly when adapted to LiDAR scans. We develop DSLR, a deep generative model that learns a mapping from a dynamic scan to its static counterpart through an adversarially trained autoencoder. Our model yields the first solution for DST on LiDAR that generates static scans without using explicit segmentation labels. DSLR cannot always be applied to real-world data due to the lack of paired dynamic-static scans. Using Unsupervised Domain Adaptation, we propose DSLR-UDA for transfer to real-world data and experimentally show that it performs well in real-world settings. Additionally, if segmentation information is available, we extend DSLR to DSLR-Seg to further improve reconstruction quality. DSLR gives state-of-the-art performance on simulated and real-world datasets, with at least a 4× improvement over existing baselines. We show that DSLR, unlike the existing baselines, is a practically viable model, with reconstruction quality within tolerable limits for tasks pertaining to autonomous navigation, such as SLAM in dynamic environments.
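Deep models for LiDAR scan reconstruction, such as the autoencoder described above, commonly operate on a 2D range-image representation of the scan rather than on the raw point cloud. The sketch below is illustrative only (it is not taken from the paper) and shows the standard spherical projection used to build such a range image; the image size and vertical field-of-view values are assumptions roughly matching a 64-beam sensor.

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image
    via spherical projection. Empty pixels are left at 0.
    h, w and the field-of-view bounds are illustrative assumptions."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    r = np.linalg.norm(points, axis=1)                       # range per point
    yaw = np.arctan2(points[:, 1], points[:, 0])             # azimuth in [-pi, pi]
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-8))    # elevation angle

    # map yaw to column index, pitch to row index
    u = (0.5 * (1.0 - yaw / np.pi) * w).astype(int)
    v = ((1.0 - (pitch - fov_down) / fov) * h).astype(int)
    u = np.clip(u, 0, w - 1)
    v = np.clip(v, 0, h - 1)

    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r
    return img

# Two synthetic points, straight ahead and to the left of the sensor:
pts = np.array([[10.0, 0.0, 0.0],
                [0.0, 5.0, 0.0]])
img = to_range_image(pts)   # (64, 1024) image with two filled pixels
```

An autoencoder can then treat `img` as an ordinary single-channel image, which is what makes convolutional encoder-decoder architectures applicable to LiDAR data.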

Item Type: Conference Proceedings
Publication: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Publisher: Association for the Advancement of Artificial Intelligence
Additional Information: The copyright for this article belongs to the Association for the Advancement of Artificial Intelligence
Keywords: Artificial intelligence; Optical radar; Autoencoders; Autonomous navigation; Autonomous vehicles; Dynamic objects; Real-world; Reconstruction quality; Static environment; Static translation; Visual SLAM; Learning systems
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 29 May 2022 07:31
Last Modified: 29 May 2022 07:31
URI: https://eprints.iisc.ac.in/id/eprint/72760
