ePrints@IISc

Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV with Limited Environment Knowledge

Singla, A and Padakandla, S and Bhatnagar, S (2021) Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV with Limited Environment Knowledge. In: IEEE Transactions on Intelligent Transportation Systems, 22 (1). pp. 107-118.

PDF: iee_tra_int_tra_sys_22-1_107-118_2021 - Published Version (3MB)
Official URL: https://doi.org/10.1109/TITS.2019.2954952

Abstract

This paper presents our method for enabling a UAV quadrotor, equipped with a monocular camera, to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. Compared to obstacle avoidance in ground vehicular robots, UAV navigation brings additional challenges because UAV motion is no longer constrained to a well-defined indoor ground or street environment. Unlike ground vehicular robots, a UAV has to navigate across more types of obstacles: objects such as decorative items, furnishings, ceiling fans, sign-boards, and tree branches are also potential obstacles for a UAV. Thus, methods of obstacle avoidance developed for ground robots are clearly inadequate for UAV navigation. Current control methods using monocular images for UAV obstacle avoidance are heavily dependent on environment information, yet these controllers do not fully retain and utilize the extensively available information about the ambient environment for decision making. We propose a deep reinforcement learning based method for UAV obstacle avoidance (OA) that retains and exploits exactly this information. The crucial idea in our method is the concept of partial observability and how a UAV can retain relevant information about the environment structure to make better future navigation decisions. Our OA technique uses recurrent neural networks with temporal attention and provides better results than prior works in terms of distance covered without collisions. In addition, our technique has a high inference rate and reduces power wastage by minimizing oscillatory motion of the UAV.
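The paper's full architecture is not reproduced here, but its central mechanism — temporal attention, i.e., weighting a window of recent recurrent hidden states by learned relevance before acting, so the agent can emphasize the most informative past observations under partial observability — can be sketched in plain Python. All names, shapes, and values below are illustrative assumptions, not taken from the paper:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention(hidden_states, w):
    """Summarize T recurrent hidden states into one context vector.

    hidden_states: list of T vectors (each of length d), e.g. the last
                   T hidden states produced by an RNN over camera frames
    w:             hypothetical learned scoring vector of length d
    Returns (context, alpha): the attention-weighted summary vector and
    the per-timestep attention weights (non-negative, summing to 1).
    """
    # One relevance score per timestep: dot product of state with w.
    scores = [sum(h_i * w_i for h_i, w_i in zip(h, w)) for h in hidden_states]
    alpha = softmax(scores)
    # Context = attention-weighted sum of the hidden states.
    d = len(hidden_states[0])
    context = [sum(a * h[j] for a, h in zip(alpha, hidden_states))
               for j in range(d)]
    return context, alpha

# Toy usage: 4 timesteps of 3-dimensional hidden states.
H = [[0.1, 0.2, 0.3], [0.4, 0.1, 0.0], [0.2, 0.2, 0.2], [0.9, 0.5, 0.1]]
w = [1.0, -0.5, 0.25]
context, alpha = temporal_attention(H, w)
```

In a deep RL agent, the context vector would feed the layer that estimates action values, letting decisions depend on a weighted memory of recent observations rather than only the current frame.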

Item Type: Journal Article
Publication: IEEE Transactions on Intelligent Transportation Systems
Additional Information: The copyright for this article belongs to Authors
Keywords: Decision making; Navigation; Recurrent neural networks; Reinforcement learning; Robots; Ambient environment; Control methods; Decorative items; Environment information; Indoor environment; Monocular cameras; Oscillatory motion; Partial observability; Unmanned aerial vehicles (UAV)
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Division of Interdisciplinary Sciences > Robert Bosch Centre for Cyber Physical Systems
Date Deposited: 31 Dec 2021 05:59
Last Modified: 31 Dec 2021 05:59
URI: http://eprints.iisc.ac.in/id/eprint/67682
