ePrints@IISc

Novel First Order Bayesian Optimization with an Application to Reinforcement Learning

J, PK and Penubothula, S and Kamanchi, C and Bhatnagar, S (2020) Novel First Order Bayesian Optimization with an Application to Reinforcement Learning. In: Applied Intelligence.

Full text: app_int_2020.pdf (PDF, 2 MB) - Published Version. Restricted to registered users only; a copy may be requested.
Official URL: https://dx.doi.org/10.1007/s10489-020-01896-w

Abstract

Zeroth Order Bayesian Optimization (ZOBO) methods optimize an unknown function based on its black-box evaluations at the query locations. Unlike most optimization procedures, ZOBO methods fail to utilize gradient information even when it is available. On the other hand, First Order Bayesian Optimization (FOBO) methods exploit the available gradient information to arrive at better solutions faster. However, the existing FOBO methods do not utilize the crucial information that the gradient is zero at the optima. Further, the inherent sequential nature of the FOBO methods incurs high computational cost, limiting their wide applicability. To alleviate the aforementioned difficulties of FOBO methods, we propose a relaxed statistical model that leverages the gradient information and directly searches for points where the gradient vanishes. To accomplish this, we develop novel acquisition algorithms that search for global optima effectively. Unlike the existing FOBO methods, the proposed methods are parallelizable. Through extensive experimentation on standard test functions, we compare the performance of our methods with that of the existing methods. Furthermore, we explore an application of the proposed FOBO methods in the context of policy gradient reinforcement learning. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
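
The abstract describes FOBO as modeling the gradient of the unknown objective and directly searching for points where it vanishes. The sketch below is a minimal, illustrative Python toy of that general idea, not the paper's relaxed statistical model or acquisition algorithms: it assumes independent Gaussian-process surrogates for each gradient component and an ad hoc acquisition (the posterior expected squared gradient), whose minimizer over random candidates is taken as the next query point. The objective, kernel, and all hyperparameters are arbitrary placeholders.

    import numpy as np

    # Illustrative first-order Bayesian optimization (FOBO) style loop:
    # each partial derivative of the unknown objective is modeled with an
    # independent Gaussian process, and the next query point is chosen where
    # the posterior suggests the gradient is likely to vanish. This is a toy
    # sketch, not the acquisition algorithms proposed in the paper.

    def rbf_kernel(A, B, length_scale=0.5, variance=1.0):
        """Squared-exponential kernel matrix between row-wise point sets A and B."""
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return variance * np.exp(-0.5 * d2 / length_scale**2)

    def gp_posterior(X, y, Xs, noise=1e-6):
        """GP posterior mean and variance at test points Xs given data (X, y)."""
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Ks = rbf_kernel(X, Xs)
        Kss = rbf_kernel(Xs, Xs)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mean = Ks.T @ alpha
        v = np.linalg.solve(L, Ks)
        var = np.diag(Kss) - np.sum(v**2, axis=0)
        return mean, np.maximum(var, 1e-12)

    def objective(x):
        """Placeholder objective with a known minimum at the origin."""
        return np.sum(x**2)

    def gradient(x):
        """Gradient of the placeholder objective; FOBO assumes such evaluations exist."""
        return 2.0 * x

    rng = np.random.default_rng(0)
    dim, n_init, n_iter = 2, 5, 20
    X = rng.uniform(-2, 2, size=(n_init, dim))        # initial query locations
    G = np.array([gradient(x) for x in X])            # observed gradients

    for _ in range(n_iter):
        candidates = rng.uniform(-2, 2, size=(512, dim))
        # Acquisition: posterior expected squared gradient, summed over
        # components (smaller |posterior mean| and variance => better).
        score = np.zeros(len(candidates))
        for d in range(dim):
            mu, var = gp_posterior(X, G[:, d], candidates)
            score += mu**2 + var
        x_next = candidates[np.argmin(score)]
        X = np.vstack([X, x_next])
        G = np.vstack([G, gradient(x_next)])

    best = X[np.argmin([objective(x) for x in X])]
    print("best point found:", best, "objective:", objective(best))

Note that minimizing the expected squared gradient alone would accept any stationary point (maxima and saddle points included); the methods proposed in the paper are designed to search for global optima effectively and to be parallelizable, which this sequential toy loop does not attempt.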

Item Type: Journal Article
Publication: Applied Intelligence
Publisher: Springer
Additional Information: The copyright of this article belongs to Springer.
Keywords: Artificial intelligence; Bayesian optimization; Computational costs; Global optimum; Gradient information; Optimization procedures; Policy gradient reinforcement learning; Standard test functions; Statistical modeling; Reinforcement learning
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 11 Dec 2020 10:33
Last Modified: 11 Dec 2020 10:33
URI: http://eprints.iisc.ac.in/id/eprint/66847