ePrints@IISc

Time recursive control of stochastic dynamical systems using forward dynamics and applications

Mamajiwala, M and Roy, D (2022) Time recursive control of stochastic dynamical systems using forward dynamics and applications. In: International Journal of Mechanical Sciences, 216 .

Official URL: https://doi.org/10.1016/j.ijmecsci.2021.106969


The solution of a stochastic optimal control problem may be associated with that of the Hamilton–Jacobi–Bellman (HJB) equation, which is a second order partial differential equation subject to a terminal condition. When this equation is semilinear and satisfies certain other constraints, it can be solved via a nonlinear version of the Feynman–Kac formula. According to this approach, the solution to the HJB equation can be obtained by simulating an associated pair of partly coupled forward–backward stochastic differential equations. Although an elegant way to interpret and solve a partial differential equation, simulating the system of forward–backward equations can be computationally inefficient. In this work, the HJB equation pertaining to the optimal control problem is reformulated such that instead of the given terminal condition, it is now subject to an appropriate initial condition. In the process, while the total cost associated with the control problem remains unchanged, pathwise solutions may not. Associated with the new partial differential equation, we then derive a set of stochastic differential equations whose solutions move only forward in time. This approach has a significant computational advantage over the original formulation. Moreover, since the forward–backward approach generally requires simulating stochastic differential equations from the current time to the terminal time at every step, the integration errors may accumulate and carry forward in estimating the control. This error is particularly high initially, since the time of integration is the longest there. The proposed method, numerically implemented for the control of stochastically excited oscillators, is free of such errors and hence more robust. Unsurprisingly, it also exhibits lower sampling variance. © 2021 Elsevier Ltd
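To make the forward-only simulation idea concrete, the sketch below integrates a stochastically excited, feedback-controlled linear oscillator purely forward in time with the Euler–Maruyama scheme. This is a generic illustration of forward SDE simulation, not the paper's algorithm: the oscillator parameters, the linear feedback `control`, and all function names are hypothetical choices made for the example.

```python
import numpy as np

def euler_maruyama_oscillator(x0, v0, control, T=10.0, dt=1e-3,
                              omega=1.0, zeta=0.05, sigma=0.2, seed=0):
    """Forward-only Euler-Maruyama simulation of the controlled SDE pair
        dx = v dt
        dv = (-2*zeta*omega*v - omega**2 * x + u(t, x, v)) dt + sigma dW,
    i.e. a damped oscillator under additive white noise and feedback u.
    Returns the sampled displacement and velocity paths.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xs = np.empty(n + 1)
    vs = np.empty(n + 1)
    xs[0], vs[0] = x0, v0
    x, v = x0, v0
    for k in range(n):
        t = k * dt
        u = control(t, x, v)                     # feedback control at current state
        dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        x_next = x + v * dt
        v_next = v + (-2*zeta*omega*v - omega**2 * x + u) * dt + sigma * dW
        x, v = x_next, v_next
        xs[k + 1], vs[k + 1] = x, v
    return xs, vs

# Simple linear state feedback as a stand-in for the optimal control:
xs, vs = euler_maruyama_oscillator(1.0, 0.0,
                                   lambda t, x, v: -2.0 * x - 2.0 * v)
```

Because every quantity needed at step k is available at step k, the loop never has to re-integrate over the remaining horizon, which is the computational advantage the abstract attributes to the forward-in-time reformulation over forward–backward simulation.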

Item Type: Journal Article
Publication: International Journal of Mechanical Sciences
Publisher: Elsevier Ltd
Additional Information: The copyright for this article belongs to Elsevier Ltd
Keywords: Dynamical systems; Errors; Integral equations; Nonlinear equations; Optimal control systems; Oscillators (mechanical); Oscillistors; Partial differential equations; Stochastic control systems; Stochastic models, Condition; Control of chaos; Hamilton Jacobi Bellman equation; Ito formula; Lorenz 1963 model; Mechanical oscillators; Nonlinear mechanical oscillator; Recursive control; Stochastic differential equations; Stochastic optimal control, Stochastic systems
Department/Centre: Division of Mechanical Sciences > Civil Engineering
Date Deposited: 20 Jan 2022 06:49
Last Modified: 20 Jan 2022 06:49
URI: http://eprints.iisc.ac.in/id/eprint/70968
