
When to Intervene: Learning Optimal Intervention Policies for Critical Events

Venkata, ND and Bhattacharyya, C (2022) When to Intervene: Learning Optimal Intervention Policies for Critical Events. In: 36th Conference on Neural Information Processing Systems, NeurIPS 2022, 28 November - 9 December 2022, New Orleans.

PDF: neurips_2022.pdf - Published Version (770kB). Restricted to registered users only.

Abstract

Providing a timely intervention before the onset of a critical event, such as a system failure, is important in many industrial settings. Before the onset of the critical event, systems typically exhibit behavioral changes that often manifest as stochastic covariate observations, which may be leveraged to trigger intervention. In this paper, for the first time, we formulate the problem of finding an optimally timed intervention (OTI) policy as minimizing the expected residual time to event, subject to a constraint on the probability of missing the event. Existing machine learning approaches to intervention on critical events focus on predicting event occurrence within a pre-defined window (a classification problem) or predicting the time to event (a regression problem). Interventions are then triggered by setting model thresholds. These approaches are heuristic-driven and lack optimality guarantees. To model the evolution of system behavior, we introduce the concept of a hazard rate process. We show that the OTI problem is equivalent to an optimal stopping problem on the associated hazard rate process. This key link has not been explored in the literature. Under Markovian assumptions on the hazard rate process, we show that an OTI policy at any time can be determined analytically from the conditional hazard rate function at that time. Further, we show that our theory includes, as a special case, the important class of neural hazard rate processes generated by recurrent neural networks (RNNs). To model such processes, we propose a dynamic deep recurrent survival analysis (DDRSA) architecture, which introduces an RNN encoder into the static DRSA setting. Finally, we demonstrate RNN-based OTI policies experimentally and show that they outperform popular intervention methods.
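
As a rough illustration of the pipeline the abstract describes, the sketch below (written for this page, not taken from the paper or its released code) shows a DDRSA-style model in PyTorch: a GRU encoder summarises the covariate stream observed so far, a decoder GRU rolls forward over a fixed horizon, and a linear head emits discrete-time conditional hazard rates, as in static DRSA. The names (DDRSASketch, should_intervene), layer sizes, horizon, and the thresholding rule parameterised by alpha are all illustrative assumptions; the paper derives the optimal intervention rule analytically from the conditional hazard rate function, whereas the trigger here is only a stand-in.

# Minimal, illustrative DDRSA-style sketch (assumptions, not the authors' code).
import torch
import torch.nn as nn

class DDRSASketch(nn.Module):
    def __init__(self, x_dim: int, hidden_dim: int = 32, horizon: int = 50):
        super().__init__()
        self.horizon = horizon
        # Encoder RNN: consumes covariate observations x_1..x_t.
        self.encoder = nn.GRU(x_dim, hidden_dim, batch_first=True)
        # Decoder RNN: rolls forward over the prediction horizon,
        # conditioned on the encoder's final hidden state.
        self.decoder = nn.GRU(1, hidden_dim, batch_first=True)
        # Maps each decoder state to a conditional hazard rate in (0, 1).
        self.hazard_head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, t, x_dim) covariates observed up to the current time.
        _, h_t = self.encoder(x)                        # (1, batch, hidden)
        batch = x.size(0)
        # Dummy step inputs; the decoder is driven by its hidden state.
        steps = torch.zeros(batch, self.horizon, 1)
        out, _ = self.decoder(steps, h_t)               # (batch, horizon, hidden)
        # Discrete-time conditional hazards: P(event at step k | survived to k).
        return torch.sigmoid(self.hazard_head(out)).squeeze(-1)

def should_intervene(hazards: torch.Tensor, alpha: float = 0.05) -> torch.Tensor:
    # Illustrative trigger only: intervene once the predicted probability of
    # the event occurring within the horizon exceeds 1 - alpha, with alpha
    # playing the role of the miss-probability budget from the abstract.
    survival = torch.cumprod(1.0 - hazards, dim=-1)     # P(no event up to step k)
    event_within_horizon = 1.0 - survival[..., -1]
    return event_within_horizon > 1.0 - alpha

# Usage: score a batch of covariate histories and decide whether to intervene.
model = DDRSASketch(x_dim=4)
hazards = model(torch.randn(8, 20, 4))                  # (8, horizon) hazard rates
print(should_intervene(hazards))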

Item Type: Conference Paper
Publication: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Additional Information: The copyright for this article belongs to the Neural Information Processing Systems Foundation.
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Division of Interdisciplinary Sciences > Robert Bosch Centre for Cyber Physical Systems
Date Deposited: 20 Jul 2023 10:03
Last Modified: 20 Jul 2023 10:03
URI: https://eprints.iisc.ac.in/id/eprint/82499
