RSAC: A Robust Deep Reinforcement Learning Strategy for Dimensionality Perturbation

Article Type

Research Article

Publication Title

IEEE Transactions on Emerging Topics in Computational Intelligence

Abstract

Artificial agents in autonomous systems such as self-driving vehicles, robots, and drones make predictions from data produced by fusing values from many sources, such as different sensors. Sensor malfunction is a known problem in the robotics domain. A correct sensor observation corresponds to a true estimate of the corresponding dimension of the state vector in deep reinforcement learning (DRL); noisy estimates from these sensors therefore cause dimensionality impairment in the state. DRL policies have been shown to falter, choosing wrong actions under adversarial attack or modeling error, so it is necessary to examine the effect of dimensionality perturbation on neural policies. To this end, we analyze whether subtle dimensionality perturbation, caused by noise in the input source at test time, distracts the agent's decisions. We also propose RSAC (robust soft actor-critic), an approach that uses the noisy state for prediction while estimating the target from the nominal observation. We find that injecting such noisy input during training does not hamper learning. We ran our simulations in the OpenAI Gym MuJoCo (Walker2d-V2) environment, and our empirical results demonstrate that the proposed approach matches SAC's performance while being robust to test-time dimensionality perturbation.
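The core training idea stated above (predict from the noisy state, compute the target from the nominal state) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the linear Q-function, the function names (`perturb_dimension`, `rsac_style_td_update`), and all hyperparameters are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_dimension(state, dim, sigma=0.5):
    """Simulate a faulty sensor: add Gaussian noise to one state dimension."""
    noisy = state.copy()
    noisy[dim] += rng.normal(0.0, sigma)
    return noisy

def q_value(w, state, action):
    """Toy linear Q-function: Q(s, a) = w . [s, a]."""
    return float(w @ np.concatenate([state, [action]]))

def rsac_style_td_update(w, action, reward, next_state,
                         noisy_state, gamma=0.99, lr=1e-2):
    """One TD-style step in the spirit of the abstract:
    the target uses the nominal observation, the prediction the noisy one."""
    target = reward + gamma * q_value(w, next_state, action)  # nominal target
    pred = q_value(w, noisy_state, action)                    # noisy prediction
    grad = (pred - target) * np.concatenate([noisy_state, [action]])
    return w - lr * grad

# Usage: perturb one dimension of a 3-D state, then take one update step.
state = np.array([0.1, -0.2, 0.3])
noisy = perturb_dimension(state, dim=1)
w = np.zeros(4)
w = rsac_style_td_update(w, action=0.5, reward=1.0,
                         next_state=state, noisy_state=noisy)
```

In the full method this asymmetry would live inside SAC's critic loss, with the actor and critic networks receiving the perturbed state while the bootstrap target is built from the clean one.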

DOI

10.1109/TETCI.2022.3157003

Publication Date

1-1-2022
