Average reward actor-critic with deterministic policy search
Material type: Book
Language: English
Publication details: Bangalore: IISc, 2023.
Description: viii, 143 p.: col. ill.; 29.1 cm × 20.5 cm; e-Thesis (3.477 MB)
Dissertation: MTech (Res); 2023; Computer Science and Automation
DDC classification: 600 NAM
Item type | Current library | Call number | Status | Date due | Barcode
---|---|---|---|---|---
E-BOOKS | JRD Tata Memorial Library | 600 NAM | Available | | ET00188
Includes bibliographic references and index.
The average reward criterion is relatively less studied, as most existing works in the reinforcement learning literature consider the discounted reward criterion. A few recent works present on-policy average reward actor-critic algorithms, but the off-policy average reward actor-critic setting remains relatively unexplored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) algorithm. We first show asymptotic convergence using an ODE-based analysis. Subsequently, we provide a finite-time analysis of the resulting stochastic approximation scheme with a linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We compare the average reward performance of the proposed ARO-DDPG algorithm with state-of-the-art on-policy average reward actor-critic algorithms on MuJoCo-based environments and observe better empirical performance.
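The ingredients the abstract names — a deterministic policy, a critic with a linear function approximator, an average-reward (undiscounted) TD error, and an off-policy behaviour distribution — can be sketched in a minimal toy form. This is not the thesis's ARO-DDPG implementation: the single-state environment, the feature map, the exploration noise, and all step sizes below are assumptions chosen only to make the update equations concrete.

```python
import numpy as np

# Illustrative sketch (not the thesis's ARO-DDPG): average-reward
# off-policy deterministic actor-critic with linear function approximation.
rng = np.random.default_rng(0)
s_dim, a_dim = 3, 1

s = np.ones(s_dim)            # single-state toy problem, so a linear critic suffices
theta = np.zeros(s_dim)       # deterministic policy mu_theta(s) = theta^T s
w = np.zeros(s_dim + a_dim)   # linear critic Q_w(s, a) = w^T [s; a]
eta = 0.0                     # running estimate of the average reward

def mu(state):
    return np.array([theta @ state])

def phi(state, action):
    return np.concatenate([state, action])

alpha_w, alpha_theta, beta = 0.05, 0.01, 0.05
sigma = 0.5                   # exploration noise of the behaviour policy

for _ in range(3000):
    # Off-policy: the behaviour action is the target action plus Gaussian noise.
    a = mu(s) + sigma * rng.standard_normal(a_dim)
    r = -(a[0] - 1.0) ** 2    # assumed reward, maximised at action 1
    # Average-reward TD error: subtracting eta replaces discounting;
    # the bootstrap uses the target policy's action at the next state (here s itself).
    delta = r - eta + w @ phi(s, mu(s)) - w @ phi(s, a)
    eta += beta * delta                     # average-reward estimate update
    w += alpha_w * delta * phi(s, a)        # critic update
    # Deterministic policy gradient: grad_theta mu(s) * grad_a Q_w(s, a)|a=mu(s).
    grad_a_Q = w[s_dim:]                    # action block of the linear critic
    theta += alpha_theta * s * grad_a_Q[0]  # actor ascent step
```

The critic's action-gradient `w[s_dim:]` plays the role that the Q-network's gradient with respect to the action plays in DDPG-style methods; the key average-reward change is the `r - eta + ...` TD error in place of the discounted `r + gamma * ...` target.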