Adaptive Deep Reinforcement Learning for Robust Control of Uncertain Dynamic Systems

Authors

  • Zhe Liu, School of Automation Science and Electrical Engineering, Beihang University
  • Kevin M. Reynolds, Department of Aerospace Engineering, Texas A&M University
  • Bo Li, Department of Mechanical Engineering, The Hong Kong Polytechnic University

DOI:

https://doi.org/10.9999/ijair.v1i1.3

Keywords:

uncertain systems; adaptive control; robust control; deep reinforcement learning; actor–critic.

Abstract

Uncertainty is a defining feature of many control problems: parameters drift, actuators saturate or slow down, and disturbances do not follow a single tidy model. When the mismatch between a nominal model and the real plant grows, classical designs that work well in a narrow envelope can lose tracking quality or violate safety limits. This paper examines adaptive deep reinforcement learning for robust control of uncertain nonlinear systems. The control task is posed as a constrained continuous-control problem. We train an actor–critic policy over a family of randomized dynamics and augment the observation with lightweight identification cues extracted from short histories of state and input. At execution time, a small safety layer enforces hard command bounds. Across several uncertain benchmark systems, the resulting controller shows improved robustness to parameter drift and disturbance bursts, with lower violation rates than fixed-gain baselines. We also report sensitivity studies (randomization width, observation latency, and history length) and summarize engineering lessons that matter for deployment.
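The three ingredients named in the abstract (training over randomized dynamics, augmenting the observation with a short state/input history, and a hard safety clamp at execution time) can be illustrated with a minimal sketch. The toy mass-damper plant, the parameter ranges, and the stand-in proportional policy below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

class RandomizedPlant:
    """Toy mass-damper plant with randomized parameters (illustrative only)."""
    def __init__(self, rng):
        self.mass = rng.uniform(0.5, 2.0)     # randomized inertia
        self.damping = rng.uniform(0.1, 1.0)  # randomized damping
        self.x = np.array([1.0, 0.0])         # [position, velocity]

    def step(self, u, dt=0.05):
        pos, vel = self.x
        acc = (u - self.damping * vel) / self.mass
        self.x = np.array([pos + vel * dt, vel + acc * dt])
        return self.x.copy()

def safety_clamp(u, u_max=1.0):
    """Execution-time safety layer: enforce hard command bounds."""
    return float(np.clip(u, -u_max, u_max))

class HistoryAugmentedObs:
    """Augment the observation with a short state/input history,
    giving the policy lightweight identification cues."""
    def __init__(self, horizon=4, state_dim=2):
        self.buf = [np.zeros(state_dim + 1) for _ in range(horizon)]

    def update(self, state, u):
        self.buf.pop(0)
        self.buf.append(np.concatenate([state, [u]]))
        return np.concatenate(self.buf)  # flattened history fed to the policy

# Roll out a crude fixed-gain policy (a stand-in for the learned actor)
# over a small family of randomized plants.
rng = np.random.default_rng(0)
for _ in range(3):
    plant = RandomizedPlant(rng)
    obs = HistoryAugmentedObs()
    state = plant.x
    for _ in range(20):
        u_raw = -2.0 * state[0] - 0.5 * state[1]  # proportional stand-in actor
        u = safety_clamp(u_raw)                   # hard bound never exceeded
        state = plant.step(u)
        augmented = obs.update(state, u)          # policy input in training
```

In an actual actor–critic setup, `augmented` would be the policy network's input and the clamp would sit between the actor's output and the actuator, so the learned policy can never emit a command outside the hard bounds.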


Published

2026-01-30


How to Cite

Liu, Z., Reynolds, K. M., & Li, B. (2026). Adaptive Deep Reinforcement Learning for Robust Control of Uncertain Dynamic Systems. International Journal of Artificial Intelligence Research, 1(1). https://doi.org/10.9999/ijair.v1i1.3