Shangwei Zhao, Jingcheng Wang, Haotian Xu, Hongyuan Wang. Institute of Control, Robotics and Systems (ICROS), 2022. International Journal of Control, Automation, and Systems, Vol.20 No.4
In this paper, an approximate dynamic programming (ADP)-based approach is developed to handle the robust optimal tracking control problem for switched systems with uncertainties over a finite horizon. The switched systems with unknown matched uncertainties are reformulated using the system dynamics and reference trajectory, converting the complicated tracking problem into a stabilizing robust optimal control problem. To avoid requiring knowledge of the system dynamics, a neural network (NN)-based identifier is utilized to estimate the unknown switched system dynamics. Actor-critic NNs are constructed to approximate the optimal control input and the corresponding performance index, with the weights trained backward in time in an off-line manner. Using the Lipschitz continuity condition, the convergence of the proposed approach is proved, showing that the iterative approach converges to the unique solution for a sufficiently small sampling time interval. Finally, two numerical simulation cases are employed to verify the effectiveness of the proposed approach.
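The backward-in-time training idea above can be illustrated with a minimal sketch. The following uses a scalar linear-quadratic system solved by the discrete Riccati recursion, which is the closed-form special case of finite-horizon dynamic programming; the system parameters `a`, `b`, `q`, `r`, and horizon `N` are illustrative assumptions, and the paper's method replaces this closed form with NN approximators for switched nonlinear dynamics.

```python
# Minimal finite-horizon DP sketch: scalar system x_{k+1} = a*x_k + b*u_k
# with stage cost q*x^2 + r*u^2, solved backward in time.
# Illustrative parameters only -- not from the paper.
a, b, q, r, N = 0.9, 0.5, 1.0, 0.1, 50

p = q        # terminal value-function weight P_N
gains = []
for k in range(N - 1, -1, -1):
    # Backward-in-time update: feedback gain K_k and value weight P_k.
    kgain = (b * p * a) / (r + b * p * b)
    p = q + a * p * a - a * p * b * kgain
    gains.append(kgain)

gains.reverse()  # gains[k] gives the control law u_k = -gains[k] * x_k
```

After enough backward steps the gain settles near its stationary value, and the closed-loop factor `a - b*gains[0]` lies inside the unit interval, i.e., the resulting policy is stabilizing.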
Jiahui Xu, Jingcheng Wang, Jun Rao, Yanjiu Zhong, Shangwei Zhao. Institute of Control, Robotics and Systems (ICROS), 2022. International Journal of Control, Automation, and Systems, Vol.20 No.9
Recent achievements in the field of adaptive dynamic programming (ADP), together with the data resources and computational capabilities of modern control systems, have led to growing interest in learning-based and data-driven control technologies. This paper proposes a twin deterministic policy gradient adaptive dynamic programming (TDPGADP) algorithm to solve the optimal control problem for a discrete-time affine nonlinear system in a model-free scenario. To mitigate the overestimation problem resulting from function approximation errors, the minimum of the two Q-networks is taken to update the control policy. The convergence of the proposed algorithm is verified, with the value function serving as the Lyapunov function. By designing a twin actor-critic network structure and combining a target network with a specially designed adaptive experience replay mechanism, the algorithm is convenient to implement and the sample efficiency of the learning process is improved. Two simulation examples are conducted to verify the efficacy of the proposed method.
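The clipped double-Q idea described above (taking the minimum of two Q estimates to counter overestimation) can be sketched as follows. The linear feature weights `w1`, `w2` and the discount `gamma` are placeholders for illustration; the paper uses neural networks and its own update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent Q approximators (random linear features here, purely
# to illustrate the clipped target; the paper trains twin NN critics).
w1 = rng.normal(size=4)
w2 = rng.normal(size=4)

def q1(feat):
    return feat @ w1

def q2(feat):
    return feat @ w2

def clipped_target(reward, next_feat, gamma=0.99):
    # Taking the minimum of the two Q estimates counteracts the
    # overestimation bias introduced by function-approximation error.
    return reward + gamma * min(q1(next_feat), q2(next_feat))

feat = rng.normal(size=4)
t = clipped_target(1.0, feat)
```

By construction the clipped target never exceeds the target formed from either critic alone, which is what damps the positive bias that a single maximizing critic would accumulate.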