Locomotion Control of a Humanoid Robot Using a Biped Walking Pattern Based on Physical Interaction
조현민 (Hyun-Min Joe), Institute of Control, Robotics and Systems (ICROS), 2021, Journal of Institute of Control, Robotics and Systems Vol.27 No.9
This paper describes a biped walking pattern for physical interaction between a human and a humanoid robot. The study of physical interaction between robots and humans is an important research field when considering a future society in which humans and robots must coexist. In this paper, to move the humanoid robot in the direction intended by the human, a previous walking-pattern generation method is extended to one suitable for physical human-robot interaction. When the human applies a disturbance to the robot, the robot naturally moves in the direction of the applied disturbance to recover its balance. The applied disturbance is measured via the capture point. To stabilize the perturbed capture point, the desired Zero Moment Point (ZMP) is calculated using capture point control, and this desired ZMP is tracked by the walking pattern generator. The walking pattern generator automatically calculates a center-of-mass trajectory that satisfies the ZMP constraint via model predictive control. The proposed method was applied to the humanoid robot DRC-HUBO+ and verified to be usable for physical human-robot interaction.
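The capture-point-to-ZMP step the abstract describes can be sketched under the standard linear inverted pendulum assumptions. The gain `k` and the proportional control law below are illustrative choices for exposition, not the paper's exact controller:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def capture_point(com_pos, com_vel, com_height):
    """Instantaneous capture point of the linear inverted pendulum:
    xi = x + x_dot / omega, with natural frequency omega = sqrt(g / z_c)."""
    omega = math.sqrt(G / com_height)
    return com_pos + com_vel / omega

def desired_zmp(xi, xi_ref, omega, k=3.0):
    """Illustrative proportional capture-point controller: shift the desired
    ZMP to drive the measured capture point xi back toward its reference
    xi_ref, exploiting the capture point dynamics xi_dot = omega * (xi - p).
    The gain k is a hypothetical tuning parameter, not from the paper."""
    return xi_ref + (1.0 + k / omega) * (xi - xi_ref)
```

A pushed robot shows up as a perturbed capture point (`com_vel` jumps), so `desired_zmp` moves in the push direction; the model-predictive walking pattern generator then produces a center-of-mass trajectory whose ZMP tracks this reference.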
박신우 (Shin-woo Park), 조현민 (Hyun-min Joe), Korean Sensors Society, 2024, Journal of Sensor Science and Technology Vol.33 No.1
Humanoid robots, designed to operate in human environments, require stable mobility to ensure safety. When a humanoid robot falls, it can be damaged or broken; therefore, fall detection is critical to preventing the robot from falling. In practice, preventing a humanoid robot from falling requires an operator controlling a crane. For efficient and safe walking-control experiments, a system that can replace this crane operator is needed, and replacing the operator requires detecting the falling conditions of the robot. In this study, we propose fall detection methods using a Convolutional Neural Network (CNN) model. Image data of a humanoid robot are collected from various angles and environments. A large dataset is built by splitting video data into frames sampled every second, and data augmentation techniques are applied. The effectiveness of the proposed CNN model is verified in experiments with the humanoid robot MAX-E1.
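The per-second frame sampling and data augmentation described in the abstract can be sketched as follows. The list-based image representation, function names, and augmentation choices (horizontal flip, brightness shift) are assumptions for illustration, not the paper's implementation:

```python
import random

def extract_frames(video_frames, fps, sample_rate=1):
    """Sample `sample_rate` frames per second from a video given as a
    list of frames (hypothetical representation of the video data)."""
    step = max(1, fps // sample_rate)
    return video_frames[::step]

def augment(frame):
    """Two simple augmentations on a 2D list of pixel intensities (0-255):
    a horizontal flip and a random brightness shift, clipped to range."""
    flipped = [row[::-1] for row in frame]
    shift = random.randint(-20, 20)
    brightened = [[min(255, max(0, p + shift)) for p in row] for row in frame]
    return [flipped, brightened]
```

Each sampled frame thus yields several training images, which is how a comparatively small set of recordings can be grown into a large labeled dataset for the CNN fall classifier.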