Sabir Hossain, Oualid Doukhi, Yeonho Jo, Deok-Jin Lee. Institute of Control, Robotics and Systems (ICROS), 2020. Proceedings of the ICROS International Conference, Vol.2020 No.10
Deep reinforcement learning has become a front runner for solving problems in robot navigation and collision avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO simulation environment using deep reinforcement learning. Reshaped LiDAR data serve as the input to the neural architecture of the training network, and the paper presents a method for converting the LiDAR data into a 2D grid map for this input. It also reports test results from the trained network in different GAZEBO environments and describes the development of the embedded RC car's hardware and software systems: the hardware comprises a Jetson AGX Xavier, a Teensyduino, and a Hokuyo LiDAR; the software comprises ROS and Arduino C. Finally, the paper presents real-world test results using the model generated from the training simulation.
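The abstract's conversion of a 1D LiDAR scan into a 2D grid map could look roughly like the following. This is a minimal sketch, not the paper's exact procedure: the grid size, resolution, and sensor-at-center convention are assumptions for illustration.

```python
import numpy as np

def lidar_to_grid(ranges, angles, grid_size=64, max_range=10.0):
    """Project a 1D LiDAR scan into a square 2D occupancy grid.

    Illustrative parameters (not from the paper): the sensor sits at the
    grid center, and each cell covers (2 * max_range / grid_size) meters.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    res = (2 * max_range) / grid_size  # meters per cell
    for r, a in zip(ranges, angles):
        if not np.isfinite(r) or r >= max_range:
            continue  # no return, or obstacle beyond the mapped area
        x, y = r * np.cos(a), r * np.sin(a)   # beam endpoint in sensor frame
        col = int((x + max_range) / res)      # shift so the sensor is centered
        row = int((y + max_range) / res)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1  # mark the endpoint cell as occupied
    return grid
```

A grid like this can then be fed to a convolutional policy network in the same way an image would be.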
Real-Time Deep Learning for Moving Target Detection and Tracking Using Unmanned Aerial Vehicle
Oualid Doukhi, Sabir Hossain, Deok-Jin Lee. Institute of Control, Robotics and Systems (ICROS), 2020. Journal of Institute of Control, Robotics and Systems, Vol.26 No.5
Real-time object detection and tracking are crucial for many applications such as observation, surveillance, and search-and-rescue. Advances in computing devices have enabled many improvements in deep learning techniques for object detection and tracking. Building on these ideas, the YOLO deep learning visual object detection algorithm was utilized to visually guide the UAV to track the detected target. The detected target's bounding box and the image frame center were the main parameters used to control the forward motion, heading, and altitude of the vehicle. The proposed control approach consisted of two PID controllers that managed the heading and altitude rates. An NVIDIA Jetson TX2-based edge-computing module served as the real-time computing device, taking input data from onboard sensors such as the camera. A navigation system that operates entirely onboard the UAV, without external localization sensors or a GPS signal, is introduced; it uses a fisheye camera to perform visual SLAM for localization. The robustness and effectiveness of the proposed deep-learning-based target detection and tracking algorithms were verified through various simulation and real-time flight experiments.
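The bbox-to-control mapping the abstract describes — two PID controllers driven by the offset between the detected bounding-box center and the image frame center — can be sketched as below. The gains, sample time, and sign conventions are illustrative assumptions, not the paper's tuned values.

```python
class PID:
    """Textbook PID controller; gains here are illustrative, not the paper's."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


def track_command(bbox_center, frame_center, heading_pid, altitude_pid, dt=0.05):
    """Map the pixel offset between the detected bounding-box center and the
    image frame center to heading-rate and altitude-rate commands."""
    ex = bbox_center[0] - frame_center[0]  # horizontal offset -> heading rate
    ey = frame_center[1] - bbox_center[1]  # vertical offset -> altitude rate
    return heading_pid.step(ex, dt), altitude_pid.step(ey, dt)
```

In such a scheme, a target detected right of center yields a positive heading-rate command that yaws the vehicle toward it, and a target above center yields a positive climb-rate command.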
Autonomous UAV for Rescue Applications in Unknown Degraded Environments
Oualid Doukhi, Sabir Hossain, Amir Ramezani Dooraki, Yeonho Jo, Deok-Jin Lee. Korean Society of Mechanical Engineers (KSME), 2021. Proceedings of the KSME Spring and Autumn Conference, Vol.2021 No.5
Autonomous navigation and collision avoidance (ANCA) missions represent a fundamental challenge in robotics research: they are usually deployed in dynamic, unknown environments to perform specific tasks such as rescue and environment exploration, which demands a high level of autonomy and versatile decision-making capabilities. This challenge is even more acute for unmanned aerial vehicle (UAV) platforms due to their limited payload and computational capabilities. This paper presents a fully autonomous aerial robotic solution for executing complex ANCA missions in unstructured, unknown indoor or outdoor GPS-denied environments. The proposed system combines a complete hardware configuration with a flexible, optimized software architecture that allows high-level missions to be executed in a fully unsupervised manner (i.e., without human intervention). The approach relies on a robust monocular visual-inertial navigation system (MVINS) for full UAV state estimation in GPS-denied conditions. While the UAV performs the exploration task, a 2D object detector runs in real time to detect possible targets such as humans and radioactivity signs. Moreover, the locations of detected objects are estimated, and a semantic map is generated that contains the environment architecture along with the location and ID of each detected object.
Keywords: Aerial Robot, Autonomous System, Semantic Map, Sensor Fusion
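A semantic map of the kind the abstract describes — detected objects stored with an ID, a class label, and an estimated location — might be structured as follows. The field names and ID-assignment scheme are hypothetical; the paper does not specify its internal representation.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One detected target: a class label and an estimated world position."""
    label: str        # e.g. "human" or "radioactivity_sign"
    position: tuple   # estimated (x, y, z) in the map frame

@dataclass
class SemanticMap:
    """Minimal semantic-map sketch: integer IDs assigned in detection order."""
    objects: dict = field(default_factory=dict)
    next_id: int = 0

    def add(self, detection):
        """Register a detection and return the ID assigned to it."""
        obj_id = self.next_id
        self.objects[obj_id] = detection
        self.next_id += 1
        return obj_id
```

In a full system, entries like these would be overlaid on the occupancy or architectural map built during exploration.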