Elevator button tracking and localization for multi-storey navigation
Arpan Ghosh, Jeong-Won Pyo, Sung-Hyeon Joo, Tae-Yong Kuc. Institute of Control, Robotics and Systems (ICROS), 2021 ICROS International Conference Proceedings, Vol. 2021, No. 10
Elevator button recognition in an indoor multi-storey environment is a challenging task within the broader problem of indoor navigation for a mobile robot. In this paper, we integrate several computer vision approaches for button recognition and tracking in an indoor multi-storey environment. To overcome the difficulty of detecting elevator buttons, we propose a framework that combines various preprocessing techniques with object detection and tracking approaches. First, the single-shot object detector YOLOv3 locates the positions of the target buttons using an intersection-over-union (IoU) based approach and produces bounding boxes over the required objects. Then a part-based tracking algorithm, Deep-SORT, follows the detected buttons in real time to counter abrupt camera movements. Lastly, we take the bounding-box coordinates of the detected buttons and build a semantic map, which can recreate the complete layout of the button panel even from partially detected buttons or a frame containing only partial button information.
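The IoU matching step mentioned above can be sketched as follows. This is a minimal illustration of how intersection over union is computed between two bounding boxes, not the authors' implementation; the `(x1, y1, x2, y2)` box convention is an assumption.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2).

    NOTE: the corner-coordinate box format is an assumption for illustration.
    """
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically associated with a tracked button when its IoU against the track's last box exceeds a threshold (e.g. 0.5).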
Text and Sign Recognition for Indoor Localization
Arpan Ghosh, Jeongwon Pyo, Tae-Yong Kuc. Institute of Control, Robotics and Systems (ICROS), 2020 ICROS International Conference Proceedings, Vol. 2020, No. 10
In this paper, we propose a modular approach to estimate the position and rotation of a mobile robot more precisely in an indoor environment using text and sign recognition. The modular approach is performed in two stages, as shown in Figure 1. The first stage is the detection of regions containing text and various signs in the image, which is done by an object detection system. The second stage is character recognition, where the detected textual regions are passed to an optical character recognition (OCR) engine. This modular approach can be adapted to any mobile robot operating in an indoor environment containing texts and signs to help localize its position and rotation.
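One way the recognized text can feed into localization is by matching an OCR'd sign label against known sign positions on a map and computing a bearing to the sign. The sketch below is a hypothetical illustration of that final step; the `SIGN_MAP` labels and coordinates are invented for the example and are not from the paper.

```python
import math

# Hypothetical map: sign label -> known world coordinates (x, y) in meters.
SIGN_MAP = {"ROOM 101": (4.0, 2.0), "EXIT": (0.0, 8.0)}

def estimate_bearing(robot_xy, ocr_label):
    """Return the world-frame bearing (radians) from the robot to a recognized
    sign, or None when the OCR result does not match any mapped sign."""
    if ocr_label not in SIGN_MAP:
        return None
    sx, sy = SIGN_MAP[ocr_label]
    rx, ry = robot_xy
    return math.atan2(sy - ry, sx - rx)
```

Combined with the sign's position in the camera image, such bearings can constrain the robot's heading estimate.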
Object Removal and Inpainting from Image using Combined GANs
Jeongwon Pyo, Yuri Goncalves Rocha, Arpan Ghosh, Kwanghee Lee, Gungyo In, Taeyoung Kuc. Institute of Control, Robotics and Systems (ICROS), 2020 ICROS International Conference Proceedings, Vol. 2020, No. 10
As research on deep learning has been actively conducted in recent years, a number of deep learning methods have been proposed. In this paper, we propose a method of removing a desired object from an image using a generative adversarial network (GAN) structure. We compose a network in which two GANs are fused: the first GAN erases the target object from the input image, and the second GAN generates an image that fills the empty space with background. Through this network, we can erase the desired object from the input image and obtain an image with the erased region filled in with background, without any separate object detection method. We demonstrate the removal of people and vehicles from road images using the Cityscapes dataset.
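The two-stage composition (erase, then fill) can be illustrated with simple stand-in functions. The sketch below replaces each learned GAN with a trivial NumPy operation purely to show how the stages chain together; it is not the paper's network, and the mask-based erase/fill behavior is an assumed simplification.

```python
import numpy as np

def erase_stage(image, mask):
    """Stand-in for the first GAN: zero out pixels flagged by the object mask."""
    out = image.copy()
    out[mask] = 0.0
    return out

def inpaint_stage(image, mask):
    """Stand-in for the second GAN: fill the erased region with the mean
    of the unmasked (background) pixels."""
    out = image.copy()
    out[mask] = image[~mask].mean()
    return out

def remove_object(image, mask):
    """Chain the two stages, mirroring the fused two-GAN structure."""
    return inpaint_stage(erase_stage(image, mask), mask)
```

In the actual method, both stages are adversarially trained networks rather than fixed operations, and the fill is a generated background texture rather than a mean value.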
Kyeong-Jin Joo, Jeong-Won Pyo, Arpan Ghosh, Gun-Gyo In, Tae-Yong Kuc. Institute of Control, Robotics and Systems (ICROS), 2021 ICROS International Conference Proceedings, Vol. 2021, No. 10
This paper introduces a pallet recognition and rotation measurement algorithm for a logistics AGV, based on YOLOv3 and a depth camera. 2D data is collected from the camera, and the image coordinate system is transformed into the corresponding world coordinate system with the aid of depth data from a RealSense camera. Furthermore, by processing the point cloud with the RANSAC algorithm, we obtain the precise orientation and direction of the pallet. Finally, in the experimental results, an autonomous AGV running the proposed algorithm demonstrates path planning, loading, and unloading of the pallet, showing that the developed algorithm is applicable to autonomous driving of logistics AGVs.
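The image-to-world step described above relies on back-projecting a detected pixel with its depth value through the pinhole camera model. The sketch below shows that standard back-projection into the camera frame, assuming known intrinsics (focal lengths `fx`, `fy` and principal point `cx`, `cy`); the subsequent transform into the AGV's world frame would apply the camera's extrinsic pose, which is omitted here.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into camera-frame 3D
    coordinates using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    Intrinsics are assumed to be known from calibration."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel inside the detected pallet's bounding box yields the point cloud on which a RANSAC plane fit can then estimate the pallet's front-face orientation.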
Autonomous Navigation with Active SLAM for Disinfecting Robot
Kyeong-Jin Joo, Sang-Hyun Bae, Jeong-Won Pyo, Arpan Ghosh, Hyun-Jin Park, Tae-Yong Kuc. Institute of Control, Robotics and Systems (ICROS), 2022 ICROS International Conference Proceedings, Vol. 2022, No. 11
This paper introduces autonomous navigation using active SLAM for a disinfecting robot. For safe service and navigation, the robot is equipped with distance sensors and four cameras. To prevent the robot from getting stuck between columns and obstacles, we generate fake distance data from the LiDAR sensor. We also propose a planner that uses active SLAM to perform SLAM and path planning simultaneously, generating coordinates that allow the robot to navigate to the goal position. Furthermore, for safe sterilization with UV-C, human detection is crucial because UV-C radiation can be harmful to humans; therefore, a MobileNet-SSD is used to detect humans accurately at 15 FPS. Using these approaches, we present autonomous navigation with the A* and DWA algorithms for disinfection. Finally, through experiments in both simulation and a real environment, we verified our autonomous navigation system for the disinfecting robot. In particular, we confirmed with UV-C dosimeters that the disinfecting robot can sterilize effectively.
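The global planning component named above, A*, can be sketched on a 2D occupancy grid. This is a generic textbook A* with a Manhattan-distance heuristic, not the authors' planner; the 4-connected grid and unit step cost are assumptions for the example.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = occupied).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]  # priority queue ordered by f = g + h
    came_from = {}                  # child cell -> parent cell
    g_cost = {start: 0}             # best known cost-so-far per cell
    closed = set()
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:            # walk parents back to start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cell] + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_set, (ng + h((nr, nc)), (nr, nc)))
    return None
```

In a system like the one described, a global path from such a planner would be handed to DWA, which selects velocity commands that follow the path while avoiding nearby obstacles.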