Automatic Generation of a Simulated Robot from an Ontology-Based Semantic Description
Yuri Goncalves Rocha, Sung-Hyeon Joo, Eun-Jin Kim, Tae-Yong Kuc. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2019 ICROS International Conference, Vol. 2019, No. 10.
Humans are capable of generating simulated mental worlds based on their past experiences and of using such environments for prospection, planning, and learning. Such capabilities could enhance current robotic systems, allowing them to plan ahead based on predicted outcomes and even to compare their performance with that of a different agent. In this work, we propose a semantic robot modeling framework that expresses intrinsic semantic knowledge in order to better represent the robot and its surrounding environment. We also show that such data can be used to automatically generate a simulated model, allowing robots to simulate themselves and other modeled agents.
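As a hedged illustration of the last claim (not the paper's actual pipeline), the sketch below maps a hypothetical ontology-derived robot description to a minimal SDF-style model string. The `semantic_robot` fields and the `to_sdf` helper are assumptions made for illustration only.

```python
import xml.etree.ElementTree as ET

# Hypothetical per-link facts, as a query over an ontology-based semantic
# description might return them (names and fields are illustrative).
semantic_robot = {
    "name": "mobile_base",
    "links": [
        {"name": "base_link", "shape": "box", "mass": "5.0",
         "geometry": {"size": "0.4 0.3 0.1"}},
        {"name": "lidar_link", "shape": "cylinder", "mass": "0.2",
         "geometry": {"radius": "0.05", "length": "0.04"}},
    ],
}

def to_sdf(desc):
    """Render the semantic description as a minimal SDF-style model string."""
    model = ET.Element("model", name=desc["name"])
    for link in desc["links"]:
        link_el = ET.SubElement(model, "link", name=link["name"])
        inertial = ET.SubElement(link_el, "inertial")
        ET.SubElement(inertial, "mass").text = link["mass"]
        visual = ET.SubElement(link_el, "visual", name=link["name"] + "_visual")
        shape_el = ET.SubElement(ET.SubElement(visual, "geometry"), link["shape"])
        for prop, value in link["geometry"].items():
            ET.SubElement(shape_el, prop).text = value
    return ET.tostring(model, encoding="unicode")

sdf = to_sdf(semantic_robot)
```

Because the output is plain model markup, the same description could in principle be re-rendered for different simulators by swapping the serializer.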
Object Removal and Inpainting from Image using Combined GANs
Jeongwon Pyo, Yuri Goncalves Rocha, Arpan Ghosh, Kwanghee Lee, Gungyo In, Taeyoung Kuc. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2020 ICROS International Conference, Vol. 2020, No. 10.
As research on deep learning has been actively conducted in recent years, a number of deep learning methods have been proposed. In this paper, we propose a method for removing a desired object from an image using a generative adversarial network (GAN) structure. We compose a network in which two GANs are fused: the first GAN erases the target object from the input image, and the second GAN fills the resulting empty region with background. Through this network, we can erase the desired object from the input image and obtain an image in which the erased region is filled with background, without any separate object detection method. We demonstrate the removal of people and vehicles from road images using the Cityscapes dataset.
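The two-stage composition described above can be sketched in miniature with toy stand-ins for the two generators; the real networks are learned, and `erase_gan` and `inpaint_gan` here are illustrative placeholders, not the paper's models. The first stage outputs a mask over the target object, and the second fills the masked pixels from background context.

```python
import numpy as np

def erase_gan(image, target_value):
    """Toy 'eraser' stage: mask every pixel equal to the target class value."""
    return (image == target_value).astype(image.dtype)

def inpaint_gan(image, mask):
    """Toy 'inpainting' stage: fill masked pixels from the unmasked background
    (here, just the mean of the unmasked pixels)."""
    out = image.copy()
    out[mask == 1] = image[mask == 0].mean()
    return out

def remove_object(image, target_value):
    """Fused pipeline: erase the target, then inpaint the hole."""
    mask = erase_gan(image, target_value)
    return inpaint_gan(image, mask)

# Background pixels are 1.0; the value 9.0 plays the role of the object.
img = np.array([[1.0, 1.0, 9.0],
                [1.0, 9.0, 1.0],
                [1.0, 1.0, 1.0]])
result = remove_object(img, 9.0)  # the 9.0 "object" pixels become background
```

The point of the sketch is the interface: the second stage never needs to know which object was removed, only where the hole is.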
TOSM-Based Scene Encoding Using a Semantic Descriptor
Hyun-Uk Lee, Yuri Goncalves Rocha, Sung-Hyeon Joo, Sang-Hyeon Bae, Sumaira Manzoor, Tae-Yong Kuc. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2020 ICROS International Conference, Vol. 2020, No. 10.
Both semantic mapping and scene understanding methods can be improved by using an information-rich encoding able to subsume sensory, topological, and semantic data. The Triplet Ontological Semantic Model (TOSM) was originally inspired by findings from cognitive science and tries to mimic the brain's storage and mapping capabilities. In this work, we propose a method to encode a scene into the TOSM representation using only an RGB-D image of the scene as input. We combine a state-of-the-art deep neural network with a simple yet information-rich semantic descriptor to encode the scene data.
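As a hedged sketch of the general idea (the field names and predicates below are illustrative, not TOSM's actual schema), per-object detections from one RGB-D frame can be folded into subject-predicate-object triples, including a simple derived spatial relation:

```python
from math import dist

# Illustrative detector output for one RGB-D frame: a class label plus a 3-D
# position recovered from depth (these fields are assumptions for the sketch).
detections = [
    {"label": "cup", "position": (0.2, 0.0, 0.8)},
    {"label": "table", "position": (0.0, 0.0, 0.7)},
]

def encode_scene(dets, scene_id="scene_01"):
    """Fold detections into subject-predicate-object triples."""
    triples = []
    for d in dets:
        triples.append((scene_id, "contains", d["label"]))
        triples.append((d["label"], "hasPosition", d["position"]))
    # Derive one simple spatial relation: A is on top of B when A sits
    # slightly above B and their horizontal positions are close.
    for a in dets:
        for b in dets:
            if a is b:
                continue
            (ax, ay, az), (bx, by, bz) = a["position"], b["position"]
            if 0.0 < az - bz < 0.3 and dist((ax, ay), (bx, by)) < 0.5:
                triples.append((a["label"], "isOnTopOf", b["label"]))
    return triples

triples = encode_scene(detections)
```

Storing the scene as triples like these is what makes it queryable alongside the topological and semantic layers the abstract mentions.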