Edwin Valarezo Anazco, Patricio Rivera Lopez, Na Hyeon Park (박나현), Ji Heon Oh (오지헌), Ga Hyeon Ryu (류가현), Tae-Seong Kim (김태성). Proceedings of the Korean Institute of Communication Sciences (KICS) Conference, Vol. 2021, No. 2, 2021.
Fully autonomous object grasping with robotic hands is under active investigation because it requires both autonomous vision and motor control. Vision allows a robotic hand to interact with the environment by estimating the grasping parameters (i.e., grasping position and orientation) for manipulation. Motor control generates the motion parameters needed to reach an object and manipulate it (e.g., grasping and relocation). In this work, deep-learning RGB-D vision is used to detect the object and generate the grasping parameters of position and orientation. An anthropomorphic robotic hand system composed of a UR3 robotic arm and a qb SoftHand is used for the motor functions of object grasping and relocation. Our autonomous object manipulation system first detects and locates an object in RGB images using Fast R-CNN. Then, a partial depth view of the object is generated to estimate its grasping position and orientation. Finally, the robotic hand system is used to grasp and relocate the object. The system is validated by grasping and relocating single objects, a box and a ball. For the box, our system achieves 8/10 successful grasps and 7/10 successful relocations; for the ball, 10/10 successful grasps and relocations.
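Since the abstract outlines a concrete two-stage vision pipeline (RGB detection, then grasp-parameter estimation from a partial depth view), the following minimal Python sketch illustrates the flow. It assumes torchvision's Faster R-CNN as a stand-in for the paper's Fast R-CNN detector, and the centroid-plus-PCA grasp estimator is a hypothetical illustration, not the authors' method; camera intrinsics fx, fy, cx, cy are assumed known. The final motor step (commanding the UR3 and qb SoftHand) is omitted because the abstract does not specify a control interface.

import numpy as np
import torch
import torchvision

def detect_object(rgb: np.ndarray, score_thresh: float = 0.8):
    """Return the highest-scoring bounding box (x1, y1, x2, y2), or None.

    A pretrained torchvision Faster R-CNN stands in for the paper's
    detector; loading the model per call is for brevity only.
    """
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    img = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0  # HWC uint8 -> CHW [0,1]
    with torch.no_grad():
        out = model([img])[0]
    keep = out["scores"] > score_thresh
    if not keep.any():
        return None
    best = out["scores"][keep].argmax()
    return out["boxes"][keep][best].numpy().astype(int)

def estimate_grasp(depth: np.ndarray, box, fx, fy, cx, cy):
    """Estimate grasp position and orientation from the partial depth view.

    Position: 3-D centroid of the valid depth pixels inside the box,
    back-projected with the pinhole model. Orientation: yaw angle of the
    principal axis of the pixel region (PCA via SVD) -- an assumption
    standing in for the paper's estimator.
    """
    x1, y1, x2, y2 = box
    crop = depth[y1:y2, x1:x2]
    v, u = np.nonzero(crop > 0)            # valid depth pixels in the crop
    if v.size == 0:
        return None
    z = crop[v, u]
    X = (u + x1 - cx) * z / fx             # back-project to camera frame
    Y = (v + y1 - cy) * z / fy
    position = np.array([X.mean(), Y.mean(), z.mean()])
    pts = np.stack([u, v], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    yaw = np.arctan2(vt[0, 1], vt[0, 0])   # angle of the principal axis
    return position, yaw

In practice the returned position and yaw would be transformed from the camera frame to the robot base frame and passed to the arm's motion planner; that hand-eye calibration step is application-specific and left out here.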