Sangdoo Yun, Jin Young Choi. The Institute of Electronics and Information Engineers (IEIE), 2015 IEIE Conference, Vol.2015 No.6
In this paper, we propose a novel framework for estimating 3D cuboids from RGB-D cameras. Since 3D object recognition models are usually heavy, matching them against the full 3D space is a bottleneck. To address this problem, we propose a cuboid estimation method. Because it efficiently prunes non-object regions of the 3D space, it enables a significant speed-up of 3D object recognition. Our framework first learns distinctive key-points of the objects. We then train a 3D cuboid voting model, and finally refine the estimated cuboids by finding the minimal bounding cuboids. Quantitative and qualitative experimental results demonstrate the efficiency and accuracy of our method.
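The final refinement step above shrinks each voted cuboid to the minimal bounding cuboid of its supporting points. As a rough illustration (not the authors' code, and assuming axis-aligned cuboids), that refinement can be sketched as:

```python
def min_bounding_cuboid(points):
    """Minimal axis-aligned bounding cuboid of 3D points.

    points: iterable of (x, y, z) tuples.
    Returns (min_corner, max_corner), the two opposite corners.
    """
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Points voted as belonging to one object (toy data):
lo, hi = min_bounding_cuboid([(0, 1, 2), (3, -1, 5), (1, 0, 0)])
# lo == (0, -1, 0), hi == (3, 1, 5)
```

In practice an oriented (rotated) cuboid would fit objects more tightly; the axis-aligned version is shown only because it keeps the refinement idea to a few lines.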
Kimin Yun, Sangdoo Yun, Jin Young Choi. The Institute of Electronics and Information Engineers (IEIE), 2015 IEIE Conference, Vol.2015 No.6
In this paper, we propose a group violence detection framework that considers motion interaction between objects. Unlike previous works, our method does not need precise object information. We use a field-like interaction feature and build a model of normal behavior through sparsity-based learning. Additionally, we measure the continuity of the interaction feature field to improve detection performance. In experiments, qualitative and quantitative results show that our method outperforms state-of-the-art methods.
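The abstract does not specify the exact form of the field-like interaction feature or the continuity measure, so the following is a hypothetical sketch of the general idea: each object accumulates an interaction value from its neighbors' relative motion (weighted by inverse distance), and continuity is measured as the frame-to-frame change of that field.

```python
import math

def interaction_field(positions, velocities):
    """Toy field-like interaction feature (illustrative assumption):
    for each object, sum neighbors' relative-speed magnitudes
    weighted by inverse distance. positions/velocities: lists of
    (x, y) tuples, one per object."""
    feats = []
    for i, (pi, vi) in enumerate(zip(positions, velocities)):
        f = 0.0
        for j, (pj, vj) in enumerate(zip(positions, velocities)):
            if i == j:
                continue
            dist = math.dist(pi, pj)
            rel_speed = math.hypot(vi[0] - vj[0], vi[1] - vj[1])
            f += rel_speed / (dist + 1e-6)  # nearby fast relatives interact strongly
        feats.append(f)
    return feats

def field_continuity(prev_feats, cur_feats):
    """Mean absolute change of the field between consecutive frames;
    a small value means the field evolves smoothly (normal motion)."""
    return sum(abs(a - b) for a, b in zip(prev_feats, cur_feats)) / len(cur_feats)
```

An abrupt jump in `field_continuity` between frames would then flag a candidate violent event; the actual paper learns what "normal" looks like via sparsity-based learning rather than a fixed threshold.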
MinSu Kwon, Sangdoo Yun, Jin Young Choi. The Institute of Electronics and Information Engineers (IEIE), 2015 IEIE Conference, Vol.2015 No.6
This study investigates detecting and localizing objects (chairs) in depth images of indoor scenes, an important part of indoor scene understanding. We perform detection in the real 3D point cloud to handle the scaling and accurate-localization problems that are hard for conventional 2D detection. To obtain a robust classification model, we synthesize the object's depth images from a computer graphics model, and we vary the model over the object's angle, aspect ratio, and scale for robust detection. In an experiment on images from the NYU V2 dataset, the average precision of our model is 69%.
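The 69% figure above is an average precision (AP) score. For readers unfamiliar with the metric, here is a minimal standard AP computation (not the authors' evaluation code): detections are ranked by confidence, and AP is the mean of the precision values at each true-positive rank.

```python
def average_precision(ranked_labels):
    """AP over detections sorted by descending confidence.

    ranked_labels: list of 1 (true positive) / 0 (false positive).
    Returns the mean of precision@rank taken at each true positive.
    """
    tp, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            tp += 1
            precisions.append(tp / rank)  # precision at this recall point
    return sum(precisions) / len(precisions) if precisions else 0.0

# e.g. hits at ranks 1, 3, 4 out of 5 detections:
ap = average_precision([1, 0, 1, 1, 0])
# mean(1/1, 2/3, 3/4) ≈ 0.8056
```

Benchmark protocols (e.g. PASCAL VOC) add an IoU threshold for deciding which detections count as true positives and interpolate the precision-recall curve, but the core quantity is the same.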