원웅재(Woong-Jae Won), 손준우(JoonWoo Son), 정우영(Wooyoung Jung), Korean Society of Automotive Engineers (KSAE), 2009 KSAE Divisional Conference, Vol.2009 No.4
In this paper, we propose a simple vehicle in-out object detection model for implementing a vision-based interactive intelligent vehicle. To simply localize in-out target objects, we consider a target-object feature-maximizing method that reflects the color characteristics of the target. Moreover, we adopt Gaussian-pyramid-based center-surround and normalization algorithms that not only reinforce the target object area but also suppress the influence of background noise. To describe the proposed model and conduct experiments, we limit the targets to two objects: road traffic signs for the vehicle-out case and the driver's face for the vehicle-in case. Experimental results show that the proposed model successfully localizes task-specific in-out target object areas.
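The center-surround and normalization step this abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, kernel size, pyramid depth, and center-surround level pairs are all assumptions.

```python
import numpy as np

def gaussian_blur(img, k=5, sigma=1.0):
    # separable Gaussian filtering with reflect padding
    ax = np.arange(k) - k // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    p = np.pad(img, k // 2, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="valid"), 0, rows)

def center_surround(feature, levels=3):
    # Gaussian pyramid: repeatedly blur and downsample by 2
    pyr = [feature.astype(float)]
    for _ in range(levels):
        pyr.append(gaussian_blur(pyr[-1])[::2, ::2])
    h, w = feature.shape
    sal = np.zeros((h, w))
    # center-surround: |center level - upsampled surround level|
    for c, s in [(0, 2), (0, 3), (1, 3)]:
        cen = np.kron(pyr[c], np.ones((2**c, 2**c)))[:h, :w]
        sur = np.kron(pyr[s], np.ones((2**s, 2**s)))[:h, :w]
        d = np.abs(cen - sur)
        sal += d / (d.max() + 1e-8)  # per-map normalization
    return sal / sal.max()
```

A bright target region against a uniform background produces a strong response in the combined map, while large smooth areas are suppressed, which is the effect the abstract attributes to the center-surround and normalization stage.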
원웅재(Woong-Jae Won), 김태훈(Tae Hun Kim), 최민국(Min-Kook Choi), 권순(Soon Kwon), Korean Society of Automotive Engineers (KSAE), 2018 KSAE Divisional Conference, Vol.2018 No.6
Real-time driving scene understanding systems have received growing attention from the autonomous driving research community following the advent of deep learning technology. In this paper, we propose a real-time multi-object detection model based on a convolutional neural network (CNN). To reduce the computational load of multi-scale object detection, we adopt a Rezoom layer instead of conventional approaches based on multi-scale templates (anchors) and multi-scale features. Moreover, to improve detection performance for occluded and small objects in road scenes, we add a simple aggregation layer that preserves small-receptive-field feature information in the deep CNN feature domain. Experimental results on the KITTI dataset show that the proposed model successfully detects multiple objects in road driving scenes.
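The aggregation layer mentioned above can be illustrated with a minimal numpy sketch: an early, high-resolution feature map (small receptive field) is pooled to the deep map's resolution and concatenated channel-wise. The shapes, pooling choice, and function names are assumptions, not the paper's architecture.

```python
import numpy as np

def max_pool2d(x, k):
    # x: (C, H, W); non-overlapping k x k max pooling
    c, h, w = x.shape
    return x[:, :h - h % k, :w - w % k].reshape(
        c, h // k, k, w // k, k).max(axis=(2, 4))

def aggregate(shallow, deep):
    # shallow: (C1, H, W) early feature map with a small receptive field
    # deep:    (C2, H/s, W/s) deep feature map
    s = shallow.shape[1] // deep.shape[1]
    pooled = max_pool2d(shallow, s)           # align spatial resolution
    return np.concatenate([pooled, deep], 0)  # channel-wise aggregation
```

The detection head would then operate on the concatenated map, so fine-grained responses from the shallow features survive alongside the deep semantic features, which is the motivation the abstract gives for small/occluded objects.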
A Study on Training Object Recognition Networks Using Public Datasets Based on Semi-Supervised Learning
최민국(Min-Kook Choi), 박재형(Jaehyeong Park), 이진희(Jin-Hee Lee), 원웅재(Woong Jae Won), 김진철(Jincheol Kim), 권순(Soon Kwon), Institute of Electronics and Information Engineers (IEIE), 2018 IEIE Conference, Vol.2018 No.6
Recently, the accuracy of visual recognition techniques using deep learning has greatly improved thanks to better training strategies. We introduce a case study in which a deep-learning-based object recognition network is trained on a public dataset and applied to an autonomous driving application. In this work, we used the MS-COCO detection 2017 dataset for training and evaluation of the object recognition network. To improve generalization performance, we applied semi-supervised training using co-occurrence matrix analysis to deformable convolutional neural networks (D-ConvNets). The proposed semi-supervised learning (SSL) technique yields a quantitative performance improvement, and we also confirmed qualitatively encouraging results in various situations in a real vehicle environment.
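One way co-occurrence matrix analysis can support semi-supervised training is to keep a low-confidence pseudo-label only when its class plausibly co-occurs with confidently detected classes in the same image. The following is a hedged sketch of that idea; the functions, threshold, and acceptance rule are assumptions and not taken from the paper.

```python
import numpy as np

def cooccurrence(label_sets, n_classes):
    # count how often class pairs appear in the same image
    M = np.zeros((n_classes, n_classes))
    for labels in label_sets:
        for a in labels:
            for b in labels:
                if a != b:
                    M[a, b] += 1
    # row-normalize to an estimate of P(b present | a present)
    row = M.sum(1, keepdims=True)
    return np.divide(M, row, out=np.zeros_like(M), where=row > 0)

def accept_pseudo_label(cand, confident, P, thresh=0.1):
    # keep a low-confidence detection if it plausibly co-occurs
    # with at least one confidently detected class in the image
    return any(P[c, cand] >= thresh for c in confident)
```

Pseudo-labels accepted this way could then be added to the training set for the next round, which matches the general shape of SSL pipelines that filter noisy labels by label statistics.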
Driver Gaze Region Estimation Using a 2D+3D Active Appearance Model
최현철(Hyun-Chul Choi), 김삼용(Sam-Yong Kim), 오상훈(Sang-Hoon Oh), 오세영(Se-Young Oh), 원웅재(Woong-Jae Won), Korean Society of Automotive Engineers (KSAE), Proceedings of the 2007 KSAE Spring/Autumn Conference, Vol.- No.-
This paper proposes using an active appearance model (AAM) to track the driver's gazing direction and estimate the gazing region in front of the driver's seat for a vision-based in-vehicle driver assistance system. Many face tracking algorithms have been developed and are widely used as human-machine interface methods. Among them, the 2D+3D active appearance model is a very robust 3D face tracking technique. It provides full information about the face, including the size, position, and pose of face features such as the eyes, mouth, and nose. In our proposed system, the 3D mesh and its alignment for the tracked face are obtained from the AAM fitting procedure, and the gazing direction is assumed to be the frontal direction of the tracked face. This direction is used to estimate the gazing region in front of the driver's seat, which is divided into four regions: front window, left side mirror, right side mirror, and instrument cluster. Experimental results show that the proposed method performs well in estimating the driver's gazing region.
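The final mapping from tracked head pose to one of the four regions can be illustrated as a simple angular classifier. The angular thresholds below are hypothetical; the paper does not state them, and only the four region names come from the abstract.

```python
# hypothetical angular thresholds (degrees), relative to straight ahead:
# yaw > 0 means looking left, pitch < 0 means looking down
def gaze_region(yaw, pitch):
    if yaw > 30:
        return "left side mirror"
    if yaw < -30:
        return "right side mirror"
    if pitch < -15:
        return "instrument cluster"
    return "front window"
```

In the described system, yaw and pitch would come from the pose of the 3D mesh produced by AAM fitting, so the classifier reduces to checking the face's frontal direction against fixed region boundaries.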
Construction of a Dataset and Evaluation Environment for Developing Vision-Based End-to-End Autonomous Driving Algorithms
권순(Soon Kwon), 박재형(Jaehyeong Park), 정희철(Heechul Jung), 정지훈(Jihun Jung), 최민국(Min-Kook Choi), Iman R. T.(Iman R. Tayibnapis), 이진희(Jin-Hee Lee), 원웅재(Woong-Jae Won), 김광회(Kwang-Hoe Kim), 윤성훈(Sung-Hoon Youn), 김태훈(Tae Hun Kim), Institute of Electronics and Information Engineers (IEIE), 2018 IEIE Conference, Vol.2018 No.6
In this paper, we constructed a public dataset for training and evaluating algorithm models for vision-based autonomous steering control (V-ASC), and built a benchmark environment that provides qualitative and quantitative evaluation results. We also developed a baseline V-ASC model based on handcrafted features and a newly proposed convolutional neural network (CNN) based end-to-end driving model to verify the evaluation environment of the constructed dataset and simulator. Through comparative evaluation between the models, we confirmed that the proposed evaluation framework is effective for performance analysis of V-ASC.
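Quantitative evaluation of a steering-control model typically compares predicted and ground-truth steering angles per frame. As a hedged sketch of what such a benchmark might compute (the metric set and tolerance below are assumptions, not the paper's specification):

```python
import numpy as np

def steering_metrics(pred, gt, tol_deg=5.0):
    # pred, gt: per-frame steering angles in degrees
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    err = pred - gt
    return {
        "mae": float(np.mean(np.abs(err))),           # mean absolute error
        "rmse": float(np.sqrt(np.mean(err ** 2))),    # root mean squared error
        # fraction of frames within an angular tolerance
        "accuracy": float(np.mean(np.abs(err) <= tol_deg)),
    }
```

Running the handcrafted-feature baseline and the end-to-end CNN model through the same metric function over the dataset would yield the kind of comparative, quantitative results the benchmark environment is described as providing.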