Real-Time Implementation of Human Detection in Thermal Imagery Based on CNN
Nazeer Shahid, 유광현, Tan Dat Trinh, 신도성, 김진영. Korean Institute of Information Technology (KIIT), 2019, Journal of KIIT, Vol. 17, No. 1
In this paper, an effective human detection method for thermal imagery is proposed using background modeling and a convolutional neural network (CNN). For real-time operation, background modeling is performed with a modified running Gaussian average, and CNN-based human classification is applied only to the detected foreground objects. To improve detection accuracy, morphological operators and ellipse testing are used to extract regions of interest (ROIs). In addition, three CNN models with different input sizes, combined through a voting method, are trained on our own dataset. The whole system is implemented in C++ and processes more than 30 fps with high accuracy.
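The background modeling step described above can be sketched as follows. This is a minimal, illustrative implementation of a running Gaussian average with selective updating, not the paper's exact method; the learning rate `alpha`, the threshold factor `k`, and the initial variance are assumed values chosen for the example.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian average background model (illustrative sketch)."""

    def __init__(self, first_frame, alpha=0.01, k=2.5):
        self.mean = first_frame.astype(np.float64)          # background mean
        self.var = np.full_like(self.mean, 25.0)            # assumed initial variance
        self.alpha = alpha                                  # learning rate (assumed)
        self.k = k                                          # threshold in std devs (assumed)

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(np.float64)
        diff = np.abs(frame - self.mean)
        # A pixel is foreground if it deviates more than k standard deviations.
        foreground = diff > self.k * np.sqrt(self.var)
        # Selective update: adapt mean and variance only at background pixels,
        # so foreground objects are not absorbed into the model.
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground
```

In practice, the resulting foreground mask would then be cleaned with morphological operators before ROI extraction, as the abstract describes.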
Real-Time Human Detection in Thermal Video Based on Convolutional Neural Networks
Nazeer Shahid, Gwang-Hyun Yu, Seong-Min Hwang, Vo Hoang Trong, Jin-Young Kim. The Institute of Electronics and Information Engineers (IEIE), 2018, Proceedings of the IEIE Conference, Vol. 2018, No. 11
In this paper, we propose a Convolutional Neural Network (CNN)-based human classification technique that operates efficiently in real time. Background subtraction is performed using an improved running Gaussian average to obtain the initial background model. The background is updated through selective updating and random selection of background pixels from every new frame. Morphological operations are applied to extract ROIs from each frame. For classification, a CNN model is trained and tested on our own dataset. To deploy the model in a real-time application, we remove nodes without weights from the computational graph and convert the remaining weights to constants. With this trained CNN model, each ROI is classified as human or non-human in real time. The processing time depends on the number of ROIs in the frame; for our test data, the average processing speed is 25 fps.
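The random background-pixel update mentioned above can be sketched like this. It is an assumption-laden illustration: with some small probability `p` (an assumed value, not from the paper), a pixel classified as background in the new frame replaces the stored background value, letting the model slowly absorb gradual scene changes.

```python
import numpy as np

def update_background(background, frame, foreground_mask, p=1 / 16, seed=None):
    """Randomly absorb background pixels from the new frame (illustrative sketch).

    background      : current background image (2-D array)
    frame           : new input frame (same shape)
    foreground_mask : boolean mask, True where the frame was classified foreground
    p               : per-pixel update probability (assumed value)
    """
    rng = np.random.default_rng(seed)
    updated = background.copy()
    # Only pixels classified as background are candidates for updating.
    candidates = ~foreground_mask
    # Each candidate pixel is absorbed with probability p.
    chosen = candidates & (rng.random(frame.shape) < p)
    updated[chosen] = frame[chosen]
    return updated
```

Keeping foreground pixels out of the update prevents a slow-moving person from being learned into the background model.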
Weed Image Classification Using Various Convolutional Neural Network Approaches
Vo Hoang Trong, Gwang-Hyun Yu, Nazeer Shahid, Seong-Min Hwang, Jin-Young Kim. The Institute of Electronics and Information Engineers (IEIE), 2018, Proceedings of the IEIE Conference, Vol. 2018, No. 11
In this paper, we present a multimodel approach to weed classification. We apply transfer learning to the VGG16, Inception-ResNet, and MobileNet convolutional neural networks (CNNs) separately. We then combine the probabilities returned by each model and vote by scoring the classes, choosing the class with the highest score as the final classification. Experiments on our own weed dataset achieve 95.927% accuracy with this voting-based fusion.