An FPGA-based Parallel Hardware Architecture for Real-time Eye Detection
Dongkyun Kim, Junhee Jung, Thuy Tuong Nguyen, Daijin Kim, Munsang Kim, Key Ho Kwon, Jae Wook Jeon. 대한전자공학회, 2012. Journal of Semiconductor Technology and Science, Vol. 12, No. 2
Eye detection is widely used in applications such as face recognition, driver behavior analysis, and human-computer interaction. However, it is difficult to achieve real-time performance with software-based eye detection in an embedded environment. In this paper, we propose a parallel hardware architecture for real-time eye detection. We use the AdaBoost algorithm with the modified census transform (MCT) to detect eyes in a face image. We parallelize part of the algorithm to speed up processing: several downscaled pyramid images of the eye candidate region are generated in parallel from the input face image, allowing the left and right eyes to be detected simultaneously. The sequential data-processing bottleneck caused by repetitive operations is removed by employing a pipelined parallel architecture. The proposed architecture is designed in Verilog HDL and implemented on a Virtex-5 FPGA for prototyping and evaluation. The proposed system can detect eyes within 0.15 ms in a VGA image.
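The modified census transform mentioned in the abstract encodes each pixel by comparing its 3×3 neighborhood against the neighborhood mean, yielding a 9-bit, illumination-robust descriptor. A minimal software sketch of that transform (illustrative only; the paper's pipelined FPGA implementation is not reproduced here, and the bit ordering is an assumption):

```python
import numpy as np

def modified_census_transform(img):
    """Modified census transform (MCT) over 3x3 neighborhoods.

    Each interior pixel is encoded as a 9-bit value: bit k is set when
    the k-th neighborhood pixel (row-major order, an assumed ordering)
    exceeds the mean of the 3x3 neighborhood.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    # The nine shifted views of the image cover the 3x3 neighborhood
    # of every interior pixel.
    patches = [img[dy:h - 2 + dy, dx:w - 2 + dx]
               for dy in range(3) for dx in range(3)]
    mean = sum(patches) / 9.0
    for bit, p in enumerate(patches):
        # Strict comparison against the local mean sets one bit per
        # neighborhood position.
        out |= (p > mean).astype(np.uint16) << bit
    return out
```

Because the descriptor depends only on comparisons against the local mean, it is invariant to monotonic brightness shifts, which is what makes MCT features attractive for cascaded AdaBoost classifiers.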
Human-Robot Interaction in Real Environments by Audio-Visual Integration
Hyun-Don Kim, Jong-Suk Choi, Munsang Kim. 대한전기학회, 2007. International Journal of Control, Automation, and, Vol. 5, No. 1
In this paper, we developed a reliable sound localization system with a VAD (Voice Activity Detection) component using three microphones, as well as a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, compensating for errors in speaker localization and effectively rejecting unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system in a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how the audio-visual components are integrated.
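Microphone-array sound localization of the kind described above is commonly based on the time difference of arrival (TDOA) between a microphone pair. A minimal sketch of that idea for a single pair, using cross-correlation to find the delay (illustrative only; the paper's three-microphone geometry and VAD integration are not reproduced, and the function name and parameters are assumptions):

```python
import numpy as np

def estimate_doa(sig_a, sig_b, fs, mic_dist, c=343.0):
    """Estimate the direction of arrival (degrees) for one microphone
    pair from the time difference of arrival (TDOA).

    sig_a, sig_b : signals from the two microphones
    fs           : sample rate in Hz
    mic_dist     : microphone spacing in meters
    c            : speed of sound in m/s
    """
    # Peak of the full cross-correlation gives the delay in samples;
    # a positive lag means sig_a is delayed relative to sig_b.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tdoa = lag / fs
    # Far-field model: tdoa = mic_dist * sin(theta) / c. Clamp to the
    # physically valid range before taking the arcsine.
    sin_theta = np.clip(tdoa * c / mic_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```

With three microphones, as in the paper's setup, pairwise estimates like this can be combined to resolve the front-back ambiguity that a single pair leaves open.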