Van Khien Pham, Soo-Hyung Kim, Hyung-Jeong Yang, Guee-Sang Lee. The Korean Institute of Smart Media, 2017, Smart Media Journal Vol.6 No.4
In this paper, a robust text detection method based on edge-enhanced contrasting extremal regions (CER) is proposed, using the stroke width transform (SWT) and tensor voting. First, the edge-enhanced CER step extracts a set of covariant regions, which are stable connected components, from the input image. Next, the SWT is computed from the distance map and is used to eliminate non-text regions. Then, the candidate text regions are verified with tensor voting, which uses the center points obtained in the previous step to compute curve saliency values. Finally, connected component grouping is applied to cluster characters that lie close to each other. The proposed method is evaluated on the ICDAR2003 and ICDAR2013 text detection competition datasets, and the experimental results show high accuracy compared to previous methods.
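The distance-map step of the abstract can be sketched in a few lines. The following is a minimal pure-Python illustration, not the authors' implementation: it estimates the stroke width of a binary text mask from the maximum of a 4-connected distance transform, which is the quantity that SWT-based filters threshold to reject non-text regions. All function names are illustrative.

```python
from collections import deque

def distance_transform(mask):
    """Manhattan distance from each pixel to the nearest background (0)
    pixel, computed by a multi-source BFS seeded at all background pixels."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def estimate_stroke_width(mask):
    """The distance-map maximum inside a stroke is about half its width,
    so 2*max - 1 recovers an (odd) stroke width in pixels."""
    dist = distance_transform(mask)
    return 2 * max(d for row in dist for d in row) - 1

# A 3-pixel-thick horizontal bar: the estimate recovers width 3.
bar = [[0] * 10 for _ in range(7)]
for y in (2, 3, 4):
    for x in range(1, 9):
        bar[y][x] = 1
print(estimate_stroke_width(bar))  # 3
```

The original SWT (Epshtein et al.) instead shoots rays along image gradients and checks the stroke-width variance within each component; the distance-map variant described in the abstract approximates the same per-pixel width more cheaply.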
Pham, Duc Cuong; Na, Kyung-Hwan; Pham, Van Hung; Yoon, Eui-Sung. Korean Tribology Society, 2009, KSTLE International Journal Vol.10 No.1
This paper reports an investigation of the nanotribological properties of silicon nanochannels coated with a diamond-like carbon (DLC) film. The nanochannels were fabricated on Si (100) wafers using photolithography and reactive ion etching (RIE). The channeled surfaces (Si channels) were then further modified by coating them with a thin DLC film. The water contact angle of the modified and unmodified Si surfaces was measured with an anglemeter using the sessile-drop method. The nanotribological properties, namely the friction and adhesion forces, of the Si channels coated with DLC (DLC-coated Si channels) were investigated in comparison with those of flat Si, DLC-coated flat Si (flat DLC), and bare Si channels, using an atomic force microscope (AFM). The results showed that the DLC-coated Si channels greatly increased the hydrophobicity of the silicon surfaces. The DLC coating and the Si channels each individually reduced the adhesion and friction forces relative to flat Si. Furthermore, the DLC-coated Si channels exhibited the lowest values of these forces, owing to the combined effect of the reduced contact area from channeling and the low surface energy of the DLC. This combined modification could prove to be a promising method for tribological applications at small scales.
Animal Tracking in Infrared Video based on Adaptive GMOF and Kalman Filter
Van Khien Pham, Guee Sang Lee. The Korean Institute of Smart Media, 2016, Smart Media Journal Vol.5 No.1
The major problems of recent object tracking methods are related to the inefficient detection of moving objects due to occlusions, noisy backgrounds and inconsistent body motion. This paper presents a robust method for the detection and tracking of moving animals in infrared videos. The tracking system is based on adaptive optical flow generation, Gaussian mixture modeling and Kalman filtering. The adaptive Gaussian model of optical flow (GMOF) is used to extract the foreground, and noise is removed based on the object motion. The Kalman filter enables the prediction of the object position in the presence of partial occlusions and automatically adjusts the size of the detected animal along the image sequence. The presented method is evaluated in various environments with unstable backgrounds caused by wind and illumination changes. The results show that our approach is more robust to background noise and performs better than previous methods.
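The occlusion-handling idea can be illustrated with a minimal, self-contained constant-velocity Kalman filter for one coordinate. This is a sketch under simplifying assumptions, not the paper's tracker (the 2-D state and the bounding-box size adaptation are omitted; noise parameters `q` and `r` are illustrative). During occlusion only the predict step runs, so the position estimate keeps extrapolating with the learned velocity.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate.
    State: [position, velocity]; scalar position measurements."""

    def __init__(self, pos, dt=1.0, q=0.01, r=1.0):
        self.x = [pos, 0.0]                 # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.dt, self.q, self.r = dt, q, r  # step, process/measurement noise

    def predict(self):
        """Time update: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]."""
        dt, p = self.dt, self.P
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        self.P = [[p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + self.q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1],
                   p[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        """Measurement update with H = [1, 0] (only position is observed)."""
        s = self.P[0][0] + self.r                      # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s    # Kalman gain
        y = z - self.x[0]                              # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.P
        self.P = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        return self.x[0]

kf = Kalman1D(0.0)
for z in [1.0, 2.0, 3.0, 4.0, 5.0]:  # target moves one unit per frame
    kf.predict()
    kf.update(z)
# Occlusion: no measurement, predict-only; the estimate keeps advancing.
print(kf.predict() > 5.0)  # True
```

In a tracker like the one described above, the same recursion runs on the object center (and, in the paper, on the detected size as well); when GMOF yields no foreground match, the update step is simply skipped until the animal reappears.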