Human Action Recognition Using Ordinal Measure of Accumulated Motion
Wonjun Kim, Jaeho Lee, Minjin Kim, Daeyoung Oh, Changick Kim. Hindawi Publishing Corporation, 2010. EURASIP Journal on Advances in Signal Processing, Vol.2010, No.1.
This paper presents a method for recognizing human actions from a single query action video. We propose an action recognition scheme based on the ordinal measure of accumulated motion, which is robust to variations in appearance. To this end, we first define the accumulated motion image (AMI) using image differences. The AMI of the query action video is then resized to an N×N sub-image by intensity averaging, and a rank matrix is generated by ordering the sample values in the sub-image. By computing the distances from the rank matrix of the query action video to the rank matrices of all local windows in the target video, local windows close to the query action are detected as candidates. To find the best match among the candidates, their energy histograms, obtained by projecting AMI values in the horizontal and vertical directions, respectively, are compared with those of the query action video. The proposed method requires no preprocessing such as learning or segmentation. To demonstrate the efficiency and robustness of our approach, experiments are conducted on various datasets.
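To make the AMI and rank-matrix steps above concrete, here is a minimal sketch in Python with NumPy and OpenCV. The function names, the sub-image size n, and the L1 distance between rank matrices are illustrative assumptions, not the paper's exact choices; the energy-histogram verification stage is omitted.

```python
# Minimal sketch of the AMI / ordinal-measure steps described above.
# Assumptions (not from the paper): grayscale uint8 frames, n = 8,
# and an L1 distance between rank matrices.
import numpy as np
import cv2

def accumulated_motion_image(frames):
    """Accumulate absolute frame differences over a clip (AMI)."""
    ami = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        ami += cv2.absdiff(curr, prev).astype(np.float64)
    return ami

def rank_matrix(ami, n=8):
    """Shrink the AMI to an n x n sub-image by area (intensity) averaging,
    then replace each sample by its rank among the n*n values."""
    small = cv2.resize(ami.astype(np.float32), (n, n),
                       interpolation=cv2.INTER_AREA)
    flat = small.ravel()
    ranks = np.empty(flat.size, dtype=np.int64)
    ranks[np.argsort(flat)] = np.arange(flat.size)
    return ranks.reshape(n, n)

def ordinal_distance(rank_a, rank_b):
    """L1 distance between two rank matrices; smaller means more similar."""
    return int(np.abs(rank_a - rank_b).sum())
```

Candidate detection would then slide rank_matrix over the local windows of the target video's AMI and keep the windows whose ordinal_distance to the query falls below a threshold.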
A Texture-Aware Salient Edge Model for Image Retargeting
Wonjun Kim, Changick Kim. IEEE, 2011. IEEE Signal Processing Letters, Vol.18, No.11.
Image retargeting aims to adapt a given image to the size of an arbitrary display without severe visual distortion. To achieve this, it is essential to define a reliable image importance map (IIM), since it guides the subsequent retargeting procedure. In this letter, we introduce a novel IIM for effective image retargeting. Specifically, we define our IIM by exploiting the higher order statistics (HOS) of the diffusion space, and we call it the texture-aware salient edge (TASE) map. Based on the proposed TASE map, we obtain visually acceptable retargeting results even in cluttered backgrounds and in the presence of noise. The proposed method has been extensively tested, and experimental results show that it is effective for image retargeting compared to various state-of-the-art methods.
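The abstract does not spell out how the HOS response is computed, so the following is only a rough sketch in its spirit: diffuse the image, then use a local higher-order moment as a texture-suppressing edge response. The Gaussian stand-in for the diffusion, the window size, and the moment order are all assumptions for illustration.

```python
# Rough sketch in the spirit of an HOS-based importance map.
# Assumptions (not from the letter): Gaussian smoothing stands in for the
# diffusion, and the local 4th central moment stands in for the HOS.
import numpy as np
import cv2

def hos_edge_map(gray, diffusion_sigma=2.0, ksize=7):
    g = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), diffusion_sigma)
    mean = cv2.blur(g, (ksize, ksize))               # local mean
    hos = cv2.blur((g - mean) ** 4, (ksize, ksize))  # local 4th central moment
    return hos / (hos.max() + 1e-12)                 # normalize to [0, 1]
```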
Contrast Enhancement Using Combined 1-D and 2-D Histogram-Based Techniques
Daeyeong Kim, Changick Kim. IEEE Signal Processing Society, 2017. IEEE Signal Processing Letters, Vol.24, No.6.
This letter presents an adaptive contrast enhancement algorithm that considers both preservation of the shape of the one-dimensional (1-D) histogram and statistical information on the gray-level differences between neighboring pixels obtained from a 2-D histogram. The proposed system consists of two modules. One enhances the overall contrast by stretching the 1-D histogram while preserving its shape; the other improves the details of non-smooth areas that occur frequently in input images. These are formulated as a single constrained optimization problem. Compared with several state-of-the-art enhancement algorithms, the proposed algorithm shows highly competitive performance.
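As a toy illustration of the first module only, the sketch below stretches the occupied gray-level range linearly, which widens the histogram's support without changing its shape. The percentile clipping and parameter values are assumptions; the joint optimization with the 2-D difference histogram is not reproduced here.

```python
# Toy sketch of a shape-preserving 1-D histogram stretch.
# Assumption (not from the letter): a linear percentile stretch, which
# rescales the histogram's support while keeping its shape.
import numpy as np

def shape_preserving_stretch(img, low_pct=1.0, high_pct=99.0):
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)
```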
Scoreboard Extraction from Soccer Game Video for Multimedia Terminal Users
Wonjun Kim, Changick Kim. Korean Institute of Broadcast and Media Engineers, 2006. Proceedings of the Korean Institute of Broadcast and Media Engineers Conference, Vol.2006, No.-.
With the recent rapid advances in information and communication technology, watching sports games on small mobile devices has increased markedly. Nevertheless, the video delivered to mobile devices is produced for standard TV or HDTV, which makes it difficult for users of small mobile devices to follow the state of the game on screen. In particular, the scoreboard, which shows the elapsed time and the score, plays a crucial role in understanding the game, yet its contents are hard to read accurately on the small screen of a mobile device. This paper therefore proposes an efficient scoreboard extraction method with a short training period for soccer games, which many people enjoy watching. The proposed algorithm consists of three stages: extracting candidate scoreboard boundary coordinates using the brightness of the scoreboard and its surroundings, determining the optimal boundary coordinates through training, and extracting and magnifying the scoreboard region. The algorithm can also display the scoreboard in frames where it is absent by caching the scoreboard shown a few frames earlier. Experiments on various soccer game videos show that the proposed algorithm is a good solution for extracting the scoreboard and displaying it magnified so that users of small mobile devices can recognize it easily.
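A minimal sketch of the first stage under one plausible reading of the description: the scoreboard is a bright region whose pixels stay nearly constant across frames while the rest of the scene changes. The thresholds and the temporal-variance test are assumptions, not the paper's actual procedure.

```python
# Minimal sketch of brightness-based scoreboard localization.
# Assumptions (not from the paper): grayscale frames, and a scoreboard
# that is bright and temporally stable over a short clip.
import numpy as np

def scoreboard_candidates(frames, var_thresh=15.0, bright_thresh=120.0):
    stack = np.stack([f.astype(np.float64) for f in frames])  # (T, H, W)
    stable = stack.var(axis=0) < var_thresh      # temporally stable pixels
    bright = stack.mean(axis=0) > bright_thresh  # bright pixels
    return stable & bright

def bounding_box(mask):
    """Tightest box (x0, y0, x1, y1) around candidate pixels, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()
```

A box found this way could be cached and reused, as the abstract describes, to keep displaying the magnified scoreboard in frames where it is temporarily absent.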
A New Approach for Overlay Text Detection and Extraction From Complex Video Scene
Wonjun Kim, Changick Kim. IEEE, 2009. IEEE Transactions on Image Processing, Vol.18, No.2.
Overlay text provides important semantic clues in video content analysis tasks such as video information retrieval and summarization, since the content of the scene or the editor's intention can be well represented by the inserted text. Most previous approaches to extracting overlay text from videos are based on low-level features, such as edge, color, and texture information. However, existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to detect and extract overlay text from the video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is first generated. Candidate regions are then extracted by a reshaping method, and the overlay text regions are determined based on the occurrence of overlay text in each candidate. The detected overlay text regions are localized accurately using the projection of overlay text pixels in the transition map, and the text extraction is finally conducted. The proposed method is robust to differences in character size, position, contrast, and color, and it is also language independent. Overlay text regions are updated between frames to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.
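The sketch below illustrates one simplistic reading of the transition-map idea: flag a pixel as a transition when the intensity change to both of its horizontal neighbors is large, as happens at the transient colors around inserted text. The threshold and the neighbor test are assumptions; the paper's actual transition test is more elaborate.

```python
# Simplistic sketch of a transition map for overlay text detection.
# Assumption (not from the paper): a pixel is a transition pixel when the
# intensity change to both horizontal neighbors exceeds a threshold.
import numpy as np

def transition_map(gray, thresh=30):
    g = gray.astype(np.int32)
    left = np.abs(g[:, 1:-1] - g[:, :-2])   # change from the left neighbor
    right = np.abs(g[:, 1:-1] - g[:, 2:])   # change to the right neighbor
    tmap = np.zeros(g.shape, dtype=bool)
    tmap[:, 1:-1] = (left > thresh) & (right > thresh)
    return tmap
```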
Spatiotemporal Saliency Detection Using Textural Contrast and Its Applications
Wonjun Kim, Changick Kim. IEEE, 2014. IEEE Transactions on Circuits and Systems for Video Technology, Vol.24, No.4.
Saliency detection has been extensively studied due to its promising contributions to various computer vision applications. However, most existing methods are easily biased toward edges or corners, which are statistically significant but not necessarily relevant. Moreover, they often fail to find salient regions in complex scenes due to ambiguities between salient regions and highly textured backgrounds. In this paper, we present a novel unified framework for spatiotemporal saliency detection based on textural contrast. Our method is simple and robust, yet biologically plausible; thus, it can easily be extended to various applications, such as image retargeting, object segmentation, and video surveillance. On various datasets, we conduct comparative evaluations of 12 representative saliency detection models from the literature, and the results show that the proposed scheme outperforms previously developed methods in detecting salient regions of static and dynamic scenes.
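As a final illustration, here is a rough center-surround sketch of a textural contrast response, using the local standard deviation as the texture descriptor. The descriptor, the window sizes, and the contrast definition are assumptions and likely differ from the paper's formulation.

```python
# Rough sketch of a center-surround textural contrast map.
# Assumptions (not from the paper): local standard deviation as the texture
# descriptor, box windows of 7 and 31 pixels as center and surround.
import numpy as np
import cv2

def local_std(img, k):
    """Local standard deviation over a k x k box window."""
    mean = cv2.blur(img, (k, k))
    sq_mean = cv2.blur(img * img, (k, k))
    return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

def textural_contrast(gray, center=7, surround=31):
    g = gray.astype(np.float64)
    return np.abs(local_std(g, center) - local_std(g, surround))
```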