Automatic confidence adjustment of visual cues in model-based camera tracking
Park, Hanhoon, Oh, Jihyun, Seo, Byung-Kuk, Park, Jong-Il. John Wiley & Sons, Ltd. 2010. Computer Animation and Virtual Worlds (Print) Vol.21 No.2
<P>Model-based camera tracking is a technology that estimates a precise camera pose based on visual cues (e.g., feature points, edges) extracted from camera images given a 3D scene model and a rough camera pose. This paper proposes an automatic method for flexibly adjusting the confidence of visual cues in model-based camera tracking. The adjustment is based on the conditions of the target object/scene and the reliability of the initial or previous camera pose. Under uncontrolled or less-controlled working environments, the proposed object-adaptive tracking method works flexibly at 20 frames per second on an ultra mobile personal computer (UMPC) with an average tracking error within 3 pixels when the camera image resolution is 320 by 240 pixels. This capability enabled the proposed method to be successfully applied to a mobile augmented reality (AR) guidance system for a museum. Copyright © 2009 John Wiley & Sons, Ltd.</P> <B>Graphic Abstract</B> <P>Object-adaptive camera tracking. The red wire lines represent the 3D graphic model of the objects. The first-row images are the initial tracking results by ultrasonic and inertial sensors. The second-, third-, and fourth-row images are the results when the η values are 0, 0.3, and 1, respectively. The images marked with black boxes are the results by the object-adaptive tracking method, where the η values are automatically adjusted to the optimal value for each object. <img src='wiley_img_2010/15464261-2010-21-2-CAV321-gra001.gif' alt='wiley_img_2010/15464261-2010-21-2-CAV321-gra001'> </P>
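The abstract above describes weighting visual cues (feature points vs. edges) by a confidence parameter η that is adjusted automatically per object. A minimal sketch of the idea, assuming a simple convex blend of cue residuals and a hypothetical adjustment rule (the names `blended_cost` and `auto_eta` and the weighting form are illustrative, not the paper's exact formulation):

```python
import numpy as np

def blended_cost(point_residuals, edge_residuals, eta):
    """Confidence-weighted tracking cost: eta = 1 trusts feature points
    only, eta = 0 trusts edges only (illustrative form)."""
    e_points = float(np.sum(np.asarray(point_residuals) ** 2))
    e_edges = float(np.sum(np.asarray(edge_residuals) ** 2))
    return eta * e_points + (1.0 - eta) * e_edges

def auto_eta(texture_score, pose_reliability):
    """Hypothetical adjustment rule: richly textured objects and a reliable
    initial/previous pose favour feature points; otherwise lean on edges."""
    return float(np.clip(0.5 * texture_score + 0.5 * pose_reliability, 0.0, 1.0))
```

In this reading, the η = 0, 0.3, and 1 rows in the graphical abstract correspond to fixed blends, while the object-adaptive method picks η per object.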
Hanhoon Park, Byung-Kuk Seo, Jong-Il Park. IEEE. 2010. IEEE Transactions on Circuits and Systems for Video Technology Vol.20 No.5
<P>In projection-based augmented reality (AR), alleviating the visual distraction caused by embedded patterns has been a great challenge. As a representative approach, a method that alternately embeds patterns and their complements (hereafter, we call these pairs complementary patterns) into AR images has recently been proposed. This paper presents subjective evaluation results and their statistical analysis on the visual perceptibility of complementary patterns embedded in different ways in a standard hardware environment. We then explore the constraints under which embedded complementary patterns become less perceptible. As expected, a high projector refresh rate and low pattern strength were the general conditions for reduced perception of embedded complementary patterns. However, reducing the pattern size and projecting the complementary patterns with an interval also affected the results. Detailed constraints are given in the experimental results, and we also show which constraints dominate pattern perceptibility.</P>
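The complementary-pattern idea described above can be sketched in a few lines: a pattern is added to even frames and subtracted from odd frames, so that the temporal average seen by a viewer cancels while a camera synchronised to individual frames can still recover it. A minimal sketch under that assumption (frame values in [0, 1]; `embed_complementary` is an illustrative name, not from the paper):

```python
import numpy as np

def embed_complementary(frames, pattern, strength):
    """Embed `pattern` into even frames and its complement into odd frames.
    Averaged over a frame pair the pattern cancels for the viewer, provided
    no clipping occurs at the [0, 1] boundaries."""
    out = []
    for i, frame in enumerate(frames):
        sign = 1.0 if i % 2 == 0 else -1.0
        out.append(np.clip(frame + sign * strength * pattern, 0.0, 1.0))
    return out
```

The paper's finding that low pattern strength reduces perceptibility corresponds to a small `strength` here; a high refresh rate shortens the interval over which the pair must average out.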
Rapid Generation of the State Codebook in Side Match Vector Quantization
PARK, Hanhoon, PARK, Jong-Il. Institute of Electronics, Information and Communication Engineers. 2017. IEICE Transactions on Information and Systems Vol.E100-D No.8
<P>Side match vector quantization (SMVQ) was originally developed for image compression and is also useful for steganography. SMVQ requires creating its own state codebook for each block in both the encoding and decoding phases. Since the conventional method of state codebook generation is extremely time-consuming, this letter proposes a fast generation method. The proposed method is tens of times faster than the conventional one without loss of perceptual visual quality.</P>
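For context, the conventional state-codebook step that the letter speeds up ranks master-codebook codewords by side-match distortion against the already-coded neighbouring blocks. A minimal sketch, assuming square blocks and squared-error distortion (the function name and exact distortion form are illustrative):

```python
import numpy as np

def state_codebook(master, upper_block, left_block, size):
    """Build a per-block state codebook: score each master codeword by how
    well its top row and left column match the bottom row of the upper
    neighbour and the right column of the left neighbour, then keep the
    `size` best-matching codewords."""
    top = upper_block[-1, :]   # pixels bordering the current block from above
    left = left_block[:, -1]   # pixels bordering the current block from the left
    d = ((master[:, 0, :] - top) ** 2).sum(axis=1) + \
        ((master[:, :, 0] - left) ** 2).sum(axis=1)
    return master[np.argsort(d)[:size]]
```

Because this ranking runs once per block at both encoder and decoder, its cost dominates SMVQ, which is why a faster generation method matters.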
Content-Adaptive Pattern Concealment for Nonintrusive Projection-Based Augmented Reality
Hanhoon Park, Moon-Hyun Lee, Byung-Kuk Seo, Jong-Il Park, Yoonjong Jin. The HCI Society of Korea. 2007. Journal of the HCI Society of Korea Vol.2 No.1
<P>A nonintrusive projection-based AR technique using complementary patterns has recently been proposed and is being considered for use in virtual studios. However, the technique suffers from a tradeoff between the imperceptibility of the embedded complementary patterns and the compensation accuracy. To alleviate this tradeoff, we propose a content-adaptive pattern concealment method that embeds the complementary patterns with locally different channels and strengths according to the color and texture complexity of the AR image. The AR image, represented in the YIQ color space, is first divided into uniform regions; for each region, the pattern is embedded into the Q channel if the I component is dominant, and into the I channel if the Q component is dominant. The texture complexity of each region is then computed with a differential filter, and a strong pattern is embedded where the complexity is high and a weak pattern where it is low. Through a variety of experiments and user evaluations, we confirmed that the proposed method has two complementary advantages over the previous (non-adaptive) method. When the channels and pattern strengths are chosen so that the performance of acquiring the screen's geometric and color information matches that of the previous method, the imperceptibility of the pattern is greatly improved. Conversely, when they are chosen so that the pattern imperceptibility matches that of the previous method, the performance of acquiring the screen's geometric and color information is greatly improved.</P>
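The per-region embedding rule described in the abstract can be sketched as follows, assuming YIQ regions with channel order (Y, I, Q) and a simple gradient filter as the "differential filter"; the function name, the texture weighting, and the dominance test are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def conceal_in_region(region_yiq, pattern, base_strength):
    """Content-adaptive concealment for one region: embed the pattern into
    the less dominant chrominance channel (Q if I dominates, I otherwise),
    with strength scaled up in textured regions where stronger patterns
    remain imperceptible."""
    y = region_yiq[..., 0]
    i_mean = np.abs(region_yiq[..., 1]).mean()
    q_mean = np.abs(region_yiq[..., 2]).mean()
    # texture complexity: mean gradient magnitude of the luminance channel
    gy, gx = np.gradient(y)
    strength = base_strength * (1.0 + np.mean(np.abs(gx) + np.abs(gy)))
    out = region_yiq.copy()
    channel = 2 if i_mean > q_mean else 1   # I dominant -> embed in Q
    out[..., channel] += strength * pattern
    return out
```

Applying this independently per region is what lets the method use weaker patterns in smooth, chromatically sensitive areas while hiding stronger patterns in busy ones.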