RISS Academic Research Information Service

      • Real-time interactive modeling and scalable multiple object tracking for AR

        Kim, K., Lepetit, V., Woo, W. (2012). Computers & Graphics, Vol. 36, No. 8. Pergamon Press; Elsevier Science Ltd.

        We propose a real-time solution for modeling and tracking multiple 3D objects in unknown environments for Augmented Reality. The proposed solution consists of both scalable tracking and interactive modeling. Our contribution is twofold: First, we show how to scale with the number of objects using keyframes. This is done by combining recent techniques for image retrieval and online Structure from Motion, which can be run in parallel. As a result, tracking 50 objects in 3D can be done within 6-35ms per frame, even under difficult conditions for tracking. Second, we propose a method to let the user add new objects very quickly. The user simply has to select in an image a 2D region lying on the object. A 3D primitive is then fitted to the features within this region, and adjusted to create the object 3D model. We demonstrate the modeling of polygonal and circular-based objects. In practice, this procedure takes less than a minute.
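
        The keyframe-retrieval idea in this abstract can be illustrated with a small sketch: rather than matching the live frame against every object model, a descriptor index built from each object's keyframes is queried first, and only the best-scoring objects are handed to the more expensive 3D pose trackers. This is only a rough illustration under assumed inputs; the keyframe images and the downstream trackers are hypothetical placeholders, not the authors' implementation.

        import cv2
        import numpy as np

        orb = cv2.ORB_create(nfeatures=500)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def build_keyframe_index(keyframes_per_object):
            """keyframes_per_object: {object_id: [grayscale keyframe images]} (assumed input)."""
            index = {}
            for obj_id, keyframes in keyframes_per_object.items():
                descriptors = []
                for kf in keyframes:
                    _, desc = orb.detectAndCompute(kf, None)
                    if desc is not None:
                        descriptors.append(desc)
                index[obj_id] = np.vstack(descriptors) if descriptors else None
            return index

        def retrieve_candidates(frame_gray, index, top_k=3):
            """Score every object by descriptor matches and keep only the top_k candidates."""
            _, frame_desc = orb.detectAndCompute(frame_gray, None)
            if frame_desc is None:
                return []
            scores = []
            for obj_id, descriptors in index.items():
                if descriptors is None:
                    continue
                matches = matcher.match(frame_desc, descriptors)
                scores.append((len(matches), obj_id))
            scores.sort(key=lambda s: s[0], reverse=True)
            return [obj_id for _, obj_id in scores[:top_k]]

        # Only the retrieved candidates are then passed to the full 3D trackers,
        # which keeps the per-frame cost roughly constant as the object database grows.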

      • Extended Keyframe Detection with Stable Tracking for Multiple 3D Object Tracking

        Youngmin Park, Lepetit, V., Woontack Woo (2011). IEEE Transactions on Visualization and Computer Graphics, Vol. 17, No. 11. IEEE.

        We present a method that is able to track several 3D objects simultaneously, robustly, and accurately in real time. While many applications need to consider more than one object in practice, the existing methods for single object tracking do not scale well with the number of objects, and a proper way to deal with several objects is required. Our method combines object detection and tracking: frame-to-frame tracking is less computationally demanding but is prone to fail, while detection is more robust but slower. We show how to combine them to take advantage of both approaches, and demonstrate our method on several real sequences.
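
        A minimal sketch of the detection/tracking combination described above, assuming OpenCV and a hypothetical detect_object() stand-in for the paper's keyframe-based detector: cheap frame-to-frame tracking (here pyramidal Lucas-Kanade optical flow) runs every frame, and the slower detection step only re-initializes the track when too few points survive.

        import cv2

        LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                         criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

        def track_or_detect(prev_gray, curr_gray, prev_pts, detect_object, min_points=20):
            """Track points frame-to-frame, falling back to detection when tracking degrades.

            prev_pts: float32 array of shape (N, 1, 2), or None on the first frame.
            """
            if prev_pts is not None and len(prev_pts) >= min_points:
                next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                    prev_gray, curr_gray, prev_pts, None, **LK_PARAMS)
                good = next_pts[status.ravel() == 1]
                if len(good) >= min_points:
                    return good.reshape(-1, 1, 2)      # fast path: tracking succeeded
            return detect_object(curr_gray)            # slow but robust re-detection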

      • Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality

        Youngmin Park, Lepetit, V., Woontack Woo (2012). IEEE Transactions on Visualization and Computer Graphics, Vol. 18, No. 9. IEEE.

        The contribution of this paper is two-fold. First, we show how to extend the ESM algorithm to handle motion blur in 3D object tracking. ESM is a powerful algorithm for template matching-based tracking, but it can fail under motion blur. We introduce an image formation model that explicitly considers the possibility of blur, and show that it results in a generalization of the original ESM algorithm. This allows the tracker to converge faster, more accurately, and more robustly, even under large amounts of blur. Our second contribution is an efficient method for rendering the virtual objects under the estimated motion blur. It renders two images of the object under 3D perspective and warps them to create many intermediate images. By fusing these images we obtain a final image of the virtual objects blurred consistently with the captured image. Because warping is much faster than 3D rendering, we can create realistically blurred images at a very low computational cost.
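
        The rendering step described above (warping a rendered image with interpolated transforms and fusing the copies) can be sketched roughly as follows. Linearly interpolating a single homography between the start and end of the exposure is an assumed simplification for illustration, not the paper's exact scheme.

        import cv2
        import numpy as np

        def render_motion_blur(rendered, H_motion, n_steps=8):
            """Average n_steps copies of `rendered`, warped between identity and H_motion."""
            h, w = rendered.shape[:2]
            accumulator = np.zeros(rendered.shape, dtype=np.float64)
            identity = np.eye(3)
            for i in range(n_steps):
                t = i / (n_steps - 1) if n_steps > 1 else 0.0
                H_t = (1.0 - t) * identity + t * H_motion   # interpolated warp for this sub-exposure
                accumulator += cv2.warpPerspective(rendered, H_t, (w, h)).astype(np.float64)
            return (accumulator / n_steps).astype(rendered.dtype)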

      • Video-Based In Situ Tagging on Mobile Phones

        Wonwoo Lee, Youngmin Park, Lepetit, V., Woontack Woo (2011). IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 10. IEEE.

        We propose a novel way to augment a real-world scene with minimal user intervention on a mobile phone; the user only has to point the phone camera at the desired location of the augmentation. Our method is valid for horizontal or vertical surfaces only, but this is not a restriction in practice in man-made environments, and it avoids going through any reconstruction of the 3D scene, which is still a delicate process on a resource-limited system like a mobile phone. Our approach is inspired by recent work on perspective patch recognition, but we adapt it for better performance on mobile phones. We reduce user interaction with real scenes by exploiting the phone's accelerometers to relax the need for fronto-parallel views. As a result, we can learn a planar target in situ from arbitrary viewpoints and augment it with virtual objects in real time on a mobile phone.
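
        The accelerometer trick mentioned in this abstract can be sketched as follows, under the assumption that the target lies on a horizontal surface whose normal coincides with the measured gravity direction: a pure camera rotation aligning the optical axis with that normal can be applied as the homography H = K R^T K^-1 to synthesize an approximately fronto-parallel view of the patch. The intrinsic matrix K and the gravity reading in camera coordinates are assumed inputs; this is an illustrative approximation, not the authors' implementation.

        import cv2
        import numpy as np

        def rectifying_homography(K, gravity_cam):
            """Homography warping the image as if the camera looked straight along the plane normal."""
            n = gravity_cam / np.linalg.norm(gravity_cam)   # surface normal (assumed = gravity direction)
            z = np.array([0.0, 0.0, 1.0])                   # current optical axis
            axis = np.cross(z, n)
            s = np.linalg.norm(axis)
            c = float(np.dot(z, n))
            if s < 1e-8:                                    # already fronto-parallel
                R = np.eye(3)
            else:
                rvec = (axis / s * np.arctan2(s, c)).reshape(3, 1)
                R, _ = cv2.Rodrigues(rvec)                  # rotation taking the optical axis onto n
            return K @ R.T @ np.linalg.inv(K)               # rotation-only view change as a homography

        # Usage (hypothetical values):
        # warped = cv2.warpPerspective(frame, rectifying_homography(K, g), (w, h))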
