RISS Academic Research Information Service

      • Fast Global Image Smoothing Based on Weighted Least Squares

        Dongbo Min, Sunghwan Choi, Jiangbo Lu, Bumsub Ham, Kwanghoon Sohn, Minh N. Do. IEEE, 2014. IEEE Transactions on Image Processing, Vol.23 No.12

        This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for 2D images), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a comparable runtime to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of a quality comparable to the state-of-the-art optimization-based techniques, but runs ~10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ-norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
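The 1D building block this abstract describes — a three-point (tridiagonal) weighted-least-squares system solved in linear time — can be sketched as follows. This is a minimal illustration assuming a simple 1D WLS energy; the function name and weight convention are illustrative, not taken from the paper:

```python
import numpy as np

def smooth_1d_wls(f, w, lam):
    """Solve a 1D weighted-least-squares smoothing system exactly.

    Minimizes sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_i - u_{i+1})^2,
    whose normal equations form a three-point (tridiagonal) linear
    system, solved here by the linear-time Thomas algorithm.
    f   : input 1D signal (length n)
    w   : edge weights between neighbors (length n-1); small values
          across edges preserve them
    lam : smoothing strength
    """
    n = len(f)
    # Tridiagonal coefficients: a (sub-), b (main), c (super-diagonal)
    a = np.zeros(n)
    b = np.ones(n)
    c = np.zeros(n)
    c[:-1] = -lam * w
    a[1:] = -lam * w
    b[:-1] += lam * w
    b[1:] += lam * w
    # Forward sweep
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = f[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (f[i] - a[i] * dp[i - 1]) / m
    # Back substitution
    u = np.zeros(n)
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```

The paper's separable scheme would apply such a solver along each image dimension in turn; this sketch only shows the single-row case.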

      • KCI-indexed

        Virtual View Synthesis System for 2D/3D Free-Viewpoint Video Playback

        Dongbo Min, Kwanghoon Sohn. The Institute of Electronics Engineers of Korea, 2008. 電子工學會論文誌-SP (Signal Processing), Vol.45 No.4

        In this paper, we propose a new approach for efficient multiview stereo matching and virtual view generation, which are key technologies for 3DTV. We propose a semi N-view & N-depth framework to estimate disparity maps efficiently and correctly. This framework reduces the redundancy of disparity estimation by using the information of neighboring views. The proposed method provides the user with 2D/3D freeview video, and the user can select the 2D/3D mode of the freeview video. Experimental results show that the proposed method yields accurate disparity maps and that the synthesized novel view is satisfactory enough to provide the user with seamless freeview video.

      • DASC: Robust Dense Descriptor for Multi-Modal and Multi-Spectral Correspondence Estimation

        Seungryong Kim, Dongbo Min, Bumsub Ham, Minh N. Do, Kwanghoon Sohn. IEEE, 2017. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.39 No.9

        Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding a reliable correspondence between multi-modal or multi-spectral images still remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate dense multi-modal and multi-spectral correspondences. Based on the observation that the self-similarity existing within images is robust to imaging modality variations, we define the descriptor with a series of adaptive self-correlation similarity measures between patches sampled by randomized receptive field pooling, in which the sampling pattern is obtained using discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, in order to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark with varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of dense multi-modal and multi-spectral correspondence.
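The self-similarity idea underlying DASC can be illustrated with a toy descriptor: normalized correlations between a center patch and a few offset patches. Because normalized correlation is invariant to affine intensity changes, the descriptor is stable across a simple modality change. This sketch uses fixed offsets rather than the learned receptive-field pooling of the paper, and all names are hypothetical:

```python
import numpy as np

def patch(img, y, x, r):
    """Flattened (2r+1)x(2r+1) patch centered at (y, x)."""
    return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

def selfsim_descriptor(img, y, x, offsets, r=1):
    """Toy self-similarity descriptor at (y, x).

    Each entry is the normalized correlation between the center patch
    and a patch displaced by one of `offsets` -- a crude stand-in for
    the adaptive self-correlation measures used in DASC.
    """
    p0 = patch(img, y, x, r)
    p0 = (p0 - p0.mean()) / (p0.std() + 1e-8)
    desc = []
    for dy, dx in offsets:
        q = patch(img, y + dy, x + dx, r)
        q = (q - q.mean()) / (q.std() + 1e-8)
        desc.append(float(p0 @ q) / len(p0))
    return np.array(desc)
```

Applying the descriptor to an image and to an affinely remapped copy (e.g. `2*img + 5`, mimicking a modality shift) yields near-identical vectors, which is the robustness property the abstract appeals to.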

      • Revisiting the Relationship Between Adaptive Smoothing and Anisotropic Diffusion With Modified Filters

        Bumsub Ham, Dongbo Min, Kwanghoon Sohn. IEEE, 2013. IEEE Transactions on Image Processing, Vol.22 No.3

        Anisotropic diffusion has been known to be closely related to adaptive smoothing and is discretized in a similar manner. This paper revisits the fundamental relationship between the two approaches. It is shown that adaptive smoothing and anisotropic diffusion have different theoretical backgrounds by exploring their characteristics from the perspective of normalization, evolution step size, and energy flow. Based on this principle, adaptive smoothing is derived from a second-order partial differential equation (PDE), not a conventional anisotropic diffusion, via the coupling of Fick's law with a generalized continuity equation in which a “source” or “sink” exists, which has not been extensively exploited. We show that the source or sink is closely related to the asymmetry of energy flow as well as to the normalization term of adaptive smoothing. This enables us to analyze the behavior of adaptive smoothing, such as the maximum principle and stability, from the perspective of a PDE. Ultimately, this relationship provides new insights into application-specific filtering algorithm design. By modeling the source or sink in the PDE, we introduce two specific diffusion filters, robust anisotropic diffusion and robust coherence-enhancing diffusion, as novel instantiations that are more robust against outliers than the conventional filters.
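For reference, the conventional anisotropic diffusion that this abstract contrasts with adaptive smoothing can be sketched as one explicit Perona-Malik step on a 1D signal. This is a standard textbook scheme, not the modified filters proposed in the paper; the conductivity function and parameters are illustrative:

```python
import numpy as np

def perona_malik_step(sig, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik anisotropic diffusion step (1D).

    The flux g(|grad|)*grad uses the edge-stopping conductivity
    g(s) = 1 / (1 + (s/kappa)^2), which shrinks toward zero across
    large gradients so that edges diffuse slowly while flat regions
    smooth quickly. The update is the discrete divergence of the flux,
    so the total signal energy (sum) is conserved.
    """
    grad = np.diff(sig)                    # forward differences
    g = 1.0 / (1.0 + (grad / kappa) ** 2)  # conductivity per edge
    flux = g * grad
    out = sig.copy()
    out[:-1] += dt * flux                  # discrete divergence:
    out[1:] -= dt * flux                   # u_i += dt*(flux_i - flux_{i-1})
    return out
```

The conservation of the sum is exactly the “no source or sink” property; the paper's point is that adaptive smoothing corresponds to a PDE where this conservation is broken.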

      • Probability-Based Rendering for View Synthesis

        Bumsub Ham, Dongbo Min, Changjae Oh, Minh N. Do, Kwanghoon Sohn. IEEE, 2014. IEEE Transactions on Image Processing, Vol.23 No.2

        In this paper, a probability-based rendering (PBR) method is described for reconstructing an intermediate view with a steady-state matching probability (SSMP) density function. Conventionally, given multiple reference images, the intermediate view is synthesized via the depth image-based rendering technique, in which geometric information (e.g., depth) is explicitly leveraged, leading to serious rendering artifacts on the synthesized view even with small depth errors. We address this problem by formulating the rendering process as an image fusion in which the textures of all probable matching points are adaptively blended with the SSMP representing the likelihood that points among the input reference images are matched. The PBR hence becomes more robust against depth estimation errors than existing view synthesis approaches. The matching probability in the steady state, the SSMP, is inferred for each pixel via the random walk with restart (RWR). The RWR always guarantees a visually consistent matching probability, as opposed to conventional optimization schemes (e.g., diffusion- or filtering-based approaches), whose accuracy heavily depends on the parameters used. Experimental results demonstrate the superiority of the PBR over existing view synthesis approaches both qualitatively and quantitatively. In particular, the PBR is effective in suppressing flicker artifacts in virtual video rendering even though no temporal aspect is considered. Moreover, it is shown that the depth map itself, calculated from our RWR-based method (by simply choosing the most probable matching point), is also comparable with that of state-of-the-art local stereo matching methods.
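The random walk with restart used to infer the steady-state probabilities has a simple fixed-point form, p = (1 - c)·P·p + c·r, iterated until convergence. A generic sketch of that iteration follows; the affinity matrix and restart vector here are toy inputs, not the paper's pixel-wise formulation:

```python
import numpy as np

def rwr_steady_state(W, restart, c=0.15, iters=200):
    """Steady-state distribution of a random walk with restart.

    W       : symmetric nonnegative affinity matrix between nodes
    restart : restart distribution (sums to 1)
    c       : restart probability
    Iterates p <- (1 - c) * P @ p + c * restart, where P is the
    column-stochastic transition matrix derived from W. Because P
    preserves total probability mass, p remains a distribution.
    """
    P = W / W.sum(axis=0, keepdims=True)  # column-normalize affinities
    p = restart.copy()
    for _ in range(iters):
        p = (1.0 - c) * P @ p + c * restart
    return p
```

With c in (0, 1) the iteration is a contraction, so the result is independent of initialization, which is the "steady-state" guarantee the abstract relies on.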

      • A Stereoscopic Video Generation Method Using Stereoscopic Display Characterization and Motion Analysis

        Donghyun Kim, Dongbo Min, Kwanghoon Sohn. IEEE, 2008. IEEE Transactions on Broadcasting, Vol.54 No.2

        Stereoscopic video generation methods can produce stereoscopic content from conventional video filmed with monoscopic cameras. In this paper, we propose a stereoscopic video generation method using motion analysis, which converts motion into disparity values and considers multi-user conditions and the characteristics of the display device. The field of view and the maximum and minimum disparity values are calculated in the stereoscopic display characterization stage and are then applied to various types of 3D displays. After motion estimation, we use three cues to decide the scale factor of the motion-to-disparity conversion: the magnitude of motion, camera movements, and scene complexity. A subjective evaluation showed that the proposed method generates more satisfactory video sequences.
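The core conversion described above, mapping motion magnitude to a disparity clamped to the display's characterized range, can be sketched in a few lines. The scale factor would come from the three cues (motion magnitude, camera movement, scene complexity); the function name and parameters here are illustrative:

```python
def motion_to_disparity(motion_mag, d_min, d_max, scale):
    """Map a motion magnitude to a display-safe disparity value.

    motion_mag : estimated motion magnitude for a pixel or block
    d_min/d_max: comfortable disparity range from the display
                 characterization stage
    scale      : motion-to-disparity scale factor chosen per scene
    The converted disparity is clamped so it never exceeds what the
    characterized display can comfortably show.
    """
    d = scale * motion_mag
    return max(d_min, min(d_max, d))
```

Clamping per display is what lets the same converted content target different 3D display types, as the abstract notes.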

      • SCI / SCIE / SCOPUS

        Reliability-Based Multiview Depth Enhancement Considering Interview Coherence

        Jinwook Choi, Dongbo Min, Kwanghoon Sohn. IEEE, 2014. IEEE Transactions on Circuits and Systems for Video Technology

        The color-plus-depth video format has become increasingly popular in 3-D video applications, such as auto-stereoscopic 3-D TV and freeview TV. The performance of these applications is heavily dependent on the quality of the depth maps, since intermediate views are synthesized using the corresponding depth maps. This paper presents a novel framework for obtaining high-quality multiview color-plus-depth video using a hybrid sensor, which consists of multiple color cameras and depth sensors. Given multiple high-resolution color images and low-quality depth maps obtained from the color cameras and depth sensors, we improve the quality of the depth map corresponding to each color view by increasing its spatial resolution and enforcing interview coherence. Specifically, a new up-sampling method considering the interview coherence is proposed to enhance multiview depth maps. This approach can improve the performance of existing up-sampling algorithms, such as joint bilateral up-sampling and weighted mode filtering, which were developed to enhance a single-view depth map only. In addition, an adaptive approach for fusing multiple input low-resolution depth maps is proposed, based on a reliability that considers camera geometry and depth validity. The proposed framework can be extended into the temporal domain for temporally consistent depth maps. Experimental results demonstrate that the proposed method provides better multiview depth quality than the conventional single-view-based methods. We also show that it provides comparable results, yet much more efficiently, to other fusion approaches that employ both depth sensors and stereo matching algorithms together. Moreover, it is shown that the proposed method significantly reduces the bit rate required to compress the multiview color-plus-depth video.
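Joint bilateral up-sampling, one of the single-view baselines the framework generalizes, can be sketched in 1D: each high-resolution sample averages low-resolution depth values, weighted by spatial proximity and by similarity in the high-resolution color guide, so depth edges snap to guide edges. A toy sketch with illustrative parameters:

```python
import numpy as np

def joint_bilateral_upsample_1d(depth_lo, guide_hi, factor,
                                sigma_s=1.0, sigma_r=0.1):
    """Joint bilateral up-sampling of a low-res depth signal (1D toy).

    depth_lo : low-resolution depth samples (length n)
    guide_hi : high-resolution guide signal (length ~ n * factor)
    factor   : up-sampling factor (low-res sample j sits at j*factor)
    Each output sample is a weighted average of all low-res depth
    values; weights combine spatial distance and guide similarity.
    """
    n_hi = len(guide_hi)
    out = np.empty(n_hi)
    for i in range(n_hi):
        acc, norm = 0.0, 0.0
        for j in range(len(depth_lo)):
            x = j * factor  # low-res position in high-res coordinates
            ws = np.exp(-((i - x) ** 2) / (2 * sigma_s ** 2 * factor ** 2))
            wr = np.exp(-((guide_hi[i] - guide_hi[min(x, n_hi - 1)]) ** 2)
                        / (2 * sigma_r ** 2))
            w = ws * wr
            acc += w * depth_lo[j]
            norm += w
        out[i] = acc / norm
    return out
```

The paper's contribution is to extend this kind of single-view up-sampling with interview coherence across the multiview depth maps; the sketch only shows the single-view baseline behavior.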
