RISS Academic Research Information Service

      • Proposal Flow: Semantic Correspondences from Object Proposals

        Ham, Bumsub; Cho, Minsu; Schmid, Cordelia; Ponce, Jean. IEEE, 2018. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.40 No.7

        Finding image correspondences remains a challenging problem in the presence of intra-class variations and large changes in scene layout. Semantic flow methods are designed to handle images depicting different instances of the same object or scene category. We introduce a novel approach to semantic flow, dubbed proposal flow, that establishes reliable correspondences using object proposals. Unlike prevailing semantic flow approaches that operate on pixels or regularly sampled local regions, proposal flow benefits from the characteristics of modern object proposals, which exhibit high repeatability at multiple scales, and can take advantage of both local and geometric consistency constraints among proposals. We also show that the corresponding sparse proposal flow can effectively be transformed into a conventional dense flow field. We introduce two new challenging datasets that can be used to evaluate both general semantic flow techniques and region-based approaches such as proposal flow. We use these benchmarks to compare different matching algorithms, object proposals, and region features within proposal flow, as well as to compare proposal flow to the state of the art in semantic flow. This comparison, along with experiments on standard datasets, demonstrates that proposal flow significantly outperforms existing semantic flow methods in various settings.
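        To make the matching idea concrete, here is a minimal toy sketch (my own construction for illustration, not the paper's algorithm): proposals are first matched by appearance, then each match is rescored by how well its displacement agrees with the displacements of similar proposals, a crude stand-in for the local and geometric consistency constraints mentioned above.

        ```python
        import numpy as np

        def toy_proposal_matching(fa, fb, ca, cb, k=10):
            """Toy proposal matching, NOT the paper's algorithm.

            fa, fb: (Na, d), (Nb, d) L2-normalized region features
            ca, cb: (Na, 2), (Nb, 2) proposal box centers
            """
            sim = fa @ fb.T                    # appearance similarity
            nn = sim.argmax(axis=1)            # appearance-only matches
            disp = cb[nn] - ca                 # displacement of each match
            scores = np.empty(len(fa))
            for i in range(len(fa)):
                # proposals most similar in appearance to proposal i
                peers = np.argsort(-(fa @ fa[i]))[:k]
                # do their displacements agree with match i's displacement?
                agree = np.exp(-np.linalg.norm(disp[peers] - disp[i], axis=1))
                scores[i] = sim[i, nn[i]] * agree.mean()  # appearance x geometry
            return nn, scores
        ```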

      • Revisiting the Relationship Between Adaptive Smoothing and Anisotropic Diffusion With Modified Filters

        Ham, Bumsub; Min, Dongbo; Sohn, Kwanghoon. IEEE, 2013. IEEE Transactions on Image Processing, Vol.22 No.3

        Anisotropic diffusion has long been considered closely related to adaptive smoothing and discretized in a similar manner. This paper revisits the fundamental relationship between the two approaches. It is shown that adaptive smoothing and anisotropic diffusion have different theoretical backgrounds, by exploring their characteristics from the perspective of normalization, evolution step size, and energy flow. Based on this principle, adaptive smoothing is derived from a second-order partial differential equation (PDE), not a conventional anisotropic diffusion, via the coupling of Fick's law with a generalized continuity equation in which a "source" or "sink" exists, which has not been extensively exploited. We show that the source or sink is closely related to the asymmetry of energy flow as well as to the normalization term of adaptive smoothing. This enables us to analyze behaviors of adaptive smoothing, such as the maximum principle and stability, from a PDE perspective. Ultimately, this relationship provides new insights into application-specific filtering algorithm design. By modeling the source or sink in the PDE, we introduce two specific diffusion filters, robust anisotropic diffusion and robust coherence-enhancing diffusion, as novel instantiations that are more robust against outliers than the conventional filters.
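        As a worked reconstruction of the relationship the abstract sketches (the notation here is mine, not the paper's): Fick's law relates flux to the gradient, and coupling it with a continuity equation carrying a source/sink term $s$ yields a second-order PDE that reduces to classical diffusion when $s$ vanishes:

        $$\mathbf{j} = -c\,\nabla u, \qquad \frac{\partial u}{\partial t} = -\nabla\cdot\mathbf{j} + s = \nabla\cdot(c\,\nabla u) + s,$$

        with conventional anisotropic diffusion recovered as the special case $s \equiv 0$; on the abstract's account, adaptive smoothing corresponds to a nonzero $s$ tied to its normalization term and to the asymmetry of energy flow.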

      • Robust Guided Image Filtering Using Nonconvex Potentials

        Ham, Bumsub; Cho, Minsu; Ponce, Jean. IEEE, 2018. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.40 No.1

        Filtering images using a guidance signal, a process called guided or joint image filtering, has been used in various tasks in computer vision and computational photography, particularly for noise reduction and joint upsampling. Such filtering uses an additional guidance signal as a structure prior, and transfers the structure of the guidance signal to an input image, restoring noisy or altered image structure. The main drawbacks of this data-dependent framework are that it does not consider structural differences between the guidance and input images, and that it is not robust to outliers. We propose a novel SD (for static/dynamic) filter to address these problems in a unified framework, jointly leveraging structural information from the guidance and input images. Guided image filtering is formulated as a nonconvex optimization problem, which is solved by a majorize-minimization algorithm. The proposed algorithm converges quickly while guaranteeing a local minimum. The SD filter effectively controls the underlying image structure at different scales, and can handle a variety of data types from different sensors. It is robust to outliers and other artifacts, such as gradient reversal and global intensity shift, and has good edge-preserving smoothing properties. We demonstrate the flexibility and effectiveness of the proposed SD filter in a variety of applications, including depth upsampling, scale-space filtering, texture removal, flash/non-flash denoising, and RGB/NIR denoising.
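        For readers unfamiliar with majorize-minimization (a standard fact, not specific to this paper): a hard energy $E$ is minimized by iteratively minimizing a surrogate $Q$ that lies above $E$ and touches it at the current iterate, which forces monotone descent:

        $$x^{t+1} = \arg\min_x Q(x \mid x^t), \qquad Q(x \mid x^t) \ge E(x), \quad Q(x^t \mid x^t) = E(x^t),$$

        $$\Rightarrow\; E(x^{t+1}) \le Q(x^{t+1} \mid x^t) \le Q(x^t \mid x^t) = E(x^t).$$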

      • Depth Superresolution by Transduction

        Ham, Bumsub; Min, Dongbo; Sohn, Kwanghoon. IEEE, 2015. IEEE Transactions on Image Processing, Vol.24 No.5

        This paper presents a depth superresolution (SR) method that uses both a low-resolution (LR) depth image and a high-resolution (HR) intensity image. We formulate depth SR as a graph-based transduction problem. In particular, the HR intensity image is represented as an undirected graph, in which pixels are characterized as vertices and their relations are encoded as an affinity function. When the vertices initially labeled with certain depth hypotheses (from the LR depth image) are regarded as input queries, all the vertices are scored according to their relevance to these queries by a classifying function. Each vertex is then labeled with the depth hypothesis that receives the highest relevance score. We design the classifying function by considering the local and global structures of the HR intensity image. This approach enables us to address the depth bleeding problem that typically appears in current depth SR methods. Furthermore, input queries are assigned in a probabilistic manner, making depth SR robust to noisy depth measurements. We also analyze existing depth SR methods in the context of transduction and discuss their theoretical relations. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art methods, both qualitatively and quantitatively.
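        One standard way to realize such graph transduction (a generic formulation; the paper's own classifying function may differ) is to propagate each depth hypothesis's seed labels through the normalized affinity matrix of the HR intensity graph:

        $$f_d = (I - \alpha S)^{-1} y_d, \qquad \hat{d}(i) = \arg\max_d f_d(i),$$

        where $S$ is the normalized affinity matrix, $y_d$ marks the pixels initially labeled with hypothesis $d$, and $\alpha \in (0, 1)$ balances smoothness along the graph against fidelity to the input queries.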

      • Probability-Based Rendering for View Synthesis

        Ham, Bumsub; Min, Dongbo; Oh, Changjae; Do, Minh N.; Sohn, Kwanghoon. IEEE, 2014. IEEE Transactions on Image Processing, Vol.23 No.2

        In this paper, a probability-based rendering (PBR) method is described for reconstructing an intermediate view with a steady-state matching probability (SSMP) density function. Conventionally, given multiple reference images, the intermediate view is synthesized via the depth image-based rendering technique, in which geometric information (e.g., depth) is explicitly leveraged, leading to serious rendering artifacts on the synthesized view even with small depth errors. We address this problem by formulating the rendering process as an image fusion in which the textures of all probable matching points are adaptively blended with the SSMP, representing the likelihood that points among the input reference images are matched. The PBR is hence more robust against depth estimation errors than existing view synthesis approaches. The matching probability (MP) in the steady state, i.e., the SSMP, is inferred for each pixel via a random walk with restart (RWR). The RWR always guarantees a visually consistent MP, as opposed to conventional optimization schemes (e.g., diffusion- or filtering-based approaches), whose accuracy heavily depends on the parameters used. Experimental results demonstrate the superiority of the PBR over existing view synthesis approaches, both qualitatively and quantitatively. In particular, the PBR is effective in suppressing flicker artifacts in virtual video rendering, even though no temporal aspect is considered. Moreover, it is shown that the depth map calculated from our RWR-based method (by simply choosing the most probable matching point) is also comparable to that of state-of-the-art local stereo matching methods.
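        The random walk with restart has a standard steady-state characterization (the generic form; the paper's affinities and restart vector are its own):

        $$\mathbf{r} = (1 - c)\,\tilde{W}\mathbf{r} + c\,\mathbf{e} \;\Longleftrightarrow\; \mathbf{r} = c\,\bigl(I - (1 - c)\tilde{W}\bigr)^{-1}\mathbf{e},$$

        where $\tilde{W}$ is the normalized affinity matrix, $\mathbf{e}$ the restart distribution, and $c$ the restart probability; the fixed point always exists because $(1-c)\tilde{W}$ has spectral radius below 1, which is one way to read the abstract's guarantee of convergence.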

      • Robust Scale-Space Filter Using Second-Order Partial Differential Equations

        Ham, Bumsub; Min, Dongbo; Sohn, Kwanghoon. IEEE, 2012. IEEE Transactions on Image Processing, Vol.21 No.9

        This paper describes a robust scale-space filter that adaptively changes the amount of flux according to the local topology of the neighborhood. In a manner similar to modeling heat or temperature flow in physics, the robust scale-space filter is derived by coupling Fick's law with a generalized continuity equation in which the source or sink is modeled via a specific heat capacity. The filter is distinctive in two respects. First, the evolution step size is adaptively scaled according to the local structure, enabling the proposed filter to be numerically stable. Second, the influence of outliers is reduced by adaptively compensating for the incoming flux. We show that classical diffusion methods represent special cases of the proposed filter. By analyzing the stability condition of the proposed filter, we also verify that its evolution step size in an explicit scheme is larger than that of the diffusion methods. The proposed filter also satisfies the maximum principle in the same manner as diffusion. Our experimental results show that the proposed filter is less sensitive to the evolution step size, as well as more robust to various outliers, such as Gaussian noise, impulsive noise, or a combination of the two.
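        For reference, a minimal sketch of the classical explicit diffusion scheme the abstract compares against (standard Perona-Malik, not the proposed filter); its step size must stay at or below 0.25 for stability on a 2D 4-neighbor grid, which is the bound the proposed filter is said to relax:

        ```python
        import numpy as np

        def perona_malik_step(u, kappa=0.1, dt=0.25):
            """One explicit Perona-Malik diffusion step on a grayscale image.
            Classical baseline only; dt <= 0.25 is required for 2D stability."""
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
            n = np.roll(u, -1, axis=0) - u            # differences to 4 neighbours
            s = np.roll(u,  1, axis=0) - u
            e = np.roll(u, -1, axis=1) - u
            w = np.roll(u,  1, axis=1) - u
            return u + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
        ```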

      • Convergence Analysis of the Bilateral Filter Using Fixed-Point Iteration

        Ham, Bumsub (함범섭); Sohn, Kwanghoon (손광훈). Korean Institute of Broadcast and Media Engineers, 2011. Proceedings of the Korean Society of Broadcast Engineers Conference, Vol.2011 No.7

        The bilateral filter is an edge-preserving smoothing filter used in a variety of applications such as denoising, reflection removal, and stereo matching. In addition to the spatial-domain kernel used in the conventional Gaussian filter, it employs a range-domain kernel that assigns higher weights to pixels of similar intensity, thereby smoothing the image while preserving edges. Moreover, unlike the anisotropic diffusion filter, the bilateral filter always guarantees convergence. In this paper, we mathematically prove the convergence of the bilateral filter by applying fixed-point iteration theory.
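        A minimal sketch of the construction the abstract describes (my own illustration; the parameters and stopping rule are assumptions): apply the bilateral filter as a map on images and iterate it to a fixed point.

        ```python
        import numpy as np

        def bilateral_step(img, sigma_s=2.0, sigma_r=0.1, radius=5):
            """One bilateral-filter application (grayscale image in [0, 1])."""
            h, w = img.shape
            out = np.zeros_like(img)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
            pad = np.pad(img, radius, mode='edge')
            for i in range(h):
                for j in range(w):
                    patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # range kernel: higher weight for pixels of similar intensity
                    rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                    wgt = spatial * rng
                    out[i, j] = (wgt * patch).sum() / wgt.sum()
            return out

        def iterate_to_convergence(img, tol=1e-4, max_iter=100):
            """Fixed-point iteration: filter until the image stops changing."""
            for _ in range(max_iter):
                nxt = bilateral_step(img)
                if np.abs(nxt - img).max() < tol:
                    break
                img = nxt
            return img
        ```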

      • OCEAN: Object-centric arranging network for self-supervised visual representations learning

        Oh, Changjae; Ham, Bumsub; Kim, Hansung; Hilton, Adrian; Sohn, Kwanghoon. Elsevier, 2019. Expert Systems with Applications, Vol.125

        Learning visual representations plays an important role in computer vision and machine learning applications. It enables a model to understand and perform high-level tasks intelligently. A common approach to learning visual representations is the supervised one, which requires a huge amount of human annotations to train the model. This paper presents a self-supervised approach that learns visual representations from input images without human annotations. We learn the correct arrangement of object proposals to represent an image using a convolutional neural network (CNN) without any manual annotations. We hypothesize that a network trained to solve this problem requires the embedding of semantic visual representations. Unlike existing approaches that use uniformly sampled patches, we relate object proposals that contain prominent objects and object parts. More specifically, we discover a representation that considers the overlap, inclusion, and exclusion relationships of proposals as well as their relative positions. This allows focusing on potential objects and parts rather than on clutter. We demonstrate that our model outperforms existing self-supervised learning methods and can be used as a generic feature extractor, by applying it to object detection, classification, action recognition, image retrieval, and semantic matching tasks.

        Highlights:
        • A self-supervised learning method that does not require human annotations for training a CNN.
        • Learning the correct arrangement of object proposals to represent an image with a CNN.
        • Demonstrating the advantage of our model by applying it to the PASCAL VOC datasets.
        • Application to other vision tasks, including image retrieval and semantic matching.
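        As a concrete illustration of the pairwise cues the abstract names (overlap, inclusion, exclusion), here is a small helper that classifies the relation between two proposal boxes; the exact-containment test and label names are my own simplification, assuming integer box coordinates.

        ```python
        def box_relation(a, b):
            """Classify the relation between boxes (x1, y1, x2, y2) as
            'inclusion', 'overlap', or 'exclusion' (toy labels, not the paper's)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # intersection area
            if inter == 0:
                return 'exclusion'
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            if inter == min(area_a, area_b):  # one box fully inside the other
                return 'inclusion'
            return 'overlap'
        ```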

      • Mahalanobis Distance Cross-Correlation for Illumination-Invariant Stereo Matching

        Kim, Seungryong; Ham, Bumsub; Kim, Bongjoe; Sohn, Kwanghoon. IEEE, 2014. IEEE Transactions on Circuits and Systems for Video Technology

        A robust similarity measure called the Mahalanobis distance cross-correlation (MDCC) is proposed for illumination-invariant stereo matching, which uses the local color distribution within support windows. It is shown that the Mahalanobis distance between a color and the average color is preserved under affine transformation. The MDCC converts pixels within each support window into the Mahalanobis distance transform (MDT) space. The similarity between MDT pairs is then computed using cross-correlation with an asymmetric weight function based on the Mahalanobis distance. The MDCC considers correlation across color channels, thus providing robustness to affine illumination variation. Experimental results show that the MDCC outperforms state-of-the-art similarity measures in stereo matching for image pairs taken under different illumination conditions.
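        A minimal sketch of the Mahalanobis-distance-transform idea (my reconstruction of the abstract's description; the regularization term is an assumption added for numerical stability):

        ```python
        import numpy as np

        def mahalanobis_distances(window):
            """Map each pixel in an (N, 3) color window to its Mahalanobis
            distance from the window's mean color (a sketch of the MDT idea)."""
            mu = window.mean(axis=0)
            cov = np.cov(window.T) + 1e-6 * np.eye(3)  # regularize for stability
            inv = np.linalg.inv(cov)
            diff = window - mu
            return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))
        ```

        Under an affine color change $c' = Ac + b$ (with $A$ invertible), the mean maps to $A\mu + b$ and the covariance to $A \Sigma A^\top$, so the distance above is unchanged, which is the invariance the abstract exploits.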

      • Space-Time Hole Filling With Random Walks in View Extrapolation for 3D Video

        Choi, Sunghwan; Ham, Bumsub; Sohn, Kwanghoon. IEEE, 2013. IEEE Transactions on Image Processing, Vol.22 No.6

        In this paper, a space-time hole filling approach is presented to deal with disocclusions when a view is synthesized for 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and there are no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the subsequent hole filling process, the background of a scene is automatically segmented by random walker segmentation in conjunction with the hole formation process. Then, the patch-candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch, ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.
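        The random-walker machinery behind both the segmentation and the labeling step has a standard closed form (the generic formulation, not necessarily the paper's exact construction): with the graph Laplacian partitioned over seeded (M) and unseeded (U) nodes,

        $$L = \begin{pmatrix} L_M & B \\ B^\top & L_U \end{pmatrix}, \qquad L_U X_U = -B^\top X_M,$$

        where row $i$ of $X_U$ gives the probabilities that a random walker started at unseeded node $i$ first reaches a seed of each label; each node then takes the label with the highest probability.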
