Virtual View Synthesis System for 2D/3D Free-Viewpoint Video Playback
Dongbo Min (민동보), Kwanghoon Sohn (손광훈). The Institute of Electronics Engineers of Korea, 2008. Journal of the IEEK - SP (Signal Processing), Vol.45 No.4
In this paper, we propose a new approach to multiview stereo matching and virtual view synthesis, which are key technologies for 3DTV. To estimate disparity maps from multiview images efficiently and accurately, we propose a semi N-view & N-depth framework. This framework reduces redundant computation during disparity estimation by reusing information from neighboring views. The proposed method provides the user with 2D and 3D free-viewpoint video, and the user can select between the 2D and 3D free-viewpoint modes. Experimental results show that the proposed method yields accurate disparity maps and that the synthesized views are of sufficient quality to provide the user with seamless free-viewpoint video.
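The core operation behind disparity-based virtual view synthesis of this kind can be illustrated with a minimal forward-warping sketch. This is not the paper's system; it is a toy example, assuming a rectified stereo pair where a virtual camera position is parameterized by `alpha` between the two real views, and pixels are shifted by `alpha * disparity`:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp a left view toward an intermediate viewpoint.

    A pixel (y, x) with disparity d is mapped to (y, x - alpha * d),
    where alpha in [0, 1] selects the virtual camera position
    (0 = left view, 1 = right view). Unfilled pixels (holes caused
    by occlusion or out-of-range warps) are marked with -1.
    """
    h, w = left.shape
    out = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            xs = int(round(x - alpha * disparity[y, x]))
            if 0 <= xs < w:
                out[y, xs] = left[y, x]
    return out
```

A real renderer would additionally blend warps from both neighboring views and inpaint the remaining holes; this sketch only shows the geometric shift that the disparity map drives.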
Jaywon Koo (구재원), Dongbo Min (민동보). The Institute of Electronics and Information Engineers, 2021. Proceedings of the 2021 IEIE Conference, Vol.2021 No.6
While stereo matching based on deep networks has shown impressive results on daytime images, performance degrades significantly on nighttime images due to the lack of training data with ground truth and poor illumination conditions. To overcome these issues, numerous methods based on image-to-image translation have been proposed. These approaches, however, often fail to predict accurate depth maps when the domain gap between the source (daytime) and target (nighttime) domains becomes large. In this paper, we propose a novel method for nighttime stereo matching that resolves the performance degradation existing methods suffer under a large domain gap. The large domain gap that often arises between day and night images is addressed with a two-step approach consisting of image-to-image translation and domain adaptation. By utilizing an additional pair of nighttime and daytime datasets with a smaller domain gap, our model learns a better image-to-image translation network, while two jointly trained domain adaptation networks learn to adapt across domains with a large gap. Extensive experiments on various datasets demonstrate that the proposed method outperforms state-of-the-art approaches for nighttime stereo matching by a meaningful margin.
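The translate-then-match pipeline described above can be sketched at a very small scale. The functions below are hypothetical stand-ins, not the paper's networks: the translator is a toy gamma brightening instead of a learned image-to-image model, and the matcher is a per-pixel winner-take-all search instead of a deep stereo network. Only the composition of the two steps mirrors the abstract:

```python
import numpy as np

def translate_night_to_day(img):
    # Image-to-image translation stub: simple gamma brightening.
    # A real system would use a learned translation network here.
    return np.clip(img ** 0.5, 0.0, 1.0)

def stereo_match(left, right, max_disp=4):
    # Per-pixel winner-take-all matching (illustrative only):
    # left pixel (y, x) is compared against right pixels (y, x - d).
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            costs = [abs(left[y, x] - right[y, x - d]) if x - d >= 0 else np.inf
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def nighttime_disparity(left_night, right_night):
    # Two-step pipeline from the abstract: translate both views
    # toward the day domain, then run a (day-domain) stereo matcher.
    return stereo_match(translate_night_to_day(left_night),
                        translate_night_to_day(right_night))
```

The second step of the paper, domain adaptation with jointly trained networks, has no counterpart in this sketch; it would adjust the feature distributions of the matcher rather than the pixels themselves.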