RISS Academic Research Information Service


SCIE / SCOPUS / KCI-indexed

2D-to-3D Conversion System using Depth Map Enhancement

      https://www.riss.kr/link?id=A103334324


Additional Information

Multilingual Abstract

This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for human viewers. Linear and atmospheric perspective cues, which compensate for each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from these cues, the direction angle of the image is estimated, and the depth gradient corresponding to that angle is integrated with superpixels to obtain the depth map. However, the stereoscopic effect of views synthesized from this depth map alone is limited and unsatisfying to viewers. To obtain a more impressive visual effect, the viewer's main focus is taken into account: salient object detection is performed to locate the significance region of visual attention, and the depth map is then refined by locally modifying the depth values within that region. The refinement not only maintains global depth consistency by correcting non-uniform depth values but also enhances the stereoscopic effect. Experimental results show that the subjectively evaluated degree of satisfaction with the proposed method is approximately 7% higher than with both existing commercial conversion software and a state-of-the-art approach.
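
The pipeline sketched in the abstract can be pictured concretely. The following Python sketch is not the authors' implementation: it assumes a fixed direction angle, SLIC superpixels from scikit-image, a rough spectral-residual saliency map in place of the paper's salient object detector, and a hypothetical refinement weight alpha, purely to illustrate how a gradient-based depth map could be assigned per superpixel and then adjusted inside the salient region.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from skimage.color import rgb2gray
from skimage.segmentation import slic

def depth_ramp(shape, angle_rad):
    """Global depth gradient (0 = far, 1 = near) along an assumed direction angle."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    ramp = xs * np.cos(angle_rad) + ys * np.sin(angle_rad)
    return (ramp - ramp.min()) / (np.ptp(ramp) + 1e-8)

def spectral_residual_saliency(gray, sigma=3.0):
    """Rough spectral-residual saliency map (after Hou & Zhang, 2007)."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    sal = gaussian_filter(sal, sigma)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)

def convert_depth(image, angle_rad=np.pi / 2, n_segments=300, alpha=0.3):
    """Return an initial and a saliency-refined depth map for an RGB image.

    angle_rad, n_segments and alpha are illustrative defaults, not values
    taken from the paper.
    """
    gray = rgb2gray(image)
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    ramp = depth_ramp(gray.shape, angle_rad)

    # Integrate the depth gradient with superpixels: one depth value per segment.
    depth = np.zeros_like(ramp)
    for lab in np.unique(labels):
        mask = labels == lab
        depth[mask] = ramp[mask].mean()

    # Locate the significance region and refine depth locally inside it.
    sal = spectral_residual_saliency(gray)
    salient = sal > sal.mean() + sal.std()  # hypothetical threshold
    refined = depth.copy()
    for lab in np.unique(labels[salient]):
        mask = labels == lab
        # Pull the whole salient superpixel toward the viewer (larger = nearer).
        refined[mask] = np.clip(depth[mask] + alpha * sal[mask].mean(), 0.0, 1.0)
    return depth, refined

In the paper the direction angle and depth gradient are derived from the linear and atmospheric perspective cues and the refinement corrects non-uniform depth values within the detected salient object; here both steps are reduced to their simplest form to keep the sketch short.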

References

      1 W. J. Tam., "Three-dimensional TV : A Novel Method for Generating Surrogate Depth Maps using Colour Information" 2009

      2 S. J. Gortler., "The lumigraph" 43-54, 1996

      3 M. Guttmann., "Semi-automatic Stereo Extraction from Video Footage" 136-142, 2009

      4 R. Phan., "Semi-automatic 2D to 3D Image Conversion Using a Hybrid Random Walks and Graph Cuts based Approach" 897-900, 2011

      5 X. Cao., "Semi-Automatic 2D-to-3D Conversion Using Disparity Propagation" 57 (57): 491-499, 2011

      6 X. Hou, "Saliency Detection : A Spectral Residual Approach" 1-8, 2007

      7 J. Kim., "Robust MRF-based Object Tracking and Graph-cut-based Contour Refinement for High Quality 2D to 3D Video Conversion" 358-363, 2011

      8 R. Girshick., "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation" 580-587, 2014

      9 L. Grady, "Random Walks for Image Segmentation" 28 (28): 1768-1783, 2006

      10 X. Sun., "Photo Assessment based on Computational Visual Attention Model" 541-544, 2009

      11 A. Saxena., "Make3D : Learning 3-D Scene Structure from a Single Still Image" 31 (31): 824-840, 2008

      12 M. Levoy., "Light Field Rendering" 31-42, 1996

      13 J. Konrad., "Learning-Based Automatic 2D-to-3D Image and Video Conversion" 22 (22): 3485-3496, 2013

      14 F. Liu., "Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields" PP (PP): 2015

      15 A. Krizhevsky., "ImageNet Classification with deep Convolutional Neural Networks" 309-314, 2012

      16 Ran Liu, "Hole-filling Based on Disparity Map for DIBR" Korean Society for Internet Information (KSII) 6 (6): 2663-2678, 2012

      17 Y. Boykov., "Graph Cuts and Efficient N-D Image Segmentation" 2 (2): 109-131, 2006

      18 K. Calagari., "Gradient-Based 2D-to-3D Conversion for Soccer Videos" 331-340, 2015

      19 C. Rother., "GrabCut : Interactive Foreground Extraction Using Iterated Graph Cuts" 23 (23): 309-314, 2004

      20 M. M. Cheng., "Global Contrast based Salient Region Detection" 2015

      21 A. Maki., "Geotensity : Combining Motion and Lighting for 3d Surface Reconstruction" 75-90, 2002

      22 S. J. Luo., "Geometrically Consistent Stereoscopic Image Editing Using Patch-Based Synthesis" 21 (21): 56-67, 2014

      23 K. Han., "Geometric and Texture Cue Based Depth-map Estimation for 2D to 3D Image Conversion" 651-652, 2011

      24 J. Lee., "Estimating Scene-Oriented Pseudo Depth with Pictorial Depth Cues" 59 (59): 238-250, 2013

      25 P. Felzenszwalb., "Efficient Graph-Based Image Segmentation" 59 (59): 167-181, 2004

      26 M. Liu., "Discrete-Continuous Depth Estimation from a Single Image" 716-723, 2014

      27 C. Fehn, "Depth-image-based rendering (DIBR), Compression and Transmission for a New Approach on 3D-TV" 5291 : 2004

      28 K. Karsch., "Depth Transfer : Depth Extraction from Video Using Non-Parametric Sampling" 36 (36): 2144-2158, 2014

      29 D. Eigen., "Depth Map Prediction From a Single Image Using a Multi-Scale Deep Network" 2366-2374, 2014

      30 Y. L. Chang., "Depth Map Generation for 2D-to-3D conversion by Short-Term Motion Assisted Color Segmentation" 1958-1961, 2007

      31 J. I. Jung., "Depth Map Estimation from Single-View Image using Object Classification based on Bayesian Learning" 1-4, 2010

      32 C. Yao., "Depth Map Driven Hole Filling Algorithm Exploring Temporal Correction Information" 60 (60): 394-404, 2014

      33 H. Tian., "Depth Inference with Convolutional Neural Network" 169-172, 2014

      34 A. Torralba., "Depth Estimation from Image Structure" 24 (24): 1-13, 2002

      35 R. Rzeszutek., "Depth Estimation for Semi-automatic 2D to 3D Conversion" 817-820, 2012

      36 S. Zhuo., "Defocus Map Estimation from a Single Image" 44 (44): 1852-1858, 2011

      37 F. Liu., "Deep Convolutional Neural Fields for Depth Estimation from a Single Image" 5162-5170, 2015

      38 "DDD’s TriDef 3D"

      39 J. Malik., "Computing Local Surface Orientation and Shape from Texture for Curved Surfaces" 23 (23): 149-168, 1997

      40 Y. M. Tsai., "Block-based Vanishing Line and Vanishing Point Detection for 3D Scene Reconstruction" 586-589, 2006

      41 L. M. Po., "Automatic 2D-to-3D Video Conversion Technique based on Depth-from-Motion and Color Segmentation" 1000-1003, 2010

      42 F. Guo., "Automatic 2D-to-3D Image Conversion Based on Depth Map Estimation" 8 (8): 99-112, 2015

      43 "ArcSoft’s Media Converter"

      44 Z. Zhang., "An Interactive System of Stereoscopic Video Conversion" 149-158, 2012

      45 M. Li., "An Improved Virtual View Rendering Method Based on Depth Image" 381-384, 2011

      46 S. Knorr., "An Image-Based Rendering (IBR) Approach for Realistic Stereo View Synthesis of TV Broadcast Based on Structure From Motion" 6 : 572-575, 2007

      47 L. H. Wang., "An Asymmetric Edge Adaptive Filter for Depth Generation and Hole Filling in 3DTV" 56 (56): 425-431, 2010

      48 L. Zhang., "Actively Learning Human Gaze Shifting Paths for Semantics-Aware Photo Cropping" 23 (23): 2235-2245, 2014

      49 S. B. Gokturk., "A Time-of-Flight Depth Sensor, System Description, Issues and Solutions" 2004

      50 Y. Lu., "A Survey of Motion-parallax based 3-d Reconstruction Algorithms" 34 : 532-548, 2004

      51 D. Kim., "A Stereoscopic Video Generation Method Using Stereoscopic Display Characterization and Motion Analysis" 54 (54): 188-197, 2008

      52 C. L. Su., "A Real-time Full-HD 2D-to-3D Conversion System Using Multicore Technology" 273-276, 2011

      53 S. F. Tsai., "A Real-Time 1080p 2D-to-3D Video Conversion System" 57 (57): 915-922, 2011

      54 Y. J. Jung., "A Novel 2D-to-3D Conversion Technique based on Relative Height Depth Cue" 7234 : 2009

      55 C. C. Cheng., "A Novel 2D-to-3D Conversion System Using Edge Information" 56 (56): 1739-1745, 2010

      56 G. S. Lin., "A 2D to 3D Conversion Scheme Based On Depth Cues Analysis For MPEG Videos" 1141-1145, 2010

      57 W. J. Tam., "3D-TV Content Generation : 2D-to-3D Conversion" 1869-1872, 2006

      58 L. Zhang., "3D-TV Content Creation : Automatic 2D-to-3D Video Conversion" 57 (57): 372-383, 2011

      59 A. E. Welchman., "3D Shape Perception from Combined Depth Cues in Human Visual Cortex" 8 : 820-827, 2005

      60 A. Saxena., "3-D Depth Reconstruction from a Single Still Image" 76 (76): 53-69, 2007

      61 J. Ko., "2D-to-3D Stereoscopic Conversion : Depth-Map Estimation in a 2D Single-View Image" 2007

Journal History

Date       | Event                | Details                                                              | Indexing status
           | Journal registration | Korean title: KSII Transactions on Internet and Information Systems; foreign title: KSII Transactions on Internet and Information Systems |
2023       | Evaluation scheduled | Subject to application for overseas-DB journal evaluation (overseas-indexed journal evaluation) |
2020-01-01 | Evaluation           | KCI-indexed journal maintained (overseas-indexed journal evaluation) | KCI-indexed
2013-10-01 | Evaluation           | Selected as KCI-indexed journal (other)                              | KCI-indexed
2011-01-01 | Evaluation           | KCI candidate journal maintained (other)                             | KCI candidate
2009-01-01 | Evaluation           | Indexed in SCOPUS (new evaluation)                                   | KCI candidate

Journal Citation Information

Base year: 2016
WOS-KCI combined IF (2-year): 0.45
KCI IF (2-year): 0.21
KCI IF (3-year): 0.37
KCI IF (4-year): 0.32
KCI IF (5-year): 0.29
Centrality index (3-year): 0.244
Immediacy index: 0.03
