An Optical Approach to Increasing the Resolution of Stereoscopic Images Reconstructed with a Spherical Lenticular Lens Sheet in a Projection-type Autostereoscopic Multi-view 3D Display
Sohn, Young-Sub; Kim, Sung-Kyu; Sohn, Kwanghoon; Lee, Kwang-Hoon — Optical Society of Korea, 2012, Korean Journal of Optics and Photonics Vol.23 No.4
Autostereoscopic multi-view 3D displays built on display devices with a fixed pixel count suffer from reduced stereoscopic image resolution as the number of views increases. To address this, we propose an optical approach for a projection-type autostereoscopic multi-view 3D display system that uses a commercial spherical lenticular lens sheet to focus the width of each unit pixel, increasing the effective resolution as the number of light sources grows. The method proceeds by defining the main parameters derivable from the given system environment and then, through theoretical and experimental results, deriving the achievable reduction in unit pixel width and the corresponding gain in effective resolution. As a result, when the light from the projector passed through a 25 LPI lenticular lens sheet with a 1.016 mm pitch, the focused unit pixel width (beam waist) was 0.19 mm and the effective resolution could be extended by up to a factor of five. In addition, the depth of focus was 1.496 mm, which comfortably covers the thickness tolerance of commercial spherical lenticular lens sheets and the alignment tolerance of the optical system.
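The quoted numbers are self-consistent under a simple assumption (not stated explicitly in the abstract) that the effective resolution gain is roughly the lens pitch divided by the focused beam waist. A minimal arithmetic check:

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
# Assumption (ours, for illustration): effective resolution gain ~ pitch / beam waist.
LPI = 25                        # lenticular lens sheet density (lenses per inch)
pitch_mm = 25.4 / LPI           # 25 LPI -> 1.016 mm lens pitch, as reported
beam_waist_mm = 0.19            # focused unit-pixel width reported in the paper

gain = pitch_mm / beam_waist_mm # how many focused pixels fit under one lens
print(round(pitch_mm, 3))       # 1.016
print(int(gain))                # 5
```

The integer part of the ratio (about 5.3) matches the reported maximum five-fold extension of the effective resolution.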
Predictive Virtual Lane Method using Relative Motions between a Vehicle and Lanes
손영섭; 정정주; 김원희; 이승희 — Institute of Control, Robotics and Systems (ICROS), 2015, International Journal of Control, Automation, and Vol.13 No.1
We propose a new approach to virtual lane prediction. The main contribution of the proposed method is that the predicted virtual lane can substitute for camera-based lane detection when the camera image processing fails to detect the lane. The proposed method generates the predicted virtual lane from the relative motion between the vehicle and the lane. The lane is modeled as a third-order polynomial function of the longitudinal distance. Each coefficient of the lane polynomial at the next sampling time is calculated geometrically from the relative motion of the vehicle and the lanes and from the vehicle's longitudinal velocity and yaw at the present time. The predicted virtual lane at the next sampling time is thus obtained without lane information from the camera sensor at that time. The method is simple enough to be suitable for practical implementation, and its performance was evaluated in experiments with a test vehicle.
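The prediction step described above can be sketched as follows. This is a minimal illustration, not the paper's exact derivation: the paper computes the next-step coefficients geometrically in closed form, while here we approximate the same rigid-motion update by sampling the current lane polynomial, transforming the points into the vehicle frame at the next sampling time, and refitting a cubic. The function name, look-ahead range, and sample count are our assumptions.

```python
import numpy as np

def predict_virtual_lane(coeffs, v, yaw_rate, dt):
    """Predict the lane polynomial coefficients at the next sampling time.

    coeffs:   [c0, c1, c2, c3] of y = c0 + c1*x + c2*x^2 + c3*x^3 (vehicle frame)
    v:        longitudinal velocity [m/s]
    yaw_rate: yaw rate [rad/s]
    dt:       sampling time [s]
    """
    x = np.linspace(0.0, 50.0, 50)        # look-ahead samples along the lane [m]
    y = np.polyval(coeffs[::-1], x)       # lane points in the current vehicle frame
    dx, dpsi = v * dt, yaw_rate * dt      # vehicle advance and yaw over one step
    # Rigidly transform the lane points into the vehicle frame at t + dt:
    c, s = np.cos(-dpsi), np.sin(-dpsi)
    xs = c * (x - dx) - s * y
    ys = s * (x - dx) + c * y
    # Refit a third-order polynomial; return coefficients as [c0, c1, c2, c3].
    return np.polyfit(xs, ys, 3)[::-1]

# A straight, centered lane stays centered under pure longitudinal motion:
print(np.round(predict_virtual_lane([0.0, 0.0, 0.0, 0.0], 20.0, 0.0, 0.1), 6))
```

Because only the ego-motion (velocity, yaw rate) is needed, this update can run at every sampling instant in which the camera fails, which is the substitution behavior the abstract describes.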