Time-of-Flight Sensor Calibration for a Color and Depth Camera Pair
Jiyoung Jung, Joon-Young Lee, Yekeun Jeong, In So Kweon IEEE 2015 IEEE Transactions on Pattern Analysis and Machine Intelligence Vol.37 No.7
<P>We present a calibration method for a time-of-flight (ToF) sensor and color camera pair that correctly aligns the 3D measurements with the color image. We designed a 2.5D pattern board with irregularly placed holes that can be accurately detected both in the low-resolution depth images of a ToF camera and in high-resolution color images. To improve the accuracy of the ToF camera's 3D measurements, we propose to perform ray correction and range bias correction. Through ray correction, we re-estimate the transformation of the ToF sensor that converts the radial distance into scene depth in Cartesian coordinates. We then capture a planar scene at different depths to correct the distance error, which is shown to depend not only on the distance but also on the pixel location. The range error profiles along the calibrated distance are classified according to their wiggling shapes, and each cluster of profiles with a similar shape is estimated separately using a B-spline function. The standard deviation of the remaining random noise is recorded as uncertainty information for the distance measurements. We demonstrate the performance of our calibration method quantitatively and qualitatively on various datasets, and validate its impact with an RGB-D shape refinement application.</P>
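The per-cluster range bias correction described in the abstract — fitting a B-spline to the "wiggling" error profile and keeping the residual noise as an uncertainty estimate — can be sketched as follows. This is a minimal illustration using SciPy's smoothing spline, with synthetic data standing in for one cluster's measured error profile; the data, smoothing factor, and variable names are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Hypothetical per-cluster data: range error (the "wiggling" bias) measured
# against calibrated distance for one cluster of pixels with similar profiles.
dist = np.linspace(0.5, 4.0, 40)                                 # metres
rng = np.random.default_rng(0)
error = 0.02 * np.sin(2 * np.pi * dist) + 0.005 * rng.standard_normal(40)

# Fit a smoothing cubic B-spline to the error profile (one model per cluster).
tck = splrep(dist, error, s=len(dist) * 0.005**2)

# Correct a new depth measurement by subtracting the predicted range bias.
measured = 2.37
corrected = measured - splev(measured, tck)

# Std. dev. of the residual noise, kept as the cluster's uncertainty estimate.
residual_std = float(np.std(error - splev(dist, tck)))
```

The smoothing factor `s` is set from the assumed noise level so the spline follows the systematic wiggle but not the random noise, which is exactly what lets the residual standard deviation serve as a per-cluster uncertainty.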
Visual lock-on to invisible target for unmanned aerial vehicle
Jihong Min, Jungho Kim, Yekeun Jung, In So Kweon IET 2012 Electronics Letters Vol.48 No.14
<P>Presented is a robust visual lock-on framework for an unmanned aerial vehicle (UAV) that utilises geometric relations between the UAV pose and the 3D local map defined by the positions of the target and natural landmarks. Experimental results using real datasets demonstrate the robustness of the proposed method compared to state-of-the-art visual tracking methods.</P>
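The core geometric idea — using the UAV pose and the target's position in the 3D local map to predict where the target lies in the image even while it is not visible — amounts to projecting a mapped 3D point through the current camera pose. A minimal pinhole-projection sketch, with hypothetical intrinsics and pose values (not the paper's actual system):

```python
import numpy as np

# Hypothetical camera intrinsics and pose; in the framework these would come
# from calibration and the estimated UAV pose relative to the local map.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # world-to-camera rotation
t = np.array([0.0, 0.0, 0.0])            # world-to-camera translation

# Target position in the 3D local map, triangulated from natural landmarks.
X_target = np.array([1.0, 0.5, 5.0])

# Project into the image: even if the target is occluded, its predicted
# image location (u, v) keeps the lock-on anchored.
x = K @ (R @ X_target + t)
u, v = x[:2] / x[2]
```

Maintaining the target as a point in the landmark map, rather than relying on its appearance, is what makes this robust to occlusion: the landmarks constrain the pose, and the pose constrains the target's projection.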