Visual-Inertial Odometry Priors for Bundle-Adjusting Neural Radiance Fields
Hyunjin Kim, Minkyeong Song, Daekyeong Lee, Pyojin Kim. Institute of Control, Robotics and Systems (ICROS), 2022 Proceedings of the ICROS International Conference, Vol.2022 No.11
We present bundle-adjusting Neural Radiance Fields (BARF) with motion priors. Neural Radiance Fields (NeRF) have opened up tremendous potential for neural volume rendering and 3D scene representation, owing to their ability to synthesize photo-realistic novel views. BARF mitigates NeRF's reliance on accurate 6-DoF camera poses, enabling scene learning from inaccurate camera poses. However, methods such as BARF that are initialized far from the optimal solution can easily fall into local minima. We incorporate visual-inertial odometry (VIO) motion priors into BARF, which jointly optimizes the 3D scene representation and camera poses, yielding higher view-synthesis accuracy and more stable motion estimates. The proposed method outperforms the original BARF on real-world data, demonstrating the effectiveness of incorporating motion priors.
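The idea of regularizing jointly optimized camera poses with VIO motion priors can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: poses are simplified to 2D (x, y, yaw), the NeRF photometric loss is treated as an opaque scalar, and the function names (`motion_prior_residual`, `total_loss`) are invented for illustration. The prior term penalizes deviation of consecutive relative poses from the VIO-estimated relative motion.

```python
import numpy as np

def motion_prior_residual(poses, vio_rel_poses):
    """Residuals between optimized and VIO-predicted relative motions.

    poses: (N, 3) toy camera poses (x, y, yaw).
    vio_rel_poses: (N-1, 3) relative motions from VIO between
    consecutive frames.
    """
    residuals = []
    for i in range(len(poses) - 1):
        rel = poses[i + 1] - poses[i]      # toy relative motion
        residuals.append(rel - vio_rel_poses[i])
    return np.array(residuals)

def total_loss(poses, vio_rel_poses, photometric_loss, weight=0.1):
    """Joint objective: NeRF photometric term plus weighted VIO prior.

    photometric_loss: scalar rendering loss, treated as given here.
    """
    prior = np.sum(motion_prior_residual(poses, vio_rel_poses) ** 2)
    return photometric_loss + weight * prior
```

When the optimized poses agree with the VIO motion, the prior term vanishes and only the photometric loss remains; as the poses drift from the VIO estimate, the prior grows and pulls the optimization back toward a physically plausible trajectory, which is how a motion prior can keep a BARF-style joint optimization out of poor local minima.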
Sangil Lee, Pyojin Kim, Changhyeon Kim, Hyeonbeom Lee, H. Jin Kim. Institute of Control, Robotics and Systems (ICROS), 2017 Journal of Institute of Control, Robotics and Systems, Vol.23 No.6
This paper surveys visual odometry technology for unmanned systems. Visual odometry is one of the most important technologies for vision-based navigation and has therefore been widely applied to unmanned systems in recent years. Visual odometry estimates the trajectory and pose of the system, and it can be classified as 1) stereo vs. monocular, 2) feature-based (indirect) vs. direct, and 3) linear vs. nonlinear, according to the number of cameras, the information attributes used, and the optimization process, respectively. In this paper, we discuss state-of-the-art issues in visual odometry research and summarize future directions for the field.
Parsing Indoor Manhattan Scenes Using Four-Point LiDAR on a Micro UAV
Eunju Jeong, Suyoung Kang, Daekyeong Lee, Pyojin Kim. Institute of Control, Robotics and Systems (ICROS), 2022 Proceedings of the ICROS International Conference, Vol.2022 No.11
We propose the first 3D mapping algorithm using a four-point LiDAR for a micro unmanned aerial vehicle (UAV). Existing mapping approaches depend on 360° 2D laser scanners and RGB-D cameras, which are unsuitable for micro UAVs with small payloads. The proposed method builds a 3D structure map from a point cloud accumulated from four low-cost, lightweight ToF sensors suitable for a micro UAV, facing in four directions: front, back, left, and right. Noisy range measurements from the low-cost ToF sensors and inaccurate 6-DoF pose estimates from the Crazyflie produce a noisy point cloud. We overcome these problems by exploiting the geometric constraints of interior structures, the Manhattan world (MW), and the proposed method successfully parses the floor plans of Manhattan scenes. We evaluate the proposed method on various MW structures and demonstrate that it produces results comparable to the ROS Gmapping algorithm, which uses a 360° 2D laser scanner.
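The Manhattan-world constraint mentioned above can be sketched in a few lines. This is a hedged illustration under simplified assumptions, not the paper's code: it takes a noisy 2D wall-orientation estimate and snaps it to the nearest of the two orthogonal dominant directions of a Manhattan scene; the function name `snap_to_manhattan` is invented for illustration.

```python
import numpy as np

def snap_to_manhattan(angle):
    """Snap a noisy wall heading (radians) to the nearest Manhattan
    axis, i.e. to the nearest multiple of 90 degrees relative to the
    scene's dominant direction.
    """
    folded = angle % (np.pi / 2)  # fold the heading into [0, 90°)
    # move down to the lower axis or up to the upper one,
    # whichever is closer within the folded interval
    correction = -folded if folded < np.pi / 4 else np.pi / 2 - folded
    return angle + correction
```

Applying such a snap to wall segments fitted from the accumulated ToF point cloud is one simple way the MW assumption can suppress both range noise and pose drift: every wall is forced to be parallel or perpendicular to the dominant structural axes before the floor plan is assembled.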