Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images
최윤원, 권기구, 이수인, 최정원, 이석규, Electronics and Telecommunications Research Institute (ETRI), 2014, ETRI Journal Vol.36 No.6
This paper proposes a global mapping algorithm for multiple robots from an omnidirectional-vision simultaneous localization and mapping (SLAM) approach based on an object extraction method using Lucas–Kanade optical flow motion detection and images obtained through fisheye lenses mounted on robots. The multi-robot mapping algorithm draws a global map by using map data obtained from all of the individual robots. Global mapping takes a long time to process because it exchanges map data from individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The calculation load of the correction algorithm is reduced compared with existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created based on an omnidirectional-vision SLAM approach for individual robots. Second, a global map is generated by merging the individual maps from multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
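The second step above, merging individual local maps into a global map, can be sketched as a rigid-body transform of each robot's map into a common frame. The data structures here (obstacle point lists and (x, y, θ) poses) are illustrative assumptions; the abstract does not specify the paper's actual map representation:

```python
import math

def merge_local_maps(local_maps, poses):
    """Merge per-robot local maps into one global point map.

    local_maps: list of lists of (x, y) obstacle points, each in
                its own robot's frame.
    poses:      list of (tx, ty, theta) global robot poses.
    These structures are illustrative stand-ins for the map data
    exchanged between robots in the paper.
    """
    global_map = []
    for points, (tx, ty, theta) in zip(local_maps, poses):
        c, s = math.cos(theta), math.sin(theta)
        for x, y in points:
            # rotate into the global frame, then translate by the pose
            gx = c * x - s * y + tx
            gy = s * x + c * y + ty
            global_map.append((round(gx, 6), round(gy, 6)))
    return global_map
```

In practice the merge would also need to resolve duplicate observations of the same obstacle, which this sketch omits.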
3D Omnidirectional Vision SLAM Using a Fisheye Lens and a Laser Scanner
최윤원 (Choi, Yun Won), 최정원 (Choi, Jeong Won), 이석규 (Lee, Suk Gyu), Institute of Control, Robotics and Systems, 2015, Journal of Institute of Control, Robotics and Systems Vol.21 No.7
This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on fisheye images and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is larger and calculates the depth information for omni-directional images slowly. In this paper, we used a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which can acquire a surround view of the robot at one time. The effectiveness of the proposed method is confirmed through a comparison between maps obtained using the proposed algorithm and real maps.
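One way to compute such a fusion point is to take the obstacle's azimuth and elevation from the fisheye outline pixel and its horizontal range from the laser scan at the same azimuth. This sketch assumes an equidistant fisheye model (r = f·θ), image coordinates already centred on the principal point, and a scan indexed by integer azimuth degree; the abstract does not give the paper's exact camera model or calibration:

```python
import math

def fuse_fisheye_laser(u, v, f, laser_ranges):
    """Fuse one fisheye outline pixel with a 2D laser scan.

    u, v:         pixel offsets from the image centre
    f:            fisheye focal length in pixels (equidistant model)
    laser_ranges: horizontal ranges indexed by azimuth degree [0..359]
    All modelling choices here are illustrative assumptions.
    """
    azimuth = math.atan2(v, u)           # direction around the robot
    r = math.hypot(u, v)                 # radial pixel distance
    theta = r / f                        # angle off the downward optical axis
    deg = int(round(math.degrees(azimuth))) % 360
    d = laser_ranges[deg]                # horizontal range from the scanner
    # 3-D fusion point: planar position from the laser,
    # height from the fisheye elevation angle at that range.
    x = d * math.cos(azimuth)
    y = d * math.sin(azimuth)
    z = d / math.tan(theta) if theta > 0 else 0.0
    return (x, y, z)
```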
Multi-Robot Avoidance Control Based on Omni-Directional Visual SLAM with a Fisheye Lens Camera
최윤원, 최정원, 임성규, Dianwei Qian, 이석규, Korean Society for Precision Engineering, 2018, International Journal of Precision Engineering and Manufacturing Vol.19 No.10
This paper proposes a novel avoidance control algorithm based on omni-directional visual simultaneous localization and mapping (OVSLAM) with a fisheye lens camera. With this algorithm, a robot avoids colliding with an obstacle regardless of the obstacle's state by analyzing the object information obtained from the OVSLAM approach. OVSLAM has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. We therefore propose an improved avoidance and formation control scheme to configure a multi-robot system optimized for OVSLAM. This system creates a global map based on the vector and position information of objects obtained from a local map, and determines the avoidance method according to the type of object, which is classified by analyzing the odometry together with the vector and position information. We carried out formation control experiments in an environment with static obstacles and a dynamic robot, and in an environment with dynamic obstacles and a robot. The reliability of the proposed formation algorithm was verified through a comparison of maps based on the proposed algorithm and real maps while the formation was maintained by real robots.
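The decision step, choosing an avoidance method from the object's classified type, can be sketched as follows. The speed threshold and the two behaviours are illustrative stand-ins for the paper's decision logic, which the abstract describes only at a high level:

```python
def classify_and_avoid(obj_velocity, speed_thresh=0.05):
    """Pick an avoidance behaviour from an object's motion vector.

    obj_velocity: the object's (vx, vy) estimated from its vector
                  information in the map (an assumed representation).
    An object moving slower than speed_thresh is treated as static
    and skirted; otherwise the robot yields until it passes.
    """
    vx, vy = obj_velocity
    speed = (vx * vx + vy * vy) ** 0.5
    if speed < speed_thresh:
        return "detour"      # static obstacle: steer around it
    return "yield"           # dynamic obstacle: slow down / wait
```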
Real-Time Omnidirectional Obstacle Detection Using Background Subtraction on Fisheye Images
최윤원 (Choi, Yun-Won), 권기구 (Kwon, Kee-Koo), 김종효 (Kim, Jong-Hyo), 나경진 (Na, Kyung-Jin), 이석규 (Lee, Suk-Gyu), Institute of Control, Robotics and Systems, 2015, Journal of Institute of Control, Robotics and Systems Vol.21 No.8
This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection using images obtained from a camera, the embedded system installed in a vehicle makes it difficult to apply a complicated algorithm because of its inherently low processing performance. In general, an embedded system needs a system-dependent algorithm because it has lower processing performance than a computer. In this paper, the location of an object is estimated from information on the object's motion, obtained by applying a background subtraction method that compares previous frames with current ones. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas–Kanade optical flow).
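The frame-comparison step above amounts to differencing consecutive grayscale frames and thresholding the result. This minimal sketch uses plain nested lists and a fixed threshold, both illustrative simplifications of the method described in the abstract:

```python
def background_subtraction(prev_frame, curr_frame, thresh=25):
    """Frame-differencing background subtraction on grayscale frames.

    prev_frame, curr_frame: equally sized lists of lists of 0-255
    intensities. Returns a binary mask marking pixels whose intensity
    changed by more than `thresh`, i.e. candidate moving-object pixels.
    """
    mask = []
    for prow, crow in zip(prev_frame, curr_frame):
        mask.append([1 if abs(c - p) > thresh else 0
                     for p, c in zip(prow, crow)])
    return mask
```

Its per-pixel cost is a single subtraction and comparison, which is why differencing suits low-power embedded boards better than optical-flow tracking.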