Effectiveness Evaluation of View-Based Navigation for Obstacle Avoidance
Yoshinobu Hagiwara, Tetsunari Inamura, Yongwoon Choi. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2013 ICROS International Conference, Vol.2013 No.10
In this study, we propose an improved view-based navigation method for obstacle avoidance and evaluate its effectiveness in actual environments containing obstacles. The proposed method can estimate the position and orientation of a mobile robot even when the robot strays from the recorded path to avoid obstacles. To achieve this, ego-motion estimation is incorporated into view-based navigation. The ego-motion is calculated from SURF feature points matched between the current view and a recorded view, using a Kinect sensor. In conventional view-based navigation, it is difficult to generate alternative paths for avoiding obstacles. With the proposed method, a mobile robot is expected to plan flexible paths and avoid people and objects in actual environments. Through experiments performed in an indoor environment, we evaluated the measurement accuracy of the proposed method and confirmed its feasibility for actual robot navigation.
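As an illustration of the core idea, the ego-motion between a recorded view and the current view can be estimated from matched feature points. The sketch below is a hypothetical, simplified 2-D version (a least-squares rigid alignment of matched point sets), not the paper's SURF/Kinect pipeline; the function name and point-list inputs are assumptions for illustration.

```python
import math

def estimate_ego_motion(recorded_pts, current_pts):
    """Estimate a 2-D rigid transform (theta, tx, ty) that maps the
    recorded-view points onto the current-view points, in closed form
    (least-squares over the matched correspondences)."""
    n = len(recorded_pts)
    # Centroids of both point sets.
    cax = sum(p[0] for p in recorded_pts) / n
    cay = sum(p[1] for p in recorded_pts) / n
    cbx = sum(p[0] for p in current_pts) / n
    cby = sum(p[1] for p in current_pts) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(recorded_pts, current_pts):
        ax, ay = ax - cax, ay - cay
        bx, by = bx - cbx, by - cby
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    # Optimal rotation angle, then the translation that aligns centroids.
    theta = math.atan2(sxy - syx, sxx + syy)
    tx = cbx - (math.cos(theta) * cax - math.sin(theta) * cay)
    ty = cby - (math.sin(theta) * cax + math.cos(theta) * cay)
    return theta, tx, ty
```

With noiseless correspondences the closed form recovers the exact rotation and translation; with real SURF matches an outlier-rejection step (e.g. RANSAC) would be needed in front of it.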
Monocular 3D Palm Posture Estimation Based on Feature-Points Robust against Finger Motion
Yoshiaki Mizuchi, Yoshinobu Hagiwara, Akimasa Suzuki, Hiroki Imamura, Yongwoon Choi. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2013 ICROS International Conference, Vol.2013 No.10
The usability of wearable augmented reality (AR) systems would be improved by letting users display virtual information on their palm and simultaneously manipulate it as they would a tablet computer or smartphone. To realize such interaction between users and virtual information, we aim to estimate 3-D palm posture robustly against finger motion. This is based on the assumption that finger motion is estimated separately from palm posture and applied to the manipulation of the displayed information. In addition, the capacity of power sources, sensors, and processors in wearable computers is very limited. For this reason, by using a monocular camera and estimating palm posture from only a few image feature points, we achieve an efficient estimation that satisfies the real-time constraints of wearable computers. The accuracy and robustness of our method are demonstrated by qualitative and quantitative comparison with a widely used cardboard marker. Additionally, we confirmed that our method runs on a mobile computer at an average of 12.44 msec per frame.
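As a hedged illustration of describing palm posture from a few feature points: if the 3-D positions of three non-collinear palm points were available, the palm's orientation could be summarized by the unit normal of their plane. This is a hypothetical simplification for intuition only, not the paper's monocular estimation method, which works from image feature points.

```python
import math

def palm_normal(p0, p1, p2):
    """Unit normal of the plane through three palm feature points,
    computed as the cross product of two edge vectors."""
    u = tuple(b - a for a, b in zip(p0, p1))
    v = tuple(b - a for a, b in zip(p0, p2))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    norm = math.sqrt(sum(c * c for c in n))
    return tuple(c / norm for c in n)
```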
Improvement of Template Matching for Distance Measurement System Based on Image Sensors
Akio Kita, Yoshinobu Hagiwara, Yongwoon Choi, Kazuhiro Watanabe. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2012 ICROS International Conference, Vol.2012 No.10
This paper describes improved template matching for a distance measurement system based on image sensors using a target. The authors have developed a distance measurement system that consists of two movable image sensors for automatic berthing of ships; the system measures distance by detecting and tracking the target. We aim to apply the distance measurement system to automatic relative positioning between a ship and an FPSO. In this application, the shape of the target captured in images is deformed by the relative position and attitude, and measurement error increases due to this deformation. To address the deformation, we propose an improved template matching method that is robust against it. The improved method detects the deformed target using a template database created by perspective-projection transformations of a reference template, which decreases the distance measurement error. From experiments performed in a miniature indoor environment, we confirmed that using the database decreases the measurement error of the relative distance.
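The database-matching idea can be sketched as follows: each template in the database (in the paper, pre-warped variants of a reference template) is compared against every image location by normalized cross-correlation, and the best score over the whole database gives the detected position. This is a minimal, hypothetical brute-force sketch on small grayscale arrays, not the authors' implementation, and the perspective warping itself is omitted.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized 2-D patches."""
    n = len(patch) * len(patch[0])
    pa = [v for row in patch for v in row]
    ta = [v for row in template for v in row]
    mp, mt = sum(pa) / n, sum(ta) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(pa, ta))
    dp = math.sqrt(sum((p - mp) ** 2 for p in pa))
    dt = math.sqrt(sum((t - mt) ** 2 for t in ta))
    return num / (dp * dt) if dp and dt else 0.0

def match_database(image, templates):
    """Slide every template over the image; return (score, x, y, index)
    of the best match across the whole template database."""
    best = (-2.0, 0, 0, -1)
    H, W = len(image), len(image[0])
    for idx, tpl in enumerate(templates):
        h, w = len(tpl), len(tpl[0])
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                patch = [row[x:x + w] for row in image[y:y + h]]
                s = ncc(patch, tpl)
                if s > best[0]:
                    best = (s, x, y, idx)
    return best
```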
Development of a learning support system with PaPeRo
Nozomi Fujiwara, Yoshinobu Hagiwara, Yongwoon Choi. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2012 ICROS International Conference, Vol.2012 No.10
We propose a learning support system that uses a robot for asynchronous learning. The proposed system consists of a teacher, a user, a robot, a personal computer, educational materials, and a web camera. In asynchronous learning, the timing of providing support such as hints and answers is one of the most important factors. Here, we gave the robot this role, with the timing estimated by extracting features of the user's usual facial expressions and behaviors from images captured by the web camera. Thus, the robot can be expected to provide learning support to the user in real time. Specifically, we developed a base system in which the robot provides hints at times estimated by detecting head tilting, a behavior people commonly exhibit when they do not understand something. We evaluated the usefulness of the base system through English-vocabulary learning experiments and questionnaires given to college students. In these experiments, we used the communication robot PaPeRo to observe users' reactions and the effect of the system. This paper discusses the experimental results of the questionnaires and the possibility that users' motivation can be improved by the affinity of the robot in the system.
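The hint-timing idea can be sketched as a simple hypothetical rule: offer a hint once the measured head-tilt angle has stayed above a threshold for a number of consecutive frames. The threshold, hold time, and function name below are illustrative assumptions, not the paper's actual estimator.

```python
def hint_trigger(tilt_angles, threshold_deg=15.0, hold_frames=30):
    """Return the frame indices at which a hint should be offered:
    the head-tilt angle has stayed at or above the threshold for
    `hold_frames` consecutive frames."""
    hints, run = [], 0
    for i, angle in enumerate(tilt_angles):
        # Count consecutive frames of sustained tilt; any upright frame resets.
        run = run + 1 if abs(angle) >= threshold_deg else 0
        if run == hold_frames:
            hints.append(i)
            run = 0  # reset so a single long tilt yields one hint, not one per frame
    return hints
```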
Indoor Human Navigation System on Smartphones using View-Based Navigation
Mitsuaki Nozawa, Yoshinobu Hagiwara, Yongwoon Choi. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2012 ICROS International Conference, Vol.2012 No.10
We present a new human navigation system on smartphones for indoor environments. Conventional systems have required specific positioning infrastructure installed in buildings. In contrast, the proposed system needs no such infrastructure, because it determines the user's position by matching images from the smartphone camera against recorded views of indoor spaces such as corridors. Although a number of personal positioning methods based on image processing have been proposed, their processing cost makes them difficult to apply directly to the proposed system. From this point of view, we focus on view-based navigation, which was developed as an indoor positioning method for robots and uses a simple algorithm with only a single monocular camera. By devising techniques to apply view-based navigation to a smartphone, we can visually provide the user with positional and directional information toward a destination. This paper describes the view-based navigation algorithm sped up for smartphones and shows experimental results for human navigation at normal walking speed in an indoor corridor.
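The core localization step of view-based navigation, and the kind of speed-up that matters on a smartphone, can be sketched as follows: match the current view's descriptor against the recorded view sequence, but search only a small window around the previous match rather than the whole route. The names and the flat-vector descriptor representation are illustrative assumptions, not the paper's implementation.

```python
def localize(current, recorded, prev_idx, window=3):
    """Return the index of the recorded view most similar to the
    current view, by sum-of-squared-differences over descriptor
    vectors, restricted to a window around the previous match."""
    lo = max(0, prev_idx - window)
    hi = min(len(recorded), prev_idx + window + 1)
    best_idx, best_ssd = lo, float("inf")
    for i in range(lo, hi):
        ssd = sum((a - b) ** 2 for a, b in zip(current, recorded[i]))
        if ssd < best_ssd:
            best_idx, best_ssd = i, ssd
    return best_idx
```

Because the user advances along the recorded route frame by frame, the windowed search turns a whole-route comparison into a constant-time one, which is the kind of saving that makes the method feasible on a phone.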
Distance Measurement System using Stereo Camera for Automatic Ship Control
Tadashi Ogura, Yoshiaki Mizuchi, Yoshinobu Hagiwara, Youngbok Kim, Akimasa Suzuki, Yongwoon Choi, Kazuhiro Watanabe. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2013 ICROS International Conference, Vol.2013 No.10
A long-distance measurement system using a stereo camera mounted on a rotation control device is proposed for automatic ship control, and the performance of the system is presented to verify its usefulness. To prevent maritime accidents caused by operator fatigue over long periods, a distance measurement system is often required for automatic control of the posture or berthing of a ship. Here, we discuss a system structure suitable for the operating environment and present its experimental results. In an experiment simulating the distance measurement needed for altering the posture and translational motion between a ship and port facilities, the average error over the range of 20 to 100 m was less than 1 percent.
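The range computation underlying any stereo camera follows the textbook pinhole relation Z = f * B / d (focal length in pixels, baseline in meters, disparity in pixels). The sketch below shows only this relation; the rotation control device and the specifics of the proposed long-range system are not modeled.

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo range to a point: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation
    in meters; disparity_px: horizontal pixel shift between views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The relation also explains why long-range accuracy is hard: at 100 m a target's disparity is small, so sub-pixel disparity errors translate into meters of range error unless the focal length or baseline is large.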
Yoshiaki Mizuchi, Tadashi Ogura, Young-Bok Kim, Yoshinobu Hagiwara, Yongwoon Choi. Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2015 ICROS International Conference, Vol.2015 No.10
In this study, we propose a measurement system consisting of two pairs of multiple cameras mounted on a pan-tilt unit and a landmark installed on the target side. The purpose of the proposed system is to measure the relative position and heading angle of a vessel with respect to a target at close distance, in order to reduce collision risks through automation of vessel positioning, which requires high measurement accuracy and rate. To achieve such measurement, cameras, which have a wide sensing range and high angular resolution, are used instead of GPS, radars, or laser sensors. The proposed system also provides accurate and fast measurement of the distance and direction angle to a target by applying a simple and robust target detection method. The position and heading of the vessel are determined from the two pairs of distance and direction angle. To evaluate the measurement accuracy of the proposed system, we measured the position and heading of the system relative to landmarks under several conditions, including positional displacement and rotation. The experimental results demonstrate that the proposed system can be used for automation of vessel positioning at close distance.
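The final step described above, recovering vessel position and heading from two (distance, direction-angle) measurements to landmarks at known positions, can be sketched in closed form for the planar case. The function below is an illustrative assumption, not the authors' implementation: heading comes from comparing the landmark baseline's angle in the world and body frames, and position from subtracting the rotated body-frame offset.

```python
import math

def vessel_pose(l1, l2, meas1, meas2):
    """Recover vessel position (x, y) and heading psi from (range,
    bearing) pairs, measured in the vessel frame, to two landmarks
    with known world positions l1, l2."""
    # Landmark positions expressed in the vessel (body) frame.
    p1 = (meas1[0] * math.cos(meas1[1]), meas1[0] * math.sin(meas1[1]))
    p2 = (meas2[0] * math.cos(meas2[1]), meas2[0] * math.sin(meas2[1]))
    # Heading: world-frame angle of the landmark baseline minus its body-frame angle.
    psi = (math.atan2(l2[1] - l1[1], l2[0] - l1[0])
           - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    # Position: world landmark minus the rotated body-frame offset.
    x = l1[0] - (math.cos(psi) * p1[0] - math.sin(psi) * p1[1])
    y = l1[1] - (math.sin(psi) * p1[0] + math.cos(psi) * p1[1])
    return x, y, psi
```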