Multisensor Fusion Approach for Vision Aided Navigation of Autonomous Mobile Robot
Taeseok Jin, Korean Institute of Information and Communication Engineering, 2014, 2016 INTERNATIONAL CONFERENCE, Vol.6 No.1
This paper describes a sensor fusion-based navigation method for an autonomous mobile robot that can navigate and avoid obstacles in an indoor environment. Stationary obstacles are avoided with active camera vision, and the environment is recognized with a sonar ring. We report on experiments in a hallway using the AmigoBot. We also propose a sensor fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurements, such as the distance to an obstacle and the location of the mobile robot itself. Experimental evidence is provided, demonstrating that the proposed method can be used reliably over a wide range of relative positions between the active camera and the feature images.
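The core idea of the abstract, transforming past measurements into the current robot frame before fusing them with fresh data, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the 2D planar-motion model, and the fixed fusion weight are all assumptions.

```python
import math

def transform_to_current(prev_meas, dx, dy, dtheta):
    """Transform a 2D point measured in a previous robot frame into the
    current frame, given the robot motion (dx, dy, dtheta) since then."""
    x, y = prev_meas
    # undo the translation, then rotate by the heading change
    xs, ys = x - dx, y - dy
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * xs - s * ys, s * xs + c * ys)

def fuse(current, transformed_prev, w_current=0.7):
    """Weighted average of the current and motion-compensated previous
    measurements; the weight reflects relative confidence in each."""
    return tuple(w_current * c + (1 - w_current) * p
                 for c, p in zip(current, transformed_prev))

# Robot advanced 1 m straight ahead; an obstacle seen 3 m ahead earlier
# should now appear 2 m ahead, and fusing refines the current reading.
prev_in_current = transform_to_current((3.0, 0.0), dx=1.0, dy=0.0, dtheta=0.0)
fused = fuse((2.1, 0.0), prev_in_current, w_current=0.5)
```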
3D Walking Human Detection and Tracking based on the IMPRESARIO Framework
TaeSeok Jin, Hideki Hashimoto, Korean Institute of Intelligent Systems, 2008, International Journal of Fuzzy Logic and Intelligent Systems, Vol.8 No.3
In this paper, we propose a real-time people tracking system with multiple CCD cameras for security inside a building. The cameras are mounted on the ceiling of the laboratory so that the image data of passing people fully overlap. The implemented system recognizes people moving in various directions. To track people even when their images partially overlap, the proposed system estimates and tracks a bounding box enclosing each person in the tracking region. The approximated convex hull of each individual in the tracking area is obtained to provide more accurate tracking information. To achieve this goal, we propose a method for 3D walking human tracking based on the IMPRESARIO framework, incorporating cascaded classifiers into hypothesis evaluation. The efficiency of adaptive selection of cascaded classifiers is also presented, and we show the improvement in the reliability of likelihood calculation obtained by using cascaded classifiers. Experimental results show that the proposed method can smoothly and effectively detect and track walking humans through environments such as dense forests.
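The approximated convex hull of each tracked person mentioned above can be computed with a standard 2D algorithm; a common choice is Andrew's monotone chain, shown here as a generic sketch (the paper does not state which hull algorithm it uses).

```python
def convex_hull(points):
    """Return the convex hull of 2D points in counter-clockwise order
    (Andrew's monotone chain); collinear interior points are dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate, omitting the endpoints repeated in both chains.
    return lower[:-1] + upper[:-1]

# A person silhouette reduced to foreground pixels: the interior point
# (1, 1) is discarded and only the four corners remain.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```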
Integrated Task Planning based on Mobility of Mobile Manipulator (M2) Platform
TaeSeok Jin, Hyun-Sik Kim, Jong-Wook Kim, Korean Institute of Intelligent Systems, 2009, International Journal of Fuzzy Logic and Intelligent Systems, Vol.9 No.3
This paper presents an optimized integrated task planning and control approach for manipulating a nonholonomic mobile manipulator. We derive a kinematic model and the mobility of the mobile manipulator (M2) platform, considering it as the combined system of the manipulator and the mobile robot. To improve task execution efficiency by utilizing the redundancy, the optimal trajectory of the M2 platform is maintained while it moves to a new task point. A cost function for optimality is defined as a combination of the squared errors between the desired and actual configurations of the mobile robot and of the task robot. In combining the two squared errors, a newly defined mobility of the mobile robot is used as a weighting index. With the aid of the gradient method, the cost function is minimized, so the path trajectory generated by the M2 platform is optimized. Simulation results for a 2-link planar nonholonomic M2 platform are given to show the effectiveness of the proposed algorithm.
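The mobility-weighted cost and its gradient-method minimization can be sketched in scalar form. This toy version, with one configuration variable per subsystem and a constant mobility weight, is an assumption for illustration; the paper's actual configurations are multi-dimensional.

```python
def cost(q, q_mobile_des, q_task_des, mobility):
    """Combined squared configuration error of the mobile base and the
    task robot, weighted by the mobility index of the mobile base."""
    qm, qt = q
    return (mobility * (qm - q_mobile_des) ** 2
            + (1.0 - mobility) * (qt - q_task_des) ** 2)

def gradient_step(q, q_mobile_des, q_task_des, mobility, lr=0.1):
    """One gradient-descent update of both configuration variables."""
    qm, qt = q
    grad_qm = 2.0 * mobility * (qm - q_mobile_des)
    grad_qt = 2.0 * (1.0 - mobility) * (qt - q_task_des)
    return (qm - lr * grad_qm, qt - lr * grad_qt)

# Drive both subsystems from (0, 0) toward desired configs (1, 2).
q = (0.0, 0.0)
for _ in range(200):
    q = gradient_step(q, 1.0, 2.0, mobility=0.5)
final_cost = cost(q, 1.0, 2.0, 0.5)
```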
Jin, TaeSeok, Morioka, Kazuyuki, Hashimoto, Hideki, Korean Institute of Intelligent Systems, 2004, International Journal of Fuzzy Logic and Intelligent Systems, Vol.4 No.2
Robots will be able to coexist with humans and support humans effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification and robot localization in Intelligent Space in order to achieve such a human-centered system. Intelligent Space is a space in which many intelligent devices, such as computers and sensors, are distributed; it achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module, which includes processing and networking parts, has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects in order to provide appropriate services to users in the multi-camera environment. To achieve seamless tracking and location estimation, many camera modules are distributed, which causes errors in object identification among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
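A color appearance model for consistent labeling across camera modules is commonly built from a normalized color histogram compared by histogram intersection. The sketch below uses that standard scheme as an assumption; the paper does not specify this exact representation, and the bin count and function names are illustrative.

```python
def color_histogram(pixels, bins=4):
    """Coarse normalized RGB histogram of a pixel region, usable as a
    per-object appearance model shared between camera modules."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 for identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# The same red-shirted person seen by two cameras matches itself,
# while a green-shirted person scores zero.
red_a = color_histogram([(255, 0, 0)] * 10)
red_b = color_histogram([(250, 5, 5)] * 10)
green = color_histogram([(0, 255, 0)] * 10)
```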
DIND Data Fusion with Covariance Intersection in Intelligent Space with Networked Sensors
TaeSeok Jin, Hideki Hashimoto, Korean Institute of Intelligent Systems, 2007, International Journal of Fuzzy Logic and Intelligent Systems, Vol.7 No.1
Recent advances in networked sensor technology, state-of-the-art mobile robotics, and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. This study is a preliminary step toward developing a multi-purpose "Intelligent Space" (ISpace) platform on which advanced technologies can easily be implemented to realize smart services for humans. We explain the ISpace system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers already cover this topic. Instead, we focus on the main results relevant to DIND data fusion with covariance intersection (CI) in Intelligent Space, and conclude by discussing some possible future extensions of ISpace. We first deal with the general principles of the navigation and guidance architecture, and then with the detailed functions of tracking multiple objects, human detection, and motion assessment, together with results from the simulation runs.
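Covariance intersection fuses two estimates whose cross-correlation is unknown via a convex combination of their inverse covariances. The scalar sketch below illustrates the standard CI equations; the function names and the grid search for the weight omega are my own, and the paper's DIND fusion operates on full covariance matrices rather than scalars.

```python
def ci_fuse(xa, Pa, xb, Pb, omega):
    """Scalar covariance intersection:
    P_ci^-1 = w/Pa + (1-w)/Pb,  x_ci = P_ci * (w*xa/Pa + (1-w)*xb/Pb)."""
    P_inv = omega / Pa + (1.0 - omega) / Pb
    P = 1.0 / P_inv
    x = P * (omega * xa / Pa + (1.0 - omega) * xb / Pb)
    return x, P

def best_omega(Pa, Pb, steps=100):
    """Grid-search the weight that minimizes the fused covariance
    (the scalar analogue of minimizing the determinant)."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda w: 1.0 / (w / Pa + (1.0 - w) / Pb))

# Two position estimates of the same object from different DINDs.
x_fused, P_fused = ci_fuse(0.0, 1.0, 2.0, 4.0, omega=0.5)
```

Note that in the scalar case the optimal omega degenerates to selecting the lower-variance estimate outright; the convex combination only pays off for matrix covariances with differing principal directions.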
Trajectory Generation of a Moving Object for a Mobile Robot in Predictable Environment
TaeSeok Jin, JangMyung Lee, Korean Society for Precision Engineering, 2004, International Journal of Precision Engineering and Manufacturing, Vol.5 No.1
In the field of machine vision using a single camera mounted on a mobile robot, the detection and tracking of moving objects from a moving observer is a complex and computationally demanding task. In this paper, we propose a new scheme for a mobile robot to track and capture a moving object using camera images. The system consists of the following modules: data acquisition, feature extraction and visual tracking, and trajectory generation. A single camera is used as the visual sensor to capture image sequences of the moving object. The moving object is assumed to be a point object and is projected onto an image plane to form a geometric constraint equation that provides position data of the object based on the kinematics of the active camera. Uncertainties in the position estimation caused by the point-object assumption are compensated using the Kalman filter. To generate the shortest-time trajectory for capturing the moving object, the linear and angular velocities are estimated and utilized. Experimental results of tracking and capturing the target object with the mobile robot are presented.
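The Kalman-filter compensation mentioned above follows the usual predict/update cycle. A minimal scalar sketch, assuming a 1D position state with known motion input and made-up noise values, looks like this (the paper's filter would track the full object state):

```python
def kalman_predict(x, P, u=0.0, Q=0.01):
    """Predict: propagate the state by the motion input u and
    inflate the covariance by the process noise Q."""
    return x + u, P + Q

def kalman_update(x_pred, P_pred, z, R):
    """Update: blend the prediction with measurement z of variance R
    using the Kalman gain K."""
    K = P_pred / (P_pred + R)
    x = x_pred + K * (z - x_pred)
    P = (1.0 - K) * P_pred
    return x, P

# Object believed at 0 m moves ~1 m; a noisy camera reading says 1.2 m.
x_pred, P_pred = kalman_predict(0.0, 1.0, u=1.0)
x_est, P_est = kalman_update(x_pred, P_pred, z=1.2, R=1.0)
```

The estimate lands between the prediction and the measurement, and the covariance shrinks after the update, which is exactly the compensation effect the abstract relies on.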
TaeSeok Jin, MinJung Lee, Hideki Hashimoto, Korean Institute of Intelligent Systems, 2006, International Journal of Fuzzy Logic and Intelligent Systems, Vol.6 No.4
In this paper, a sensor fusion-based robot navigation method for the autonomous control of a miniature human-interaction robot is presented. The navigation method blends the optimality of a Fuzzy Neural Network (FNN) based control algorithm with the knowledge-representation and learning capabilities of the networked Intelligent Robotic Space (IRS). States of the robot and the IR space, for example the distance between the mobile robot and obstacles and the velocity of the mobile robot, are used as the inputs of the fuzzy logic controller. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a sensor fusion technique is introduced in which the sensory data of ultrasonic sensors and a vision sensor are fused in the identification process. Preliminary experiments and results demonstrate the merit of the introduced navigation control algorithm.
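The combination of goal-approach and obstacle-avoidance rules can be illustrated with a single fuzzy membership function blending two candidate headings. This is a deliberately tiny sketch with invented names and a triangular "near" membership; the paper's FNN controller uses a full tuned rule base.

```python
def mu_near(distance, d_max=1.0):
    """Membership of 'obstacle is near': 1.0 at contact,
    falling linearly to 0.0 at d_max meters."""
    return max(0.0, min(1.0, 1.0 - distance / d_max))

def fuzzy_steer(obstacle_dist, goal_heading, avoid_heading):
    """Blend the goal-approach and obstacle-avoidance headings by the
    'near' membership: the closer the obstacle, the more avoidance wins."""
    w = mu_near(obstacle_dist)
    return w * avoid_heading + (1.0 - w) * goal_heading

# Far from obstacles the robot heads to the goal; at mid range the
# two behaviors are blended; at contact avoidance dominates.
far = fuzzy_steer(2.0, goal_heading=0.1, avoid_heading=1.0)
mid = fuzzy_steer(0.5, goal_heading=0.0, avoid_heading=1.0)
```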