RISS Academic Research Information Service

      • KCI-indexed

        Design and Application of an Embedded System-Based Vision Box

        이종혁(Lee, Jong-Hyeok) 한국정보통신학회 2009 한국정보통신학회논문지 Vol.13 No.8

        A vision system captures image information through a camera and analyzes it to recognize objects, and it is used in a wide range of industrial settings, including vehicle type classification. Many studies on vehicle type classification have accordingly been carried out, but their complex computations lead to long processing times. In this paper, we design a Vision Box based on an embedded system and propose a vehicle type recognition system that uses it. In a pre-test of the proposed Vision Box on vehicle type classification, it achieved a 100% recognition rate for each vehicle type under optimized environmental conditions; under small changes in lighting and rotation, vehicle types could still be recognized, although the pattern score decreased. Applying the proposed Vision Box system in an industrial site confirmed that it can satisfy industrial requirements for processing time and recognition rate.
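
        The abstract does not spell out the Vision Box's recognition algorithm, so the following is only an illustrative sketch of pattern-score-based vehicle type classification of the kind described, using normalized template matching; the template file names and the 0.8 threshold are hypothetical assumptions, not taken from the paper.

            # Illustrative sketch: classify a vehicle image by its best normalized
            # template-match score, loosely mirroring the "pattern score" idea above.
            # Template file names and the 0.8 threshold are hypothetical assumptions.
            import cv2

            TEMPLATES = {"sedan": "sedan.png", "suv": "suv.png", "truck": "truck.png"}

            def classify_vehicle(image_path, threshold=0.8):
                frame = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
                best_label, best_score = None, -1.0
                for label, tpl_path in TEMPLATES.items():
                    tpl = cv2.imread(tpl_path, cv2.IMREAD_GRAYSCALE)
                    # Normalized cross-correlation: 1.0 means a perfect match.
                    result = cv2.matchTemplate(frame, tpl, cv2.TM_CCOEFF_NORMED)
                    _, score, _, _ = cv2.minMaxLoc(result)
                    if score > best_score:
                        best_label, best_score = label, score
                # Lighting or rotation changes lower the score, as the abstract reports.
                return (best_label, best_score) if best_score >= threshold else (None, best_score)

        Because the score degrades gracefully under small lighting and rotation changes, a threshold rather than an exact match is used in this sketch.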

      • KCI-indexed

        Analysis of Optimal Fusion of GNSS and a Vision System

        박지호(Chi-Ho Park),김남혁(Nam-Hyeok Kim),박경용(Kyoung-Yong Park) 대한전자공학회 2015 전자공학회논문지 Vol.52 No.3

        This paper analyzes the optimal vision system and a reliable high-precision positioning scheme that fuses GNSS with a vision system in order to resolve position error and outdoor shaded areas, two weaknesses of GNSS. Position determination requires signals from at least four GNSS satellites, but in urban areas accurate positioning is difficult because of high-rise buildings, obstacles, and reflected signals. To solve this problem, a vision system is used: accurate position values are assigned in advance to target objects in urban areas where GNSS reception is poor, the vision system recognizes those target objects, and the recognized objects are used to correct the position error. While moving, the vehicle recognizes target objects with the vision system, generates position data, and revises its position calculation, which enables stable and reliable high-precision positioning.

      • KCI-indexed

        A Study on Reliable Positioning Based on the Fusion of Satellite Navigation and Vision Systems

        박지호(Chi-Ho Park),권순(Soon Kwon),이충희(Chung-Hee Lee),정우영(Woo-Young Jung) 大韓電子工學會 2011 電子工學會論文誌-TC (Telecommunications) Vol.48 No.10

        This paper proposes a reliable high-precision positioning technique that fuses a satellite navigation system with a vision system in order to resolve position error and outdoor shaded areas, two weaknesses of satellite navigation. In kinematic point positioning, the number of navigation satellites a moving object can use changes with its position, and position determination requires data from at least four satellites; in urban areas, however, accurate positioning is difficult because of high-rise buildings, obstacles, and reflected signals. To solve this problem, a vision system is used: accurate position values are assigned in advance to specific buildings in urban areas where satellite navigation is poor, the vision system recognizes those buildings, and the recognized buildings are used to correct the position error. While moving, the object recognizes the specific buildings with the vision system, generates position data, and revises its position calculation, which enables stable and reliable high-precision positioning.

      • KCI-indexed

        A vision-based system for long-distance remote monitoring of dynamic displacement: experimental verification on a supertall structure

        Yi-Qing Ni,You-Wu Wang,Wei-Yang Liao,Wei-Huan Chen 국제구조공학회 2019 Smart Structures and Systems, An International Journal Vol.24 No.6

        Dynamic displacement response of civil structures is an important index for in-construction and in-service structural condition assessment. However, accurately measuring the displacement of large-scale civil structures such as high-rise buildings remains a challenging task. To cope with this problem, a vision-based system using an industrial digital camera and image processing has been developed for long-distance, remote, and real-time monitoring of the dynamic displacement of supertall structures. Instead of acquiring full image signals, the proposed system traces only the coordinates of the target points, enabling real-time monitoring and display of displacement responses at a relatively high sampling rate. This study addresses the in-situ experimental verification of the developed vision-based system on the 600 m tall Canton Tower. To facilitate the verification, a GPS system is used to calibrate/verify the structural displacement responses measured by the vision-based system, while an accelerometer deployed in the vicinity of the target point provides frequency-domain information for comparison. Special attention is given to understanding the influence of the surrounding light on the monitoring results; for this purpose, tests are conducted in daytime and nighttime by placing the vision-based system outside the tower (in a bright environment) and inside the tower (in a dark environment), respectively. The results indicate that the displacement response time histories monitored by the vision-based system not only match well with those acquired by the GPS receiver, but also have higher fidelity and are less noise-corrupted. In addition, the low-order modal frequencies of the building identified from the vision-based data agree well with those obtained from the accelerometer, the GPS receiver, and an elaborate finite element model. In particular, the vision-based system placed at the bottom of the enclosed elevator shaft offers better monitoring data than the system placed outside the tower. Based on a wavelet filtering technique, the displacement response time histories obtained by the vision-based system are readily decomposed into two parts: a quasi-static component primarily resulting from temperature variation and a dynamic component mainly caused by fluctuating wind load.
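
        As a purely illustrative sketch of the wavelet-based decomposition mentioned at the end of the abstract (not the authors' code), a displacement time history can be split into a quasi-static trend and a dynamic remainder with PyWavelets; the wavelet name and decomposition depth below are assumptions.

            # Illustrative sketch: separate a displacement record into a quasi-static
            # trend (e.g., temperature-driven drift) and a dynamic, wind-induced part.
            # The wavelet ("db4") and decomposition level are assumed, not from the paper.
            import numpy as np
            import pywt

            def split_displacement(disp, wavelet="db4", level=8):
                disp = np.asarray(disp, dtype=float)
                coeffs = pywt.wavedec(disp, wavelet, level=level)
                # Keep only the coarsest approximation -> quasi-static component.
                coeffs_trend = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
                quasi_static = pywt.waverec(coeffs_trend, wavelet)[: len(disp)]
                dynamic = disp - quasi_static   # remainder -> dynamic component
                return quasi_static, dynamic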

      • Computer Vision System Integration in Manufacturing Environments Based on Active Objects

        이동우,장덕진,이용진 우송대학교 1997 우송대학교 논문집 Vol.2 No.-

        In a manufacturing environment, system integration does not simply mean that component systems operate together; it means that they work together to meet the company's business objectives, with each component behaving intelligently and actively cooperating with the others. This paper examines the integration of manufacturing systems from the perspective of the computer vision system, whose applications have been growing rapidly. To achieve such integration, an active object paradigm is proposed that subsumes the conventional object-oriented paradigm. The integrated system is built as a collection of objects classified into active objects and passive objects. An initial framework for a prototype implementation of an integrated computer vision system is also presented.
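
        As a purely illustrative sketch of the "active object" idea described above (not code from the paper), an active object can be modeled as an object that owns its own worker thread and message queue, so it cooperates with other components asynchronously instead of being invoked passively; the class and message names are hypothetical.

            # Illustrative sketch: an active object with its own thread and inbox.
            import queue
            import threading

            class ActiveObject:
                def __init__(self, name):
                    self.name = name
                    self._inbox = queue.Queue()
                    self._worker = threading.Thread(target=self._run, daemon=True)
                    self._worker.start()

                def send(self, message):
                    # Other components post requests; the object decides when to act.
                    self._inbox.put(message)

                def stop(self):
                    self._inbox.put(None)      # shutdown sentinel
                    self._worker.join()

                def _run(self):
                    while True:
                        message = self._inbox.get()
                        if message is None:
                            break
                        self.handle(message)

                def handle(self, message):
                    print(f"{self.name} handling {message!r}")

            # A passive object, by contrast, only exposes methods that callers invoke directly.
            inspector = ActiveObject("vision-inspector")
            inspector.send({"task": "inspect", "frame_id": 42})
            inspector.stop()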

      • KCI-indexed

        PRECISE AND RELIABLE POSITIONING BASED ON THE INTEGRATION OF NAVIGATION SATELLITE SYSTEM AND VISION SYSTEM

        C.-H. PARK,N.-H. KIM 한국자동차공학회 2014 International journal of automotive technology Vol.15 No.1

        In this paper, we propose a precise and reliable positioning method for solving common problems such as occlusion of a navigation satellite's signal in an urban canyon and the positioning error caused by a limited number of visible navigation satellites. It is an integrated system combining a navigation satellite system and a vision system. In general, satellite positioning has a fatal weakness in that it cannot calculate a position coordinate when its signal is occluded by an obstacle, so satellite positioning alone cannot serve a variety of applications. We therefore propose a method that integrates the navigation satellite system and the vision system. Target objects with accurately known position coordinates, for example in an outdoor shaded area such as an urban canyon, are registered in the vision system. When the vision system recognizes a target object, it loads that object's accurate coordinates and measures the distance to it using the disparity from the camera sensor. These distance and object coordinate data are then combined with the navigation satellite data for positioning, so the integrated system can provide a positioning solution even when the user is in unfavorable conditions. The paper presents the algorithm of the integrated system and the numerical tests performed; the results indicate that reliable and stable positioning can be obtained by introducing the vision system into the satellite navigation system.
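
        As a purely illustrative sketch of the measurement step described in this abstract (not the papers' algorithm), the range to a target object with known coordinates can be obtained from stereo disparity and then used to adjust a noisy GNSS fix; the camera parameters, coordinates, and the simple one-landmark correction rule below are assumptions for illustration only.

            # Illustrative sketch: range from stereo disparity, then nudge the GNSS fix
            # onto the circle of that range around a landmark with known coordinates.
            import numpy as np

            def range_from_disparity(focal_px, baseline_m, disparity_px):
                # Standard stereo relation: depth Z = f * B / d.
                return focal_px * baseline_m / disparity_px

            def correct_fix(gnss_xy, landmark_xy, measured_range):
                gnss_xy = np.asarray(gnss_xy, dtype=float)
                landmark_xy = np.asarray(landmark_xy, dtype=float)
                direction = gnss_xy - landmark_xy
                direction /= np.linalg.norm(direction)
                # Keep the GNSS bearing but enforce the vision-measured distance.
                return landmark_xy + measured_range * direction

            rng = range_from_disparity(focal_px=700.0, baseline_m=0.3, disparity_px=14.0)  # 15 m
            print(correct_fix(gnss_xy=(13.0, 9.0), landmark_xy=(0.0, 0.0), measured_range=rng))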

      • KCI-indexed (candidate)

        Design and Implementation of an Education Assistive Technology System for Improving the Special Education Environment of Students with Low Vision

        김태완(Kim, Tai Wan),조진수(Cho, Jin Soo) 한국디지털디자인협의회 2013 디지털디자인학연구 Vol.13 No.4

        This paper proposes an education assistive technology system that can effectively deliver various kinds of educational information to students with low vision in a special education environment. The proposed system consists of lecturing software for instructors and learning software for people with low vision. With the lecturing software, instructors can author various lecture materials on the spot and distribute them in real time to many copies of the learning software; with the learning software, the distributed lecture materials are converted to suit each individual's perceptual characteristics before being presented. This is a distinguishing capability that existing education assistive devices and programs could not provide. In an experimental evaluation of the system, the teaching staff rated both the functionality and the usability of the lecturing software at 4.6 out of 5, and students with low vision rated the recognition rate and the convenience of the learning software at 4.4 and 4.6 out of 5, respectively. These results confirm the strengths of the proposed system and indicate its practicality in actual special education settings. The system is expected to be readily usable in information delivery settings such as classrooms and seminars for people with low vision.

      • Design of Vision/INS Integrated Navigation System in Poor Vision Navigation Environments

        Youngsun Kim,Dong-Hwan Hwang 제어로봇시스템학회 2013 제어로봇시스템학회 국제학술대회 논문집 Vol.2013 No.10

        This paper proposes a design method for an inertial and landmark-based vision integrated navigation system in poor vision navigation environments, that is, environments with few visible landmarks in an area. An indirect Kalman filter with a feedback structure is used in the presented navigation system. The system is designed to use focal-plane measurements of landmarks instead of a vision navigation solution so that measurements are used effectively in poor vision environments. The proposed design is verified through computer simulations, and its performance is evaluated by comparison with an integrated system that uses a vision navigation solution.
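
        The abstract names an indirect (error-state) Kalman filter with feedback but gives no equations; the following is a minimal, generic sketch of such an update step under assumed linear models with placeholder matrices, not the authors' implementation.

            # Illustrative sketch: one indirect Kalman filter step with feedback.
            # The filter estimates the INS error from a vision-derived residual, the
            # estimate is fed back to correct the INS state, and the error state resets.
            import numpy as np

            def error_state_update(x_ins, P, F, Q, H, R, z_vision):
                # Covariance time update (the error state itself is zero after feedback).
                P = F @ P @ F.T + Q
                # Residual between the vision-derived measurement and the INS prediction.
                residual = z_vision - H @ x_ins
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                delta_x = K @ residual                      # estimated INS error
                P = (np.eye(len(x_ins)) - K @ H) @ P
                x_ins = x_ins + delta_x                     # feedback correction
                return x_ins, P                             # error state implicitly reset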

      • SCOPUS, KCI-indexed

        Vision Sensor Technology Trends for Industrial Inspection System

        김기수(Kisoo Kim),박준(June Park) Korean Society for Precision Engineering 2021 한국정밀공학회지 Vol.38 No.12

        The fourth industrial revolution is rapidly emerging as a new innovation trend for industrial automation. Accordingly, the demand for inspection equipment is increasing sharply and vision sensor technologies are continuously evolving. Machine vision algorithms based on deep learning are also being developed rapidly to maximize the performance of inspection equipment. In this review, we highlight the recent progress of vision sensor technology for industrial inspection systems. In particular, the inspection principles and industrial applications of vision sensors are classified according to the vision scanning methods. We also discuss machine vision-based inspection techniques involving rule-based and deep learning-based image processing algorithms. We believe that this review provides novel approaches for inspection in the agriculture, medicine, and manufacturing industries.

      • Vision Tracking System For Mobile Robots Using Two Kalman Filters and a Slip Detector

        Wonsang Hwang,Jaehong Park,Hyun-il Kwon,Muhammad Latif Anjum,Jong-hyeon Kim,Changhun Lee,Kwang-soo Kim,Dong-il “Dan” Cho 제어로봇시스템학회 2010 제어로봇시스템학회 국제학술대회 논문집 Vol.2010 No.10

        The vision tracking system in this paper estimates the robot's position relative to a target and rotates the camera towards the target. To estimate the position of the mobile robot, the system combines information from an accelerometer, a gyroscope, two encoders, and a vision sensor. The encoders provide fairly accurate position information, but encoder data are unreliable when the robot's wheels slip. Accelerometer data can provide position information even when the wheels are slipping, but long-term position estimation is difficult because bias and noise errors accumulate through integration. To overcome the drawbacks of each method, the proposed system fuses the data with two Kalman filters and a slip detector. One Kalman filter is for the slip case and the other for the no-slip case, and each uses a different sensor combination for estimating the robot motion. The slip detector compares the data from the accelerometer with the data from the encoders and decides whether a slip has occurred. Based on this decision, the system chooses one of the outputs of the two Kalman filters, which is then used for calculating the camera angle of the vision tracking system. The vision tracking system is implemented on a two-wheeled robot, and its tracking and recognition performance is evaluated in experiments for various robot motion scenarios in various environments.
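
        As a purely illustrative sketch of the switching logic described above (not the authors' implementation), a slip detector can compare the encoder-derived speed with an accelerometer-derived speed and select which Kalman filter output to trust; the 0.15 m/s threshold and the string placeholders are assumptions.

            # Illustrative sketch: slip detection and Kalman filter output selection.
            def detect_slip(encoder_speed, accel_speed, threshold=0.15):
                # Flag slip when the two speed estimates disagree by more than threshold (m/s).
                return abs(encoder_speed - accel_speed) > threshold

            def select_estimate(no_slip_kf_state, slip_kf_state, encoder_speed, accel_speed):
                # Encoders dominate the no-slip filter; inertial sensors dominate the slip filter.
                if detect_slip(encoder_speed, accel_speed):
                    return slip_kf_state        # wheels slipping: encoder data unreliable
                return no_slip_kf_state

            # Example: 0.9 m/s from encoders vs 0.5 m/s from the accelerometer -> slip detected.
            print(select_estimate("no-slip KF pose", "slip KF pose", 0.9, 0.5))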
