Kang, Jiheon; Park, Youn-Jong; Lee, Jaeho; Wang, Soo-Hyun; Eom, Doo-Seop. Institute of Electrical and Electronics Engineers, 2018. IEEE Transactions on Industrial Electronics Vol.65 No.5
In many water distribution systems (WDSs), a significant amount of water is lost through leakage during transit from the water treatment plant to consumers. As a result, water leakage detection and localization have been a consistent focus of research. Typically, diagnosis or detection systems based on sensor signals incur significant computational and time costs, while system performance depends on the features selected as input to the classifier. In this paper, to solve this problem, we propose a novel, fast, and accurate water leakage detection system with an adaptive design that fuses a one-dimensional convolutional neural network and a support vector machine. We also propose a graph-based localization algorithm to determine the leakage location. An actual water pipeline network is represented by a graph, and leakage events are assumed to occur at virtual points on that graph. The leakage location at which the cost is minimized is estimated by comparing the actual measured signals with the virtually generated signals. The performance was validated on a wireless-sensor-network-based test bed deployed on an actual WDS. Our proposed methods achieved 99.3% leakage detection accuracy and a localization error of less than 3 m.
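The graph-based localization step above — comparing measured signals against virtually generated ones and picking the minimum-cost virtual leak point — can be sketched as follows. The 1-D pipe layout, attenuation model, and sensor positions here are illustrative assumptions, not the paper's actual hydraulic simulation:

```python
import numpy as np

def localize_leak(candidates, simulate, measured):
    """Return the virtual leak point whose simulated sensor
    signature has the lowest L2 cost against the measured one."""
    costs = {p: np.linalg.norm(simulate(p) - measured) for p in candidates}
    return min(costs, key=costs.get)

# Toy model: three sensors on a 100 m pipe; a leak at position x
# attenuates with distance as 1 / (1 + |sensor - x|) at each sensor.
sensors = np.array([0.0, 50.0, 100.0])
simulate = lambda x: 1.0 / (1.0 + np.abs(sensors - x))

measured = simulate(62.0)                 # pretend we measured a leak at 62 m
candidates = np.arange(0.0, 101.0, 1.0)   # virtual leak points every 1 m
print(localize_leak(candidates, simulate, measured))  # → 62.0
```

In the paper, the candidate set comes from virtual points placed on the graph of the real pipeline network, and the cost compares real sensor measurements with signals generated for each virtual leak.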
Precise Position Estimation of an Excavator Bucket in Three-Dimensional Space Using an Accelerometer and High-Precision GPS
Jiheon Kang; Pyung-Ho Choi; Doo-Seop Eom. Institute of Control, Robotics and Systems, 2018. Journal of Institute of Control, Robotics and Systems Vol.24 No.10
This paper describes a method for estimating the angle and position of each component of an excavator. To implement a guidance system on the excavator, two real-time kinematic (RTK) global positioning system (GPS) receivers, as well as pose and tilt sensors, were used for precise pin-point estimation of the bucket position. All devices except the GPS receivers were developed and calibrated by us. The sensor calibration procedures are presented in two parts: before and after attachment to the excavator. We describe the three-dimensional global coordinate computation using the lengths and rotation angles of the body, arm, and bucket of the excavator. In addition, we present a method for computing the rotation induced by the GPS receiver's attachment error and for estimating the bucket angle through the sensor attached to the guide link. The experiments were performed by a certified institution, and our proposed system achieved a state-of-the-art positioning error of less than 1 cm.
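The coordinate computation from link lengths and rotation angles amounts to forward kinematics along the boom–arm–bucket chain. A minimal planar sketch (the `bucket_tip` helper, angle convention, and numeric link values are illustrative assumptions, not the paper's calibration chain):

```python
import math

def bucket_tip(base, lengths, angles):
    """Planar forward kinematics: accumulate each link vector from the
    machine base to the bucket tip. Angles are absolute, measured from
    horizontal in radians (a toy convention, not the paper's)."""
    x, y = base
    for length, angle in zip(lengths, angles):
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

# Example: boom 5.7 m at +45 deg, arm 2.9 m at -30 deg, bucket 1.5 m at
# -80 deg, with the boom pivot 2.0 m above ground (numbers illustrative).
tip_x, tip_y = bucket_tip((0.0, 2.0), [5.7, 2.9, 1.5],
                          [math.radians(45), math.radians(-30),
                           math.radians(-80)])
```

In the paper's setting, these body-frame coordinates would additionally be rotated and translated into the global frame fixed by the two RTK GPS receivers, with corrections for sensor attachment errors.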
Distance-Based Anchor Box Selection for Object Detection in Drone Image Analysis and Object Localization via Magnetic Declination Correction
Jiheon Kang. Institute of Control, Robotics and Systems, 2021. Journal of Institute of Control, Robotics and Systems Vol.27 No.10
We propose a method for selecting anchor boxes that considers the distance and size of a real object, and for localizing the detected object through magnetic declination correction. Detecting small objects in high-resolution images, as well as objects not included in the training dataset, is an interesting topic in deep-learning-based image analysis. Thus, defining the size, aspect ratio, and number of anchor boxes is important. This study describes an adaptive anchor box selection technique based on the pixel size of a ground object projected into the image through the intrinsic and extrinsic camera parameters. We applied the pre-trained Darknet53 YOLOv4 model as the backbone network for object detection and fine-tuned the classifier to recognize both human and vehicle classes using a custom dataset. In addition, we propose a method to minimize the localization error by correcting the magnetic declination when converting the pixel coordinates of a detected object into global coordinates using the image-to-ground projection technique. The performance of the proposed design was validated using photos and videos recorded with a DJI Mavic 2 Pro. Our proposed method achieved an enhancement of 13%-43% in detecting small objects that were not included in the training dataset. A localization error of less than 9% was obtained at up to 175 m, the distance between the drone and the object.
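The two geometric ideas in this abstract — sizing an anchor from an object's physical extent via projection, and rotating a magnetic-frame offset into the true-north frame — can be sketched roughly as follows. The pinhole model, function names, and sign convention are assumptions for illustration, not the paper's exact camera-parameter pipeline:

```python
import math

def anchor_pixels(object_m, distance_m, focal_px):
    """Pinhole approximation: an object of physical size `object_m` at
    `distance_m` spans roughly focal_px * object_m / distance_m pixels,
    suggesting a suitable anchor box scale at that distance."""
    return focal_px * object_m / distance_m

def correct_declination(east_m, north_m, declination_deg):
    """Rotate an offset measured relative to magnetic north into the
    true-north frame (declination positive toward the east)."""
    d = math.radians(declination_deg)
    true_east = east_m * math.cos(d) + north_m * math.sin(d)
    true_north = north_m * math.cos(d) - east_m * math.sin(d)
    return true_east, true_north

# A 1.7 m person at 175 m with a 2800 px focal length -> anchor height
h = anchor_pixels(1.7, 175.0, 2800.0)     # ~27 px

# A 100 m offset due magnetic north, with 8 deg east declination
e, n = correct_declination(0.0, 100.0, 8.0)
```

Without the declination correction, an offset projected from image to ground would be rotated by the declination angle, and the position error grows linearly with the drone-to-object distance, which is why the correction matters at ranges up to 175 m.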