Optimization-based humanoid robot navigation using monocular camera within indoor environment
한영중, 김인석, 홍영대, Electronics and Telecommunications Research Institute (ETRI), 2018, ETRI Journal Vol.40 No.4
Robot navigation is what gives a robot its mobility, and it has therefore been an active area of robotics research since robots were first developed. In recent years, interest in personal service robots for homes and public facilities has increased, and as a result, robot navigation within the home, an indoor environment, is being actively investigated. However, conventional navigation algorithms require a large amount of computation time for their map-building and path-planning processes, which makes it difficult to cope with an environment that changes in real time. We therefore propose a humanoid robot navigation algorithm consisting of an image-processing step and an optimization algorithm. The proposed algorithm realizes navigation with less computation time than conventional navigation algorithms based on map building and path planning, and it can cope with an environment that changes in real time.
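The abstract gives no implementation details of the optimization step. Purely as an illustrative sketch (the candidate-heading sampling, cost weights, and 2D point-obstacle model are assumptions of this sketch, not the paper's method), a map-free local planner that is re-solved every frame, so it adapts to a changing environment, might look like:

```python
import math

def choose_heading(robot, goal, obstacles, step=0.3, n_candidates=36):
    """Pick the heading that minimizes a weighted cost of goal distance and
    obstacle proximity. Re-solving this every frame lets the plan adapt to a
    changing environment without building a global map."""
    best_heading, best_cost = 0.0, float("inf")
    for i in range(n_candidates):
        theta = 2.0 * math.pi * i / n_candidates
        # Position the robot would reach after one step along this heading.
        nx = robot[0] + step * math.cos(theta)
        ny = robot[1] + step * math.sin(theta)
        goal_cost = math.hypot(goal[0] - nx, goal[1] - ny)
        # Soft penalty that grows sharply near obstacles detected in the image.
        obstacle_cost = sum(1.0 / max(math.hypot(ox - nx, oy - ny), 1e-3)
                            for ox, oy in obstacles)
        cost = goal_cost + 0.2 * obstacle_cost
        if cost < best_cost:
            best_cost, best_heading = cost, theta
    return best_heading
```

With no obstacles the planner heads straight for the goal; an obstacle on that line makes it swerve around it, which is the behavior a per-frame optimization buys over a precomputed path.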
한영중(Young-Joong Han), 좌동경(Dongkyoung Chwa), 홍영대(Young-Dae Hong), Institute of Control, Robotics and Systems (ICROS), 2017, Journal of Institute of Control, Robotics and Systems Vol.23 No.5
Studies on badminton robots have so far focused on new control techniques for moving the robot around the court; the movement of the shuttlecock after the robot hits it has not been studied or analyzed. In this paper, we propose a badminton robot model and a badminton robot optimization that consider the movements of both the robot and the shuttlecock for human training. The motion generation of the badminton robot, which sends the shuttlecock to a target point while minimizing the robot's power consumption, is formulated as an optimization problem, and a genetic algorithm is used to solve it. In addition, by making it possible to send the shuttlecock to any point in the court with minimum power consumption, the method improves the robot's utility as a training robot. We construct the badminton robot model and carry out the evolutionary optimization using the 3D dynamic simulator Webots to verify the validity and performance of the proposed method.
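The genetic-algorithm formulation can be illustrated with a deliberately simplified stand-in: an ideal drag-free projectile replaces the Webots shuttlecock dynamics, the decision variables are just launch speed and angle, and a quadratic term stands in for power consumption (all of these are assumptions for illustration, not the paper's model):

```python
import math
import random

G = 9.81  # gravitational acceleration (m/s^2)

def landing_distance(v, theta):
    """Ideal drag-free projectile range; a stand-in for the shuttlecock."""
    return v * v * math.sin(2.0 * theta) / G

def fitness(ind, target):
    """Landing-point error plus a crude power-consumption proxy."""
    v, theta = ind
    landing_error = abs(landing_distance(v, theta) - target)
    power_proxy = 0.01 * v * v
    return landing_error + power_proxy

def genetic_optimize(target, pop_size=60, generations=120, seed=0):
    """Minimize fitness over (launch speed, launch angle) with a simple
    elitist GA: keep the best half, create children by averaging two parents,
    and perturb some children with Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(1.0, 30.0), rng.uniform(0.1, 1.4))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, target))
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)  # crossover
            if rng.random() < 0.3:                               # mutation
                child = (max(child[0] + rng.gauss(0.0, 1.0), 0.5),
                         min(max(child[1] + rng.gauss(0.0, 0.1), 0.1), 1.4))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: fitness(ind, target))
```

The power term pushes the GA toward the slowest swing that still lands the shuttlecock on target, which is the same trade-off the paper optimizes with the full robot and shuttlecock dynamics.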
천원중, 이진엽, 민병준, 한영이, The Korean Physical Society, 2019, Journal of the Korean Physical Society Vol.75 No.8
The feasibility of a deep-learning-based super-resolution (SR) model to improve the fiducial marker-tracking accuracy of a stereo portable gamma camera (SPGC) system over the range of an in vivo proton beam was verified in a Monte-Carlo (MC) simulation using the Geometry and Tracking 4 (Geant4) package. The SPGC system is capable of measuring the three-dimensional (3D) position of excited gold markers by detecting proton-induced X-ray emissions (PIXEs) generated by the interactions between the gold marker and a proton beam. The SPGC system was modeled using Geant4 according to the manufacturer's specifications. The original image (Io) acquired by using the SPGC system, which comprised a 32 × 32 array over an area of 104 × 104 mm2, was subjected to resolution enhancement to produce an SR-enhanced image (ISR) (a 128 × 128 array) through a fully trained SR model based on a convolutional neural network (CNN). In virtual experiments, two portable gamma cameras were positioned perpendicular to each other. Next, a pair of Io's were acquired by detecting the radiation from the excited gold marker positioned in a water phantom. Then, the fully trained SR model improved the quality of the Io's by converting them to ISR's. The 3D position of the radiation source was calculated by using Anger logic and 3D vector calculations. Virtual experiments for in vivo proton range verification using the SPGC system were performed by irradiating a gold marker in a water phantom with a proton beam. A gold marker was placed at five different positions along the Bragg curve of a 100.0-MeV proton beam, which had a range of 74.5 mm in water. The proton beam was delivered so as to deposit 20.0 Gy in the gold marker when it was positioned at the center of the Bragg peak; then, the PIXEs were measured by using the SPGC system. When the gold marker was at a different position, it was irradiated with the same dose for a quantitative comparison.
Then, the 3D position of the gold marker was calculated for the original image (Io) and for the high-resolution image (ISR) to compare the detection accuracy. The averaged root-mean-square errors of the five positions between the reference and the calculation for Io and ISR were 9.127 mm and 3.991 mm, respectively. In conclusion, the feasibility of using a deep-learning SR model for improving the image resolution of Io, and therefore the tracking accuracy of the SPGC system, was validated in MC simulations. The SR model can be applied to diverse areas of research that use gamma cameras.
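The reported accuracy metric can be illustrated in a few lines. Under one plausible reading of "averaged root-mean-square error" (the Euclidean position error per marker position, averaged over the five measurement points; the paper may define it slightly differently):

```python
import math

def averaged_rmse(reference_positions, calculated_positions):
    """Average, over all measurement points, of the Euclidean error between
    the reference and the calculated 3D marker positions (same units as
    the inputs, e.g. mm)."""
    errors = [math.sqrt(sum((r - c) ** 2 for r, c in zip(ref, calc)))
              for ref, calc in zip(reference_positions, calculated_positions)]
    return sum(errors) / len(errors)
```

Feeding the Io-derived and ISR-derived position estimates through this metric is how the 9.127 mm versus 3.991 mm comparison quantifies the benefit of the SR model.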
Feasibility Study of the Fluence-to-Dose Network (FDNet) for Patient-Specific IMRT Quality Assurance
천원중, 김성진, 황의중, 민병준, 한영이, The Korean Physical Society, 2019, Journal of the Korean Physical Society Vol.75 No.9
The aim of this study is to predict the delivered dose distribution [Ddelivered(x, y)] with the use of a fluence-to-dose network (FDNet) to conduct patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (pQA). The architecture of the FDNet was based on a convolutional neural network. Forty-four IMRT clinical cases of planned dose distributions for pQA [Dplanned(x, y)] and dynamic multileaf collimator (MLC) log files (Dynalog files) were collected. Using the Dynalog files, the expected fluence stack [Fexpected(x, y, t)] and the actual fluence stack [Factual(x, y, t)] were created from the expected and the actual machine parameters, respectively. The actual fluence stack, which was reconstructed from the partial information of the Dynalog file, corresponded to the control points of the Digital Imaging and Communications in Medicine radiation treatment plan and was denoted as [Factual(x, y, tpartial)]. The entire dataset was split into 11 subsets for the k-fold averaging cross-validation (k = 11). Ten (out of the 11) folds were used to train 10 candidate optimal FDNet models, and an ultimate FDNet was determined by averaging the parameters of the optimal models. The pQA was performed using the test data of the remaining fold with the ultimate FDNet. The dose distributions predicted using Factual(x, y, t) [Dpredicted(Factual(x, y, t))] and Factual(x, y, tpartial) [Dpredicted(Factual(x, y, tpartial))] were acquired. To evaluate the predicted pQA results, we conducted dosimetry using EBT3 films and an ion-chamber array detector (MatriXX). These dose distributions were compared with the Dplanned(x, y) by using a gamma analysis. The average gamma passing rates were determined based on the 3%/3 mm gamma criterion and were, respectively, equal to 98.49%, 97.21%, 97.23%, and 98.03% for the Dpredicted(Factual(x, y, t)), Dpredicted(Factual(x, y, tpartial)), EBT3 film, and MatriXX.
According to this study, the feasibility of the dose prediction method using the FDNet with complete Dynalog information was verified for the pQA. The differences of the average gamma passing rates between Dpredicted(Factual(x, y, t)) and Dpredicted(Factual(x, y, tpartial)) were 1.28% and 2.88% according to the 3%/3 mm and the 2%/2 mm gamma criteria, respectively.
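For readers unfamiliar with the gamma analysis used throughout this abstract, a minimal brute-force global gamma evaluation on 2D dose grids might look as follows (the grid spacing, search window, and normalization to the reference maximum are assumptions of this sketch; clinical tools add interpolation and dose thresholds):

```python
import math

def gamma_passing_rate(reference, evaluated, spacing_mm,
                       dose_tol=0.03, dta_mm=3.0, search=3):
    """Brute-force global gamma analysis on 2D dose grids. For each reference
    point, gamma is the minimum, over nearby evaluated points, of
    sqrt((dose difference / dose tolerance)^2 + (distance / DTA)^2);
    a point passes when gamma <= 1. Returns the passing rate in percent."""
    rows, cols = len(reference), len(reference[0])
    d_max = max(max(row) for row in reference)  # global normalization
    passed = total = 0
    for i in range(rows):
        for j in range(cols):
            gmin = float("inf")
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        dd = ((evaluated[ii][jj] - reference[i][j])
                              / (dose_tol * d_max))
                        dta = spacing_mm * math.hypot(di, dj) / dta_mm
                        gmin = min(gmin, math.hypot(dd, dta))
            passed += gmin <= 1.0
            total += 1
    return 100.0 * passed / total
```

The 3%/3 mm criterion in the abstract corresponds to `dose_tol=0.03, dta_mm=3.0`; tightening to 2%/2 mm makes the test stricter, which is why the passing-rate differences grow under that criterion.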
이은석, 한영중, 신병석, 한국차세대컴퓨팅학회 (Korea Society of Next Generation Computing), 2021, 한국차세대컴퓨팅학회 논문지 Vol.17 No.1
With the advancement of virtual reality devices and graphics processors (GPUs), it has become possible to show realistic 3D scenes in virtual reality content. In terrain visualization, the irregular surface is reconstructed as a 3D mesh and then rendered. In recent virtual reality content, a displacement-mapping technique based on ray casting is mainly used to render high-quality terrain scenes. Virtual reality content must support stereoscopic viewing for immersion, so it requires two high-resolution images covering the wide field of view of both eyes of an HMD (head-mounted display). If these images are distorted by computation errors, or if rapid viewpoint movement causes latency, the user perceives the images as unnatural, and in severe cases this can induce strong motion sickness. Improving rendering performance therefore removes the latency caused by low rendering speed and, furthermore, allows terrain sampled at a higher resolution to be visualized at the same speed, which addresses the distortion problem as well. In this paper, we propose an acceleration method that removes redundant operations so that rendering with a large heightfield can be performed faster and more realistically in an existing virtual-reality rendering system.
With this method, the same data can be rendered 48% to 91% faster than with the conventional method, so more realistic images can be rendered on the same equipment.
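The paper's acceleration method itself is not reproduced here, but the baseline it speeds up, ray-casting-based displacement mapping over a heightfield, can be sketched as a fixed-step ray march (the step size and the callable heightfield are illustrative choices; per-pixel GPU details are omitted):

```python
def raymarch_heightfield(heightfield, origin, direction,
                         max_steps=256, step=0.05):
    """Fixed-step ray march against a heightfield: advance the ray until it
    dips below the terrain surface, then report the hit point. Real renderers
    run this loop per pixel in a fragment shader, once for each eye's image."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        if z <= heightfield(x, y):
            return (x, y, z)  # the ray entered the terrain
        x, y, z = x + step * dx, y + step * dy, z + step * dz
    return None  # the ray left the volume without hitting the terrain
```

A production implementation would also refine the hit point (for example, by a binary search between the last two samples). Because this loop runs for every pixel of two high-resolution eye images, removing redundant work from it, as the paper proposes, translates directly into the reported 48% to 91% speedup.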