Residual Error Acquisition for the Precise Geometric Correction of GMS Images
Hirano, Kenichi; Kaibai, Yasushi; Takagi, Mikio — Korean Society of Remote Sensing, 2000 International Symposium on Remote Sensing, Vol. 16, No. 1
To use GMS images effectively, a geometric correction that transforms them from the image coordinate system into the map coordinate system is necessary. In this paper, to improve the accuracy of the geometric correction, a precise correction in the image coordinate system is proposed. In addition, for high-altitude areas where the systematic geometric correction produces large location discrepancies, a systematic geometric correction that incorporates altitude data into its reverse transformation is proposed. A residual error acquisition method for the precise geometric correction is then investigated. Finally, high-precision correction is performed by an affine transformation whose coefficients are calculated from the measured residual errors.
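The final step above, fitting an affine transformation to measured residual errors, can be sketched with an ordinary least-squares fit. This is a minimal illustration, not the paper's implementation; the control points and the uniform shift are made-up test data.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) points to dst (N,2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])           # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3,2) affine coefficients
    return coef

def apply_affine(coef, pts):
    n = pts.shape[0]
    return np.hstack([pts, np.ones((n, 1))]) @ coef

# Toy control points where the systematic correction left a residual offset.
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([1.5, -0.5])      # measured residual: a uniform shift
coef = fit_affine(src, dst)
corrected = apply_affine(coef, src)
print(np.allclose(corrected, dst))     # True: residuals removed exactly
```

A pure translation is recovered exactly; with real, spatially varying residuals the fit minimizes the remaining error in the least-squares sense.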
A Detection Method for Liver Cancer Region Based on Faster R-CNN
Muki FURUZUKI; Huimin LU; Hyoungseop KIM; Yasushi HIRANO; Shingo MABU; Masahiro TANABE; Shoji KIDO — Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2019 ICROS International Conference, Vol. 2019, No. 10
In recent years, liver cancer has become the fourth-leading cause of cancer deaths in the world. Surgery is a typical treatment for liver cancer, so advance information about the number and size of tumors is important. Multi-phase CT imaging is a well-known diagnostic method: by extracting the liver region and the cancer region from the obtained CT images, the shape can finally be reconstructed in 3D. In this paper, as a preliminary step toward an image analysis method that efficiently extracts cancerous regions in multi-phase CT, we propose a method for obtaining a rectangular region as a rough cancerous region of interest. After preprocessing the input image, a Faster R-CNN extracts the region of interest containing the cancer as a rectangle. As a result of applying this method to 11 arterial-phase cases of multi-phase CT, the detection performance differed depending on the network model adopted for the backbone.
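The preprocessing step before the detector can be illustrated with standard CT windowing, which maps Hounsfield units to an 8-bit image. This is a hedged sketch: the window center and width below are illustrative values, not parameters reported in the paper.

```python
import numpy as np

def ct_window(hu, center=60.0, width=400.0):
    """Clip Hounsfield-unit values to a window and rescale to [0, 255]."""
    lo, hi = center - width / 2, center + width / 2
    img = np.clip(hu, lo, hi)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# Toy slice: air, soft tissue at the window center, and a bone-like value.
slice_hu = np.array([[-1000.0, 60.0, 500.0]])
print(ct_window(slice_hu))   # air -> 0, window center -> mid-gray, bone -> 255
```

Windowing concentrates the 8-bit dynamic range on soft tissue, which is where liver lesions have to be distinguished.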
Koki Minami; Huimin Lu; Hyoungseop Kim; Shingo Mabu; Yasushi Hirano; Shoji Kido — Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2019 ICROS International Conference, Vol. 2019, No. 10
Auscultation of respiratory sounds is very important for discovering respiratory disease. However, no quantitative evaluation method for the diagnosis of respiratory sounds has existed until now, so a system to support such diagnosis needs to be developed. In addition, few studies on the automatic analysis of respiratory sounds use datasets suitable for generating realistic classification models that can be used in clinical settings. We describe the development of an algorithm that automatically classifies the large-scale respiratory sound dataset used in the ICBHI 2017 Challenge into four classes: containing crackles, containing wheezes, containing both, and normal. Our approach consists of two major components: first, transformation of one-dimensional signals into two-dimensional time-frequency images using the short-time Fourier transform and the continuous wavelet transform; second, classification of the transformed images using convolutional neural networks. Applying the proposed method to 920 respiratory sound recordings, we achieved a score of 28 [%], a harmonic score of 81 [%], a sensitivity of 54 [%], and a specificity of 42 [%].
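The first component, turning a 1-D sound signal into a 2-D time-frequency image, can be sketched with a hand-rolled short-time Fourier transform. The frame length, hop size, and test tone below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: rows = frequency bins, columns = time frames."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 4000                                  # sampling rate [Hz]
t = np.arange(fs) / fs                     # one second of samples
x = np.sin(2 * np.pi * 400 * t)            # 400 Hz test tone
spec = stft_magnitude(x)
peak_bin = spec.mean(axis=1).argmax()
print(peak_bin * fs / 256)                 # peak lands near 400 Hz
```

The resulting 2-D magnitude array is what a convolutional network would consume as an image; the continuous wavelet transform plays the same role with a different time-frequency trade-off.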
Detection of Abnormal Candidate Regions on Temporal Subtraction Images Based on DCNN
Mitsuaki NAGAO; Noriaki MIYAKE; Yuriko YOSHINO; Huimin LU; Joo Kooi TAN; Hyoungseop KIM; Seiichi MURAKAMI; Takatoshi AOKI; Yasushi HIRANO; Shoji KIDO — Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2017 ICROS International Conference, Vol. 2017, No. 10
Cancer is a leading cause of death both in Japan and worldwide, and detecting cancerous regions in CT images is the most important task for early detection. Recently, visual screening based on CT images has become a useful tool for cancer detection. However, due to the large number of images and the complexity of image processing algorithms, achieving a high screening quality remains difficult. To overcome this problem, several computer-aided diagnosis (CAD) algorithms have been proposed. In this paper, we design and develop a framework combining machine learning based on deep convolutional neural networks (DCNN) with a temporal subtraction technique based on a non-rigid image registration algorithm. Our classification method consists of three main steps: i) preprocessing for image segmentation, ii) image matching for registration, and iii) classification of abnormal regions based on machine learning algorithms. Applying the proposed technique to 25 thoracic MDCT sets, we obtained a true positive rate of 92.31 [%] and a false positive rate of 6.32 [/case].
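The temporal subtraction idea can be sketched in a few lines: once non-rigid registration has aligned the previous scan to the current one, a difference image highlights newly appearing bright regions as candidates. The toy arrays and threshold below are illustrative, not values from the paper.

```python
import numpy as np

def temporal_subtraction(current, previous_registered, threshold=50):
    """Difference image plus a mask of newly appearing bright regions."""
    diff = current.astype(np.int16) - previous_registered.astype(np.int16)
    candidates = diff > threshold   # new bright areas = candidate abnormalities
    return diff, candidates

prev = np.zeros((4, 4), dtype=np.uint8)   # stand-in for the registered prior scan
curr = prev.copy()
curr[1:3, 1:3] = 200                      # simulated newly appearing lesion
diff, mask = temporal_subtraction(curr, prev)
print(int(mask.sum()))                    # 4 candidate pixels
```

In the real pipeline the quality of the non-rigid registration is what keeps normal anatomy from surviving the subtraction; the DCNN then classifies the surviving candidates.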
Extraction of GGO Regions from Chest CT Images Using Deep Learning
Kazuki HIRAYAMA; Noriaki MIYAKE; Huimin LU; Joo Kooi TAN; Hyoungseop KIM; Rie TACHIBANA; Yasushi HIRANO; Shoji KIDO — Institute of Control, Robotics and Systems (ICROS), Proceedings of the 2017 ICROS International Conference, Vol. 2017, No. 10
Lung cancer is the leading cause of cancer death in the world, so early detection and early treatment are important. In particular, ground glass opacity (GGO) is a shadow regarded as a pre-cancerous lesion, but its haziness and complicated shape make it difficult for radiologists to detect. Therefore, in recent years, computer-aided diagnosis (CAD) systems have been developed to improve detection accuracy for early detection and to reduce the burden on radiologists. In this paper, we extract GGO regions using a deep convolutional neural network (DCNN) applied to emphasized images. Before detecting a GGO region, we preprocess the original images by resampling to isotropic voxels and extracting the lung area. Next, we remove the vessel and bronchial regions with a 3D line filter based on the Hessian matrix, and extract initial candidate regions using density gradient, volume, and sphericity. Subsequently, we segment the candidate regions, extract features, and reduce false positive shadows. Finally, we create emphasized images and classify them with the DCNN. Applying the proposed method to 31 cases from the Lung Image Database Consortium (LIDC), we obtained a true positive rate (TP) of 86.05 [%] and a false positive count (FP) of 4.81 [/case].
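One of the candidate-screening features named above, sphericity, can be sketched directly on a 3-D voxel mask. This is a rough illustration under assumptions: the face-counting surface estimate and the toy shapes are mine, not the paper's exact formulation.

```python
import numpy as np

def sphericity(mask):
    """Sphericity: surface of the equal-volume sphere over the region's surface.
    Surface area is crudely estimated by counting exposed voxel faces."""
    volume = mask.sum()
    padded = np.pad(mask, 1)                 # pad so boundary faces are counted
    faces = 0
    for axis in range(3):
        faces += np.abs(np.diff(padded.astype(np.int8), axis=axis)).sum()
    sphere_surface = np.pi ** (1 / 3) * (6 * volume) ** (2 / 3)
    return sphere_surface / faces

cube = np.ones((4, 4, 4), dtype=bool)    # compact blob: relatively spherical
line = np.ones((64, 1, 1), dtype=bool)   # vessel-like line: same volume, low sphericity
print(sphericity(cube) > sphericity(line))   # True
```

A compact nodule-like blob scores high while an elongated vessel-like region scores low, which is why the feature helps reject tubular false positives left over after the Hessian-based line filter.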