Automatic Building Extraction Using the SpaceNet Building Dataset and a Context-based ResU-Net
Suhong Yoo, Cheol Hwan Kim, Youngmok Kwon, Wonjun Choi, Hong-gyoo Sohn. Korean Society of Remote Sensing, 2022, Korean Journal of Remote Sensing, Vol. 38, No. 5
Building information is essential for various urban spatial analyses. For this reason, continuous building monitoring is required, but it involves many practical difficulties. To this end, research is being conducted to extract buildings from satellite images, which allow continuous observation over a wide area, and deep learning-based semantic segmentation techniques have recently been used for this purpose. In this study, part of the structure of the context-based ResU-Net was modified, and training was conducted to automatically extract buildings from 30 cm Worldview-3 RGB imagery using SpaceNet's free open building v2 data. In the classification accuracy evaluation, the f1-score was higher than that of the winning entries of the 2nd SpaceNet competition. Therefore, if Worldview-3 satellite imagery can be continuously acquired, the results of this study could be used to build a model for automatic building extraction worldwide.
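The abstract uses the f1-score as its classification metric. A minimal sketch of how a pixel-wise F1 computation for binary building masks typically works (the function name and toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def f1_score_binary(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-wise F1-score for binary building masks (1 = building)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)    # building pixels correctly predicted
    fp = np.sum(pred & ~gt)   # false alarms
    fn = np.sum(~pred & gt)   # missed building pixels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 2x2 masks: one true positive, one false positive, one false negative
pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [1, 0]])
print(f1_score_binary(pred, gt))  # 0.5 (precision = recall = 0.5)
```

In practice the same counts are usually accumulated over the whole test set before computing the score, rather than averaging per-image F1 values.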
A Study on Automatic Building Extraction via Deep Learning-Based Semantic Segmentation: Focusing on Accuracy Changes According to Model Weight and Transfer Learning
Yoo, Suhong; Sohn, Hong-Gyoo. Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 2023, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 41, No. 6
Building objects are an essential spatial information source that can be used in fields such as 3D modeling, urban expansion, and environmental analysis. They are one of the geographical features for which continuous information construction is essential but not easy to automate. As solutions to this problem, methods have been proposed to develop new heavy neural networks or to utilize transfer learning, but limitations remain. This study conducted an experiment to determine the model's classification performance according to its weight, and to assess the feasibility of transfer learning with ImageNet weights in remote sensing. For this purpose, AiHub's land cover map training dataset was used, along with U-Net and DeepLab V3+ classification models using MobileNet and ResNet as backbone neural networks. In the experiment, classification accuracy was highest when the MobileNet-based U-Net model was trained without transfer learning (f1-score: 0.8483). Additionally, visual inspection confirmed that the model trained from scratch, rather than via transfer learning, depicted buildings closer to the ground truth. This means that transfer learning can be applied with a variety of methods without restricting the choice of neural network, and it suggests that, given data at the scale provided by AiHub, a model with a certain level of classification accuracy can be created.
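The comparison this abstract describes comes down to how the network's weights are initialized before fine-tuning: copied from a pretrained model (transfer learning) or drawn at random (training from scratch). A toy single-layer sketch of that toggle (all names, data, and the "pretrained" vector are hypothetical, not from the paper):

```python
import numpy as np

def init_weights(shape, pretrained=None, seed=0):
    """Transfer learning starts from pretrained weights;
    training 'from scratch' starts from a random initialization."""
    if pretrained is not None:
        return pretrained.copy()            # transfer learning
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.1, shape)      # from scratch

def fine_tune(w, x, y, lr=0.1, steps=200):
    """Plain gradient descent on a linear model y ~ x @ w (MSE loss)."""
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(x)
        w = w - lr * grad
    return w

# Hypothetical task: recover w* = [1.0, -2.0] from noiseless data
rng = np.random.default_rng(1)
x = rng.normal(size=(64, 2))
y = x @ np.array([1.0, -2.0])

w_scratch = fine_tune(init_weights((2,)), x, y)
w_transfer = fine_tune(init_weights((2,), pretrained=np.array([0.9, -1.8])), x, y)
# With enough data both starting points converge to the same solution;
# the paper's finding is that with its dataset the scratch initialization
# actually ended up more accurate than the ImageNet-pretrained one.
```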
A Study on Improving the Resolution of Drone Images Using Deep Learning-Based Super-Resolution Models
Yoo, Suhong; Kim, Phillip; Youn, Junhee. Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 2023, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 41, No. 6
In order to go beyond simple geographic feature extraction and analyze socioeconomic functions between nature and people, it is essential to use high-resolution images. High-resolution satellite and aerial images can be used, but continuous image acquisition is problematic due to the economic burden. Therefore, a solution is proposed to improve the resolution of remote-sensing images that are relatively easy to acquire by applying deep learning-based super-resolution technology. In this study, to increase detection reliability even for small objects, super-resolution was applied to drone images, which offer both high resolution and economic efficiency. The study used the EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution) and SRGAN (Super Resolution GAN) models, which have shown good performance in various benchmarks, and the training dataset was produced in-house from imagery acquired with drones and camera equipment owned by KICT (Korea Institute of Civil Engineering and Building Technology). Quality of the final results was evaluated using PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). In the experiment, EDSR achieved a PSNR of 34.24 dB and an SSIM of 0.938, while SRGAN achieved 36.63 dB and 0.954, respectively, showing that SRGAN performed better. The values obtained are higher than those cited in the benchmarks, which is attributed to the super-resolution training data produced by KICT providing information well suited to the neural network models. The outcomes of this study are considered to be of sufficient quality for application in subsequent research aimed at enhancing detection and recognition rates.
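The PSNR and SSIM metrics reported above can be sketched in a few lines of numpy. Note that the SSIM here is a simplified single-window version computed over the whole image, whereas the standard metric averages over local (e.g. Gaussian-weighted 11x11) windows; the toy images are illustrative:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB (higher = closer to the reference)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified SSIM computed over a single global window."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stability constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy 8x8 images differing by a constant offset of 10 (so MSE = 100)
a = np.zeros((8, 8))
b = np.full((8, 8), 10.0)
print(round(psnr(a, b), 2))  # 28.13
```

Identical images give an SSIM of exactly 1.0, and PSNR diverges to infinity as the MSE goes to zero, which is why PSNR is only reported between distinct images.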