A Method for Determining Matching Parameters in Satellite Images Using the Standard Deviation of Brightness Values
Sooahm Rhee, Yajun Suai, Taejung Kim. Korea Spatial Information Society, 2009 Korean Society for GeoSpatial Information System Conference, Vol.2009 No.4
In area-based stereo matching, the size of the matching window and the correlation-coefficient threshold are key factors that determine matching accuracy. In this experiment, we attempted to separate regions within an image using the standard deviation of image brightness values, and confirmed that mountainous and urban areas can be distinguished with a standard-deviation threshold. We also attempted to improve matching accuracy by setting the matching window size adaptively according to the standard deviation: when the standard deviation within a window exceeded the threshold, the area was judged to be urban and matching was attempted using edges rather than brightness values. The experimental results confirmed that the proposed method can produce a more accurate DEM than a method using fixed parameters.
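The region classification and adaptive window selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold and window sizes are placeholder values, since the abstract does not report them.

```python
import numpy as np

def classify_window(patch, std_threshold=20.0):
    """Classify an image window as 'urban' or 'mountain' by brightness std dev.

    The threshold is illustrative; the paper does not publish one.
    """
    return "urban" if np.std(patch) > std_threshold else "mountain"

def adaptive_window_size(patch, base=21, minimum=7, std_threshold=20.0):
    """Shrink the matching window where local texture (std dev) is high."""
    if np.std(patch) > std_threshold:
        return minimum  # highly textured (urban): small window, edge-based matching
    return base         # smooth terrain: larger window, brightness-based matching
```

A flat patch is classified as mountainous and keeps the large window, while a high-variance patch is treated as urban and gets the small window.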
A Study on Post-Processing of Point Clouds Generated by Image Matching from Unmanned Aerial Vehicle Images
Sooahm Rhee, Han-gyeol Kim, Taejung Kim. Korean Society of Remote Sensing, 2022, Korean Journal of Remote Sensing Vol.38 No.6
In this paper, we propose a post-processing method that interpolates the hole regions that occur when extracting point clouds. When image matching is performed on stereo image data, holes occur due to occlusion and building façade areas. These areas can become an obstacle to creating further products based on the point cloud, so an effective processing technique is required. First, an initial point cloud is extracted based on the disparity map generated by stereo image matching, and the point cloud is transformed into a grid. Hole areas caused by occlusion and building façades are then extracted. By repeatedly creating Triangulated Irregular Network (TIN) triangles in each hole area and assigning the interior of each triangle the minimum height value of that area, interpolation can be performed without visible seams between a building and the surrounding ground surface. A new point cloud is created by adding, as points, the location information corresponding to the interpolated areas of the grid data. To minimize the addition of unnecessary points during interpolation, data interpolated outside the extent of the initial point cloud were not processed. The RGB brightness value of each interpolated point was taken from the image whose pixel distance to the shooting center was smallest among the stereo images used for matching. We confirmed that the occluded areas remaining after generating the point cloud of the target area were effectively processed by the proposed technique.
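The minimum-height fill on the gridded point cloud can be sketched as below. This is a simplified stand-in for the TIN step, under the assumption that holes are marked as NaN in a height grid: instead of triangulating, it propagates the minimum height of valid 8-neighbours into hole cells until every hole is filled.

```python
import numpy as np

def fill_holes_min(grid):
    """Fill NaN (hole) cells with the minimum height of valid 8-neighbours.

    Simplified stand-in for the paper's TIN-based step: the paper assigns each
    TIN triangle the minimum height of its area; here local minima are
    propagated iteratively until no hole cells remain.
    """
    filled = grid.copy()
    while np.isnan(filled).any():
        progress = False
        for r, c in np.argwhere(np.isnan(filled)):
            window = filled[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            valid = window[~np.isnan(window)]
            if valid.size:
                filled[r, c] = valid.min()  # take the lowest surrounding height
                progress = True
        if not progress:  # grid entirely NaN: nothing to propagate
            break
    return filled
```

Taking the minimum, rather than the mean, pulls interpolated façade pixels down to ground level, which is what avoids seams between buildings and the surrounding terrain.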
A Performance Comparison of Feature-Point-Based Matching Algorithms for Matching between Virtual Texture Images and Real Photographs
Yoo Jin Lee, Sooahm Rhee. Korean Society of Remote Sensing, 2022, Korean Journal of Remote Sensing Vol.38 No.6
This paper compares the performance of combinations of feature-point-based matching algorithms, as a study of the feasibility of matching images taken by a user against virtual texture images, with the goal of developing mobile-based real-time image positioning technology. A feature-based matching algorithm consists of extracting features, calculating descriptors, matching features between the two images, and finally eliminating mismatched features. For the algorithm combinations, the feature-extraction step and the descriptor-calculation step were taken from either the same or different matching algorithms. V-World 3D desktop was used for the virtual indoor texture images. V-World 3D desktop has recently been reinforced with details such as vertical and horizontal protrusions and dents, and some levels are textured with real images. Using this, we constructed a dataset with virtual indoor texture data as reference images and real photographs shot at the same locations as target images. After constructing the dataset, the matching success rate and matching processing time were measured, and based on these, an algorithm combination was determined for matching real images with virtual images. In this study, matching algorithms were combined based on the characteristics of each technique and applied to the constructed dataset to confirm their applicability, and performance was also compared when rotation was additionally considered. As a result, the combination of Scale Invariant Feature Transform (SIFT) feature detection and SIFT descriptors had the highest matching success rate, but also the longest processing time. The combination of the Features from Accelerated Segment Test (FAST) feature detector with Oriented FAST and Rotated BRIEF (ORB) descriptor calculation achieved a matching success rate similar to that of the SIFT-SIFT combination with a short processing time. Furthermore, FAST-ORB retained superior matching performance even when a 10° rotation was applied to the dataset. Therefore, we confirmed that the FAST-ORB combination can be suitable for matching between virtual texture images and real photographs.
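The matching and mismatch-elimination stages for binary descriptors such as ORB's can be sketched without any imaging library. This is an illustrative brute-force matcher with Lowe's ratio test, not the paper's pipeline; the descriptors are byte strings as an ORB-style descriptor would be, and the ratio value 0.8 is a common default, not one reported in the paper.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors (bytes)."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match_descriptors(desc_ref, desc_tgt, ratio=0.8):
    """Brute-force matching with a ratio test to drop ambiguous matches.

    Returns (ref_index, tgt_index) pairs: a match is kept only if its best
    distance is clearly smaller than the second-best one.
    """
    matches = []
    for i, d in enumerate(desc_ref):
        dists = sorted((hamming(d, t), j) for j, t in enumerate(desc_tgt))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The ratio test is the standard way to eliminate the mismatched features the abstract refers to: ambiguous descriptors with two nearly equally good candidates are discarded rather than matched.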
A Physical Offset Calibration Method for Unmanned Aerial Vehicles for Multi-Sensor Fusion
Cheolwook Kim, Pyeong-chae Lim, Junhwa Chi, Taejung Kim, Sooahm Rhee. Korean Society of Remote Sensing, 2022, Korean Journal of Remote Sensing Vol.38 No.6
In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral sensor or a lidar sensor. As a result of this physical offset, misalignments between images can occur along the flight direction. In particular, in a multi-sensor system the observation sensor has to be swapped regularly to mount another observation sensor, and a high cost must then be paid to acquire new calibration parameters. In this study, we establish a precise sensor model equation applicable to multiple sensors in common and propose an independent physical offset estimation method. The proposed method consists of three steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for physical offset estimation is established by extracting correspondences between ground control points and the data observed by a sensor. Finally, the physical offset is estimated from the observations, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from four regions with different latitudes and longitudes (Jeonju, Incheon, Alaska, Norway) were compared to analyze the effect of the calibration parameters. We confirmed that misalignments between images were corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed on the Incheon dataset against ground control points: for the hyperspectral image, the root mean square error (RMSE) in the X and Y directions was 0.12 m, and for the point cloud the RMSE was 0.03 m. Furthermore, the relative position accuracy at a specific point between the adjusted point cloud and the hyperspectral images was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method, and also confirming the feasibility of multi-sensor fusion. We expect that a flexible multi-sensor platform system can be operated at reduced cost through this independent parameter estimation method.
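The final estimation step can be illustrated with a minimal least-squares sketch. This is not the paper's observation equation, which also involves rotation matrices and the full sensor model; it assumes the offset appears as a constant 3-D translation between positions predicted by the initial sensor model and the corresponding ground control points.

```python
import numpy as np

def estimate_offset(predicted, observed):
    """Least-squares estimate of a constant 3-D translation offset.

    predicted: Nx3 ground positions from the initial sensor model (GPS/IMU only).
    observed:  Nx3 reference positions of the same points (e.g. GCPs).
    For a pure-translation model the least-squares solution is simply the
    mean residual over all correspondences.
    """
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return (observed - predicted).mean(axis=0)

def apply_offset(predicted, offset):
    """Correct model-predicted positions with the estimated offset."""
    return np.asarray(predicted, dtype=float) + offset
```

Once the offset is estimated from a few correspondences, it is folded back into the sensor model, so subsequent flights with the same mounting need no ground control points.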
Time-Series Data Generation from Multiple Types of Satellite Imagery through Automatic Satellite Image Collection
Yunji Nam, Sungwoo Jung, Taejung Kim, Sooahm Rhee. Korean Society of Remote Sensing, 2023, Korean Journal of Remote Sensing Vol.39 No.5
Time-series data generated from satellite imagery are a crucial resource for change detection and monitoring across various fields. Existing research on time-series data generation relies primarily on single-source imagery to maintain data uniformity, with ongoing efforts to enhance spatial and temporal resolution by utilizing diverse image sources. Despite the emphasized significance of time-series data, automated data collection and preprocessing for research purposes are notably absent. To address this limitation, we propose a system that automates the collection of satellite imagery over user-specified areas to generate time-series data. This research aims to collect data from various satellite sources over a specific region and convert them into time-series data, and an automatic satellite image collection system was developed for this purpose. Using this system, users can collect and extract data for their regions of interest and use the data immediately. Experimental results showed that freely available Landsat and Sentinel images can be acquired automatically from the web and that manually provided high-resolution satellite images can be incorporated. Comparisons between the automatically collected images and images edited on the basis of high-resolution satellite data showed minimal discrepancies, with no significant errors in the generated output.
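The core of such a system, selecting collected scenes that cover a user-specified area and ordering them into a time series, can be sketched with the standard library alone. The field names (`sensor`, `date`, `bbox`) are illustrative assumptions, not taken from the paper's system.

```python
from datetime import date

def build_time_series(scenes, aoi):
    """Select scenes whose bounding box intersects the AOI, ordered by date.

    scenes: iterable of dicts with 'sensor', 'date' (datetime.date) and
            'bbox' as (min_lon, min_lat, max_lon, max_lat).
    aoi:    bounding box in the same (min_lon, min_lat, max_lon, max_lat) form.
    """
    def intersects(a, b):
        # Two axis-aligned boxes overlap unless one lies entirely past the other.
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    hits = [s for s in scenes if intersects(s["bbox"], aoi)]
    return sorted(hits, key=lambda s: s["date"])
```

Scenes from different sensors (e.g. Landsat and Sentinel) interleave naturally in the result, since ordering depends only on acquisition date.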