RISS Academic Research Information Service

      • KCI-indexed

        Multi-scale Non-local Feature Enhancement Network for Robust Small-object Detection

        Jun Ho Choi, Seunghyun Lee, Dae Ha Kim, Byung Cheol Song, The Institute of Electronics and Information Engineers (IEIE), 2020, IEIE Transactions on Smart Processing & Computing Vol.9 No.4

        Object detection involves acquiring the position and classification information of objects simultaneously in an image acquired by an image sensor. In general, a small object that occupies a relatively small area within an image is difficult to detect because the information about it contained in the image is fundamentally inadequate. A person can recognize small objects that are very far away using contextual information, such as the background or the relationship with nearby objects. Therefore, it is necessary to enhance the characteristics of small objects using various context information in the image. A new feature enhancement neural network is proposed to enhance feature maps by extracting the relationships between non-local features of various sizes. This paper presents an object detection algorithm that is robust in small-object detection based on this feature enhancement network. A feature map from the feature-extraction network is first branched into multiple feature maps with different receptive fields using respective convolution layers. Non-local relationships between these feature maps are then computed and added to the original feature map for feature enhancement. Finally, the proposed network reflects the overall context information of the image through the enhanced feature map, which makes it more robust for detecting small objects. Experimental results showed that the proposed method outperforms state-of-the-art techniques in small-object detection on the KITTI and PASCAL VOC datasets.
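
        A minimal sketch of the idea (not the authors' implementation; layer names, channel sizes, and the exact attention form are assumptions): branch a backbone feature map through convolutions with different dilation rates, compute non-local (attention-style) relations between the original map and the multi-scale branches, and add the result back as a residual enhancement.

```python
# Hypothetical sketch of multi-scale non-local feature enhancement in PyTorch.
import torch
import torch.nn as nn


class MultiScaleNonLocalEnhancement(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4), reduced: int = 64):
        super().__init__()
        # One branch per receptive-field scale (dilated 3x3 convolutions).
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, reduced, 3, padding=d, dilation=d) for d in dilations]
        )
        self.query = nn.Conv2d(channels, reduced, 1)
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)                       # (B, HW, C')
        # Keys/values: multi-scale branch features stacked along the spatial axis.
        kv = torch.cat([br(x).flatten(2) for br in self.branches], dim=2)  # (B, C', N)
        attn = torch.softmax(q @ kv / kv.shape[1] ** 0.5, dim=-1)          # (B, HW, N)
        ctx = (attn @ kv.transpose(1, 2)).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(ctx)                                           # residual enhancement


if __name__ == "__main__":
    feat = torch.randn(1, 256, 32, 32)           # e.g., a backbone feature map
    enhanced = MultiScaleNonLocalEnhancement(256)(feat)
    print(enhanced.shape)                        # torch.Size([1, 256, 32, 32])
```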

      • KCI-indexed

        Vehicle Detection in Aerial Images Based on Hyper Feature Map in Deep Convolutional Network

        Jiaquan Shen, Ningzhong Liu, Han Sun, Xiaoli Tao, Qiangyi Li, Korean Society for Internet Information (KSII), 2019, KSII Transactions on Internet and Information Systems Vol.13 No.4

        Vehicle detection in aerial images is an interesting and challenging research topic. Most traditional vehicle detection methods are based on sliding-window search, but these methods are insufficient for extracting object features and come with heavy computational costs. Recent studies have shown that convolutional neural networks have made significant progress in computer vision, especially Faster R-CNN. However, this algorithm mainly detects objects in natural scenes and is not suitable for detecting small objects in aerial views. In this paper, an accurate and effective vehicle detection algorithm based on Faster R-CNN is proposed. Our method fuses a hyper feature map network with Eltwise and Concat models, which is more conducive to extracting small-object features. Moreover, anchor boxes are set according to object size, which also effectively improves detection performance. We evaluate the detection performance of our method on the Munich dataset and our collected dataset, with improvements in accuracy and effectiveness compared with other methods. Our model achieves a recall rate of 82.2% and an accuracy of 90.2% on the Munich dataset, an increase of 2.5 and 1.3 percentage points, respectively, over state-of-the-art methods.
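
        A minimal sketch of the fusion idea described above (assumed module names and channel sizes, not the paper's network): a shallow high-resolution map and a deep semantic map are combined with an element-wise sum (Eltwise) and channel concatenation (Concat) to form a hyper feature map that retains fine detail for small vehicles.

```python
# Hypothetical sketch of hyper-feature-map fusion with Eltwise and Concat in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperFeatureFusion(nn.Module):
    def __init__(self, c_shallow: int, c_deep: int, c_out: int):
        super().__init__()
        self.lateral = nn.Conv2d(c_deep, c_shallow, 1)          # match channels for Eltwise
        self.reduce = nn.Conv2d(c_shallow * 2, c_out, 3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the deep map to the shallow map's resolution.
        deep_up = F.interpolate(self.lateral(deep), size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        eltwise = shallow + deep_up                             # Eltwise fusion (sum)
        concat = torch.cat([eltwise, shallow], dim=1)           # Concat fusion (channels)
        return self.reduce(concat)                              # hyper feature map


if __name__ == "__main__":
    shallow = torch.randn(1, 256, 64, 64)    # high-resolution, fine detail
    deep = torch.randn(1, 512, 32, 32)       # low-resolution, semantic
    hyper = HyperFeatureFusion(256, 512, 256)(shallow, deep)
    print(hyper.shape)                        # torch.Size([1, 256, 64, 64])
```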

      • KCI-indexed

        Comparative Performance Analysis of Pixel-based Change Detection Methods for Small-object Change Detection

        Junghoon Seo, Wonkyu Park, Taejung Kim, Korean Society of Remote Sensing, 2021, Korean Journal of Remote Sensing Vol.37 No.2

        Existing change detection research has focused on changes of land use and land cover (LULC), damaged areas, or large vegetated and water regions. On the other hand, the increasing temporal and spatial resolution of satellite images strongly suggests the feasibility of change detection for small objects such as vehicles and ships. To check this feasibility, this paper analyzes the performance of existing pixel-based change detection methods on small objects. We applied pixel differencing, PCA (principal component analysis), MAD (Multivariate Alteration Detection), and IR-MAD (Iteratively Reweighted MAD) to Kompsat-3A and Google Earth images taken within 10 days of each other. We extracted ground references for changed and unchanged small objects from the images and used them to analyze the change detection results. Our analysis showed that MAD and IR-MAD, which are known to perform best over LULC and large areal changes, offered the best performance on small-object changes among the methods tested. It also showed that including a spectral band with high reflectivity for the object of interest in the analysis raises the small-object change detection rate.
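
        A minimal sketch of two of the pixel-based baselines mentioned above, band differencing and PCA on the per-pixel difference vectors (the k-sigma thresholds are illustrative assumptions; MAD and IR-MAD are omitted for brevity):

```python
# Hypothetical sketch of simple pixel-based change detection over a co-registered pair.
import numpy as np


def difference_change_map(img_t1: np.ndarray, img_t2: np.ndarray, k: float = 2.0):
    """img_t1, img_t2: (H, W, bands) co-registered images. Returns a boolean change mask."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    magnitude = np.linalg.norm(diff, axis=-1)               # per-pixel change magnitude
    threshold = magnitude.mean() + k * magnitude.std()      # simple global k-sigma threshold
    return magnitude > threshold


def pca_change_map(img_t1: np.ndarray, img_t2: np.ndarray, k: float = 2.0):
    """Project per-pixel difference vectors onto their first principal component."""
    diff = (img_t2 - img_t1).reshape(-1, img_t1.shape[-1]).astype(np.float64)
    diff -= diff.mean(axis=0)
    # Eigen-decomposition of the band covariance matrix of the difference image.
    _, vecs = np.linalg.eigh(np.cov(diff, rowvar=False))
    pc1 = diff @ vecs[:, -1]                                # largest-eigenvalue component
    change = np.abs(pc1) > np.abs(pc1).mean() + k * np.abs(pc1).std()
    return change.reshape(img_t1.shape[:2])


if __name__ == "__main__":
    t1 = np.random.rand(128, 128, 4)     # stand-ins for bi-temporal multispectral tiles
    t2 = t1.copy()
    t2[40:48, 40:48] += 0.8              # inject a small "object" change
    print(difference_change_map(t1, t2).sum(), pca_change_map(t1, t2).sum())
```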

      • An Image Splitting Technique for Small-object Detection

        Younggeol Cho, Junhee Lee, Korean Institute of Information Technology (KIIT), 2021, Proceedings of KIIT Conference Vol.2021 No.11

        Surveillance and reconnaissance are important for mission success on the battlefield. Recently, deep learning-based object detection has been introduced in the defense field, but most object detectors have difficulty detecting small objects, which is critical for surveillance and reconnaissance missions that require monitoring objects over long distances. This paper proposes a technique that splits an input image of higher resolution than the detector's input size into smaller images, feeds them to the detector, and merges the results, so that small objects can be detected without modifying the model. The proposed method improved the accuracy of small-object detection.
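
        A minimal sketch of the split-and-merge idea (the detector interface, tile size, and overlap are assumptions, not the authors' settings): tile the large image, run the unmodified detector on each tile, shift tile-local boxes back to full-image coordinates, and suppress duplicates from overlapping tiles with NMS.

```python
# Hypothetical sketch of tiled inference with result merging.
from typing import Callable, List, Tuple

import numpy as np

Box = Tuple[float, float, float, float, float]  # x1, y1, x2, y2, score


def detect_tiled(image: np.ndarray,
                 detector: Callable[[np.ndarray], List[Box]],
                 tile: int = 640, overlap: int = 128,
                 iou_thresh: float = 0.5) -> List[Box]:
    h, w = image.shape[:2]
    step = tile - overlap
    boxes: List[Box] = []
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            crop = image[y0:y0 + tile, x0:x0 + tile]
            for x1, y1, x2, y2, s in detector(crop):
                boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, s))  # to global coords
    return nms(boxes, iou_thresh)


def nms(boxes: List[Box], iou_thresh: float) -> List[Box]:
    keep: List[Box] = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_thresh for k in keep):
            keep.append(box)
    return keep


def iou(a: Box, b: Box) -> float:
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0
```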

      • SCIE / SCOPUS / KCI-indexed

        Center point prediction using Gaussian elliptic and size component regression using small solution space for object detection

        Yuantian Xia, Shuhan Lu, Longhe Wang, Lin Li, Korean Society for Internet Information (KSII), 2023, KSII Transactions on Internet and Information Systems Vol.17 No.8

        The anchor-free object detector CenterNet regards each object as a center point and predicts it based on a Gaussian circular region. For each object center, CenterNet directly regresses the width and height of the object and thereby obtains its boundary. However, the critical range of the object's center point cannot be accurately limited by a Gaussian circular region, resulting in many low-quality center predictions. In addition, because the width and height of different objects differ greatly, directly regressing them makes the model difficult to converge and loses the intrinsic relationship between them, reducing the stability and consistency of accuracy. To address these problems, we propose a center point prediction method based on a Gaussian elliptic region and a size-component regression method based on a small solution space. First, we construct a Gaussian ellipse region that can accurately predict the object's center point. Second, we re-encode the width and height of the objects, which significantly reduces the regression solution space and improves the convergence speed of the model. Finally, we jointly decode the predicted components, enhancing the internal relationship between the size components and improving accuracy consistency. Experiments show that, using CenterNet as the baseline and Hourglass-104 as the backbone, our improved model achieves 44.7% on the MS COCO dataset, which is 2.6% higher than the baseline.
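
        A minimal sketch of an elliptical Gaussian heatmap target (the radius rule box/alpha is an illustrative assumption, not the paper's formulation): the Gaussian spread follows the box width and height separately instead of a single circular radius.

```python
# Hypothetical sketch of drawing an elliptical Gaussian center-point target.
import numpy as np


def draw_elliptic_gaussian(heatmap: np.ndarray, cx: int, cy: int,
                           box_w: float, box_h: float, alpha: float = 6.0) -> np.ndarray:
    """heatmap: (H, W) target for one class; (cx, cy): integer center; alpha: radius divisor."""
    h, w = heatmap.shape
    sigma_x, sigma_y = max(box_w / alpha, 1.0), max(box_h / alpha, 1.0)
    ys, xs = np.ogrid[:h, :w]
    gauss = np.exp(-(((xs - cx) ** 2) / (2 * sigma_x ** 2)
                     + ((ys - cy) ** 2) / (2 * sigma_y ** 2)))
    np.maximum(heatmap, gauss, out=heatmap)   # keep the peak where objects overlap
    return heatmap


if __name__ == "__main__":
    hm = np.zeros((128, 128), dtype=np.float64)
    draw_elliptic_gaussian(hm, cx=40, cy=64, box_w=60, box_h=20)  # wide, flat object
    print(hm.max(), hm[64, 40])               # 1.0 at the object center
```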

      • KCI-indexed

        Light-weight Deep Neural Network for Small Vehicle Detection using Model-scale YOLOv4

        김민기, 김희광, Chanyeong Park, Joonki Paik, The Institute of Electronics and Information Engineers (IEIE), 2023, IEIE Transactions on Smart Processing & Computing Vol.12 No.5

        In this paper, we present a light-weight deep neural network based on an efficiently scaled YOLOv4 model for detecting small objects in drone images. Since drone-captured images mainly contain small objects, we modified the YOLOv4 model by eliminating the head layer responsible for detecting large objects. This modification significantly reduces the model's parameters and the processing time for non-maximum suppression (NMS). Moreover, the appropriately scaled model for small-object detection can be deployed on a drone. To achieve a light-weight network for small-object detection with minimal performance degradation, we used the attention stacked hourglass network (ASHN) for feature fusion. In extensive experiments, the proposed network outperformed the baseline network on several datasets.
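
        A minimal sketch of the head-scaling idea (strides, anchor counts, and layer shapes are assumptions, not the paper's model): dropping the coarsest, large-object detection head shrinks the head parameters and the number of candidate boxes passed to NMS.

```python
# Hypothetical sketch of a YOLO-style multi-scale head with the large-object scale removed.
import torch
import torch.nn as nn


class ScaledYoloHead(nn.Module):
    def __init__(self, channels=(128, 256, 512), num_classes=10,
                 anchors_per_scale=3, keep_large_object_head=False):
        super().__init__()
        # Drop the last (coarsest, large-object) scale when requested.
        used = channels if keep_large_object_head else channels[:-1]
        out_ch = anchors_per_scale * (5 + num_classes)   # box(4) + objectness(1) + classes
        self.heads = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in used])

    def forward(self, features):
        # features: list of maps ordered fine -> coarse; only the kept scales are used.
        return [head(f) for head, f in zip(self.heads, features)]


if __name__ == "__main__":
    feats = [torch.randn(1, 128, 80, 80), torch.randn(1, 256, 40, 40),
             torch.randn(1, 512, 20, 20)]
    small_only = ScaledYoloHead()(feats)
    n_boxes = sum(3 * o.shape[-1] * o.shape[-2] for o in small_only)
    print(len(small_only), n_boxes)   # 2 scales; fewer candidates reach NMS
```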

      • PCA-based YOLO for Small-object Detection on Embedded Devices

        Han-Eum Lee, Cheong-Hwan Hur, Hyun-Seok Kim, Hyeon-Taek Hwang, Sang-Gil Kang, Korean Institute of Information Technology (KIIT), 2021, Proceedings of KIIT Conference Vol.2021 No.6

        Object detection remains an active research area in computer vision, and significant advances have been achieved through the design of deep convolutional neural networks. Despite this success, one of the biggest obstacles to developing networks for small-object detection in edge and embedded scenarios is the difficulty of extracting features for small objects. In this work, we introduce PCA-based YOLO, a deep convolutional neural network that extracts highly granular features for small-object detection. In general, the convolutional layers used for feature extraction capture common features of an image and pass them to the fully connected layers; most of the information on small objects is lost in this process, making classification very difficult. We build a PCA-based YOLO network by integrating PCA into this process to amplify information about the input data and mitigate the information loss. We deploy the network on a Jetson AGX Xavier embedded module and, in the experimental section, perform various experiments using 4,783 drone images spanning nine classes to demonstrate the method.
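
        One possible reading of the PCA preprocessing described above, sketched minimally (the patch size, component count, and the way the projections are consumed are assumptions, not the paper's design): project local image patches onto their top principal components and expose the projections as extra channels for the detector.

```python
# Hypothetical sketch of PCA-derived input channels for a small-object detector.
import numpy as np


def pca_feature_channels(image: np.ndarray, patch: int = 4, n_components: int = 3) -> np.ndarray:
    """image: (H, W, C) float array; returns (H//patch, W//patch, n_components)."""
    h, w, c = image.shape
    hp, wp = h // patch, w // patch
    # Collect non-overlapping patches as flat vectors.
    patches = (image[:hp * patch, :wp * patch]
               .reshape(hp, patch, wp, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(hp * wp, patch * patch * c))
    centered = patches - patches.mean(axis=0)
    # Principal directions from the covariance of the patch vectors.
    _, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    top = vecs[:, -n_components:]                      # strongest components
    return (centered @ top).reshape(hp, wp, n_components)


if __name__ == "__main__":
    img = np.random.rand(256, 256, 3)
    extra = pca_feature_channels(img)                  # could be concatenated to the
    print(extra.shape)                                 # detector input: (64, 64, 3)
```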

      • KCI-indexed

        Developing and Evaluating Deep Learning Algorithms for Object Detection: Key Points for Achieving Superior Model Performance

        Oh Jang-Hoon, Kim Hyug-Gi, Lee Kyung Mi, Korean Society of Radiology, 2023, Korean Journal of Radiology Vol.24 No.7

        In recent years, artificial intelligence, especially object detection-based deep learning in computer vision, has made significant advancements, driven by the development of computing power and the widespread use of graphics processing units. Object detection-based deep learning techniques have been applied in various fields, including the medical imaging domain, where remarkable achievements have been reported in disease detection. However, the application of deep learning does not always guarantee satisfactory performance, and researchers have been employing trial and error to identify the factors contributing to performance degradation and enhance their models. Moreover, due to the black-box problem, the intermediate processes of a deep learning network cannot be comprehended by humans; as a result, identifying problems in a deep learning model that exhibits poor performance can be challenging. This article highlights potential issues that may cause performance degradation at each deep learning step in the medical imaging domain and discusses factors that must be considered to improve the performance of deep learning models. Researchers who wish to begin deep learning research can reduce the required amount of trial and error by understanding the issues discussed in this study.
