RISS 학술연구정보서비스 (Academic Research Information Service)

      • SCOPUS, KCI등재

        A Study on the Performance Enhancement of Radar Target Classification Using the Two-Level Feature Vector Fusion Method

        Dae-Young Chae, In-Sik Choi, In-Ha Kim 한국전자파학회 JEES 2018 Journal of Electromagnetic Engineering and Science Vol.18 No.3

        In this paper, we propose a two-level feature vector fusion technique to improve the performance of target classification. The proposed method combines feature vectors of the early-time region and late-time region in the first-level fusion. In the second-level fusion, we combine the monostatic and bistatic features obtained in the first level. The radar cross section (RCS) of the 3D full-scale model is obtained using the electromagnetic analysis tool FEKO, and then the feature vector of the target is extracted from it. The feature vector based on the waveform structure is used as the feature vector of the early-time region, while the resonance frequency extracted using the evolutionary programming-based CLEAN algorithm is used as the feature vector of the late-time region. The results show that the two-level fusion method performs better than the one-level fusion method.
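
        The abstract above does not give implementation details; the sketch below only illustrates the two-level concatenation idea it describes, with all feature dimensions and helpers being assumptions made for the example.

```python
import numpy as np

def first_level_fusion(early_time_feat, late_time_feat):
    """Level 1: combine early-time (waveform-structure) and
    late-time (resonance-frequency) feature vectors for one geometry."""
    return np.concatenate([early_time_feat, late_time_feat])

def second_level_fusion(monostatic_feat, bistatic_feat):
    """Level 2: combine the level-1 vectors obtained from the
    monostatic and bistatic geometries."""
    return np.concatenate([monostatic_feat, bistatic_feat])

# Hypothetical feature dimensions, for illustration only.
rng = np.random.default_rng(0)
early_mono, late_mono = rng.normal(size=8), rng.normal(size=4)
early_bi,   late_bi   = rng.normal(size=8), rng.normal(size=4)

mono = first_level_fusion(early_mono, late_mono)
bi   = first_level_fusion(early_bi, late_bi)
fused = second_level_fusion(mono, bi)   # final fused feature vector
print(fused.shape)                      # (24,)
```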

      • KCI등재

        정보보안을 위한 생체 인식 모델에 관한 연구 (A Study on a Biometric Recognition Model for Information Security)

        김준영(Jun-Yeong Kim), 정세훈(Se-Hoon Jung), 심춘보(Chun-Bo Sim) 한국전자통신학회 2024 한국전자통신학회 논문지 Vol.19 No.1

        Biometric recognition is a technology that verifies a person's identity by extracting that person's biometric and behavioral characteristics with a dedicated device. In the field of biometrics, cyber threats such as forgery, duplication, and hacking of biometric traits are increasing. In response, security systems are being strengthened and made more complex, which makes them harder for individuals to use. To address this, multimodal biometric models are being studied. Existing studies have proposed feature-fusion methods, but comparisons between those methods are insufficient. Therefore, in this paper we compare and evaluate fusion methods for a multimodal biometric model using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy and high stability with the 'Feature-Level' fusion method. However, because the EfficientNet-B7 model is large, research on lightweight models is needed for biometric feature fusion.
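
        As a rough illustration of the fusion levels compared above, the following sketch contrasts feature-level fusion (concatenating per-modality embeddings) with score-level fusion (combining per-modality match scores); the modality names, dimensions, and weighting are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def feature_level_fusion(embeddings):
    """Feature-level fusion: concatenate per-modality embeddings
    (e.g. fingerprint, face, iris) into one vector before classification."""
    return np.concatenate(embeddings)

def score_level_fusion(scores, weights=None):
    """Score-level fusion: each modality is classified separately and the
    per-class match scores are combined, here by a (weighted) sum."""
    scores = np.stack(scores)                        # (n_modalities, n_classes)
    w = np.ones(len(scores)) if weights is None else np.asarray(weights)
    return (w[:, None] * scores).sum(axis=0)

# Hypothetical 3-modality example with 5 enrolled identities.
rng = np.random.default_rng(1)
fp_emb, face_emb, iris_emb = (rng.normal(size=128) for _ in range(3))
fused_feature = feature_level_fusion([fp_emb, face_emb, iris_emb])  # 384-D vector

fp_s, face_s, iris_s = (rng.random(5) for _ in range(3))
fused_score = score_level_fusion([fp_s, face_s, iris_s])
predicted_identity = int(np.argmax(fused_score))
```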

      • KCI등재

        Real-time surgical tool detection in computer-aided surgery based on enhanced feature-fusion convolutional neural network

        Liu Kaidi, Zhao Zijian, Shi Pan, Li Feng, Song He 한국CDE학회 2022 Journal of computational design and engineering Vol.9 No.3

        Surgical tool detection is a key technology in computer-assisted surgery, and can help surgeons to obtain more comprehensive visual information. Currently, a data shortage problem still exists in surgical tool detection. In addition, some surgical tool detection methods may not strike a good balance between detection accuracy and speed. Given the above problems, in this study a new Cholec80-tool6 dataset was manually annotated, which provided a better validation platform for surgical tool detection methods. We propose an enhanced feature-fusion network (EFFNet) for real-time surgical tool detection. FENet20 is the backbone of the network and performs feature extraction more effectively. EFFNet is the feature-fusion part and performs two rounds of feature fusion to enhance the utilization of low-level and high-level feature information. The latter part of the network contains the weight fusion and predictor responsible for the output of the prediction results. The performance of the proposed method was tested using the ATLAS Dione and Cholec80-tool6 datasets, yielding mean average precision values of 97.0% and 95.0% with 21.6 frames per second, respectively. Its speed met the real-time standard and its accuracy outperformed that of other detection methods.
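
        The paper's EFFNet and FENet20 are not reproduced here; the snippet below is only a minimal sketch of the general idea of fusing a low-level and a high-level feature map in two rounds, with all layer choices and channel sizes assumed for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoRoundFusion(nn.Module):
    """Illustrative sketch (not the paper's EFFNet): fuse a low-level and a
    high-level feature map twice, first top-down, then bottom-up."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_ch, out_ch, 1)
        self.reduce_high = nn.Conv2d(high_ch, out_ch, 1)
        self.refine1 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.refine2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, low, high):
        low, high = self.reduce_low(low), self.reduce_high(high)
        # Round 1: top-down, bring high-level semantics to the fine map.
        up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        fine = self.refine1(low + up)
        # Round 2: bottom-up, push fine detail back to the coarse map.
        down = F.max_pool2d(fine, kernel_size=fine.shape[-2] // high.shape[-2])
        coarse = self.refine2(high + down)
        return fine, coarse

# Hypothetical backbone feature maps at two scales.
fine, coarse = TwoRoundFusion(64, 256, 128)(
    torch.randn(1, 64, 80, 80), torch.randn(1, 256, 20, 20))
```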

      • SCOPUS, KCI등재

        Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

        Hui Zeng, Yanrong Liu, Siqi Li, JianYong Che, Xiuqing Wang Korea Information Processing Society 2018 Journal of information processing systems Vol.14 No.1

        This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which can investigate the useful discriminative information of the heat kernel signature (HKS) descriptor and the wave kernel signature (WKS) descriptor. At first, we compute the 2D shape distributions of the two kinds of descriptors to represent the 3D model and use them as the input to the networks. Then we construct two convolutional neural networks for the HKS distribution and the WKS distribution separately, and use the multi-feature fusion layer to connect them. The fusion layer not only can exploit more discriminative characteristics of the two descriptors, but also can complement the correlated information between the two kinds of descriptors. Furthermore, to further improve the performance of the description ability, the cross-connected layer is built to combine the low-level features with high-level features. Extensive experiments have validated the effectiveness of the designed multi-feature fusion learning method.
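
        A minimal two-branch sketch of the idea described above, with one small CNN per descriptor distribution joined by a fusion layer; the layer sizes, input resolution, and class count are assumptions, and the paper's cross-connected layer is omitted.

```python
import torch
import torch.nn as nn

class TwoBranchFusionNet(nn.Module):
    """Sketch: one CNN branch per descriptor distribution (HKS and WKS),
    joined by a fusion layer before classification."""
    def __init__(self, n_classes=10):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.hks_branch, self.wks_branch = branch(), branch()
        self.fusion = nn.Sequential(nn.Linear(2 * 32 * 16, 128), nn.ReLU())
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, hks_dist, wks_dist):
        fused = torch.cat([self.hks_branch(hks_dist),
                           self.wks_branch(wks_dist)], dim=1)  # fusion layer
        return self.classifier(self.fusion(fused))

# Hypothetical 2D shape distributions as 32x32 single-channel inputs.
logits = TwoBranchFusionNet()(torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32))
```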

      • 공간 적응적 가중치를 이용한 가시광과 열화상 영상 융합 방법 (A Visible and Thermal Image Fusion Method Using Spatially Adaptive Weights)

        Minahil Syeda Zille, Jun-Hyung Kim, Youngbae Hwang 한국차세대컴퓨팅학회 2021 한국차세대컴퓨팅학회 학술대회 Vol.2021 No.05

        In this paper, a deep learning based fusion technique is presented for the visible and infrared image fusion. In general, the image fusion process is composed of three stages: feature extraction by an encoder, feature fusion, and the reconstruction of the fused image by a decoder. We propose a feature fusion scheme that gives spatially adaptive weights to each infrared and visible pair in the fusion process. Features of the infrared image are used to determine the weights based on the observation that only the high activation region in IR contains the salient information. We conduct both quantitative and qualitative analysis on two datasets. Experimental results show that our fusion method achieves better performance than the previous method.
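
        A minimal sketch of the spatially adaptive weighting idea, assuming the weight map is derived from the mean absolute activation of the infrared features; the actual network and weighting function in the paper may differ.

```python
import torch

def spatially_adaptive_fusion(feat_vis, feat_ir):
    """Sketch: derive a per-pixel weight map from the infrared feature
    activation (high-activation regions carry the salient IR information)
    and blend the visible and infrared feature maps accordingly."""
    activation = feat_ir.abs().mean(dim=1, keepdim=True)      # (N, 1, H, W)
    w_ir = torch.sigmoid(activation - activation.mean())      # weights in (0, 1)
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis

# Hypothetical encoder features for one visible/infrared pair.
fused = spatially_adaptive_fusion(torch.randn(1, 64, 120, 160),
                                  torch.randn(1, 64, 120, 160))
```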

      • KCI등재

        Design of Ensemble Fuzzy-RBF Neural Networks Based on Feature Extraction and Multi-feature Fusion for GIS Partial Discharge Recognition and Classification

        Zhou Kun, Oh Sung-Kwun, Qiu Jianlong 대한전기학회 2022 Journal of Electrical Engineering & Technology Vol.17 No.1

        A new topology of ensemble fuzzy-radial basis function neural networks (EFRBFNN) based on a multi-feature fusion strategy is proposed to recognize and classify reliable on-site partial discharge (PD) patterns. This study is concerned with the design of an ensemble neural network based on fuzzy rules and the enhancement of its recognition capability with the aid of preprocessing technologies and a multi-feature fusion strategy. The key points are summarized as follows: (1) principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimensionality of the input space and to extract features; (2) statistical characteristics (SC) are obtained as complementary characteristics of the PD; (3) the proposed network architecture consists of two-branch radial basis function neural networks (RBFNN) based on fuzzy rules, which can effectively reflect the distribution of the input data. Two types of RBFNN are designed, based on hard c-means (HCM) and fuzzy c-means (FCM) clustering, respectively. To fuse the features learned by PCA and LDA, we design a multi-feature fusion strategy that not only adjusts the contribution of different features to the networks but also enhances the recognition ability for PD. The performance of the proposed networks is evaluated using PD data obtained from four types of defects in a laboratory environment, and noise that might occur in power grids is also considered. The experimental results show that the proposed EFRBFNN satisfies the recognition requirements for the PD datasets.
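
        As a rough sketch of the preprocessing and fusion steps listed above (not the EFRBFNN itself), the snippet below extracts PCA and LDA features with scikit-learn and fuses them with adjustable weights; the data shapes and weight values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical PD data: 200 samples, 64 raw statistical features, 4 defect classes.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

pca_feat = PCA(n_components=10).fit_transform(X)              # unsupervised projection
lda_feat = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)

# Multi-feature fusion: weight each view before concatenation so its
# contribution to the downstream classifier can be adjusted.
w_pca, w_lda = 0.6, 0.4                                       # illustrative weights
fused = np.hstack([w_pca * pca_feat, w_lda * lda_feat])       # shape (200, 13)
```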

      • KCI등재

        MSFM: Multi-view Semantic Feature Fusion Model for Chinese Named Entity Recognition

        Jingxin Liu, Jieren Cheng, Xin Peng, Zeli Zhao, Xiangyan Tang, Victor S. Sheng 한국인터넷정보학회 2022 KSII Transactions on Internet and Information Systems Vol.16 No.6

        Named entity recognition (NER) is an important basic task in the field of Natural Language Processing (NLP). Recently, deep learning approaches that extract word-segmentation or character features have proved effective for Chinese Named Entity Recognition (CNER). However, because these approaches focus on only some of the features, they lack textual information mining from multiple perspectives and dimensions, so the model cannot fully capture semantic features. To tackle this problem, we propose a novel Multi-view Semantic Feature Fusion Model (MSFM). The proposed model mainly consists of two core components, namely a Multi-view Semantic Feature Fusion Embedding Module (MFEM) and a Multi-head Self-Attention Mechanism Module (MSAM). Specifically, the MFEM extracts character features, word boundary features, radical features, and pinyin features of Chinese characters. The acquired font shape, font sound, and font meaning features are fused to enhance the semantic information of Chinese characters at different granularities. Moreover, the MSAM is used to capture the dependencies between characters in a multi-dimensional subspace to better understand the semantic features of the context. Extensive experimental results on four benchmark datasets show that our method improves the overall performance of the CNER model.
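
        A minimal sketch of the multi-view embedding plus multi-head self-attention idea described above; the vocabulary sizes, embedding dimensions, and fusion by concatenation-plus-projection are assumptions for illustration, not the MSFM configuration.

```python
import torch
import torch.nn as nn

class MultiViewEmbedding(nn.Module):
    """Sketch: per-character embeddings for the character itself, its
    word-boundary tag, its radical, and its pinyin are fused (concatenation
    plus projection), then multi-head self-attention models dependencies
    between characters."""
    def __init__(self, dims=(64, 8, 16, 16), d_model=128, n_heads=4):
        super().__init__()
        vocab_sizes = (6000, 4, 300, 500)   # char / boundary / radical / pinyin (assumed)
        self.embeds = nn.ModuleList(nn.Embedding(v, d)
                                    for v, d in zip(vocab_sizes, dims))
        self.project = nn.Linear(sum(dims), d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, char_ids, boundary_ids, radical_ids, pinyin_ids):
        views = [emb(ids) for emb, ids in
                 zip(self.embeds, (char_ids, boundary_ids, radical_ids, pinyin_ids))]
        x = self.project(torch.cat(views, dim=-1))   # fused per-character features
        out, _ = self.attn(x, x, x)                  # contextual features
        return out

# Hypothetical batch of 2 sentences, 20 characters each.
ids = [torch.randint(0, 4, (2, 20)) for _ in range(4)]
features = MultiViewEmbedding()(*ids)                # (2, 20, 128)
```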

      • KCI등재

        An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

        Huihui Xu, Fei Li 한국정보처리학회 2022 Journal of information processing systems Vol.18 No.6

        The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. For generating depth maps with better details, we present an efficacious monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module improves features based on coordinate attention to enhance the predicted effect, whereas the multi-scale module integrates useful low- and high-level contextual features with higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features to generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors from the perspective of depth and scale-invariant gradients, which contribute to preserving rich details. We conducted the experiments on public RGBD datasets, and the evaluation results show that the proposed scheme can considerably enhance the accuracy of depth prediction, achieving 0.051 for log10 error and 0.992 for δ < 1.25³ on the NYUv2 dataset.
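
        The snippet below is a generic coordinate attention block, included only to illustrate the mechanism the abstract refers to; channel sizes and the reduction ratio are assumptions, and the paper's full framework is not reproduced.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of coordinate attention: the feature map is pooled along height
    and width separately, so the attention weights keep positional
    information in each direction."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        mid = max(ch // reduction, 8)
        self.conv1 = nn.Conv2d(ch, mid, 1)
        self.act = nn.ReLU()
        self.conv_h = nn.Conv2d(mid, ch, 1)
        self.conv_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                        # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # gate along height
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # gate along width
        return x * a_h * a_w

# Hypothetical 64-channel feature map.
out = CoordinateAttention(64)(torch.randn(1, 64, 30, 40))
```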

      • KCI등재

        Enhanced Face Recognition by Fusion of Global and Local Features under Varying Illumination

        Tan Dat Trinh, Jin Young Kim(김진영), Pham The Bao 한국정보기술학회 2014 한국정보기술학회논문지 Vol.12 No.12

        In this paper, we propose a new method to enhance the performance of face recognition under varying lighting conditions. We combine the strengths of illumination normalization, global and local features, and feature-level and score-level fusion. Specifically, we introduce two main contributions: 1) we propose feature-level fusion based on global and local Local Binary Pattern (LBP) features, where Kernel PCA (KPCA) is used to reduce the dimension of the combined features, which are then used as the input of an SVM classifier; and 2) we further improve the performance of face recognition significantly by applying score-level fusion between the SVMs based on global and local LBP features. An optimization method based on Particle Swarm Optimization (PSO) is used to find the optimal weights for fusing the aforementioned information at score level. The experimental results on a Korean face database demonstrate that our proposed methods outperform the standard global feature, the local feature, and other well-known methods. Specifically, the best recognition rate is 100% for indoor images and 94.5% for outdoor images.
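
        A small sketch of score-level fusion with a single blending weight; the paper optimizes the weights with PSO, which is replaced here by a simple grid search as a stand-in, and all data shapes are illustrative.

```python
import numpy as np

def score_level_fusion(score_global, score_local, w):
    """Blend the class scores from the global-LBP classifier and the
    local-LBP classifier with a weight w in [0, 1]."""
    return w * score_global + (1.0 - w) * score_local

def pick_weight(score_global, score_local, labels,
                candidates=np.linspace(0, 1, 101)):
    """Stand-in for the PSO search: grid-search the fusion weight that
    maximizes accuracy on a validation set (illustrative only)."""
    accs = [(np.argmax(score_level_fusion(score_global, score_local, w), axis=1)
             == labels).mean() for w in candidates]
    return candidates[int(np.argmax(accs))]

# Hypothetical validation scores for 50 images over 10 enrolled subjects.
rng = np.random.default_rng(3)
sg, sl = rng.random((50, 10)), rng.random((50, 10))
labels = rng.integers(0, 10, size=50)
best_w = pick_weight(sg, sl, labels)
```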
