RISS Academic Research Information Service

      • KCI-indexed

        Content-Based Image Retrieval of Chest CT with Convolutional Neural Network for Diffuse Interstitial Lung Disease: Performance Assessment in Three Major Idiopathic Interstitial Pneumonias

        Hwang Hye Jeon, Seo Joon Beom, Lee Sang Min, Kim Eun Young, Park Beomhee, Bae Hyun-Jin, Kim Namkug 대한영상의학회 2021 Korean Journal of Radiology Vol.22 No.2

        Objective: To assess the performance of content-based image retrieval (CBIR) of chest CT for diffuse interstitial lung disease (DILD). Materials and Methods: The database comprised 246 pairs of chest CTs (initial and follow-up CTs within two years) from 246 patients with usual interstitial pneumonia (UIP, n = 100), nonspecific interstitial pneumonia (NSIP, n = 101), and cryptogenic organizing pneumonia (COP, n = 45). Sixty cases (30 UIP, 20 NSIP, and 10 COP) were selected as queries. The CBIR system retrieved the five CTs most similar to a query from the database by comparing six image patterns of DILD (honeycombing, reticular opacity, emphysema, ground-glass opacity, consolidation, and normal lung), which were automatically quantified and classified by a convolutional neural network. We assessed the rate of retrieving the same pairs of query CTs and the number of CTs with the same disease class as the query in the top 1–5 retrievals. Chest radiologists evaluated the similarity between retrieved CTs and queries using a 5-point grading scale (5, almost identical; 4, same disease; 3, even likelihood of same disease; 2, likely different; 1, different disease). Results: The rate of retrieving the same pairs of query CTs was 61.7% (37/60) in the top 1 retrieval and 81.7% (49/60) in the top 1–5 retrievals. The CBIR system retrieved the same pairs of query CTs more often for UIP than for NSIP and COP (p = 0.008 and 0.002). On average, 4.17 of the five retrieved CTs belonged to the same disease class as the query. Radiologists rated 71.3% to 73.0% of the retrieved CTs with a similarity score of 4 or 5. Conclusion: The proposed CBIR system showed good performance in retrieving chest CTs with similar patterns for DILD.
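The retrieval step described in this abstract can be sketched in a few lines. The pattern names and the Euclidean distance measure below are illustrative assumptions; the paper only states that six quantified patterns are compared.

```python
import numpy as np

# Hypothetical sketch: each CT is summarized by the volume fractions of the
# six quantified patterns (honeycombing, reticular opacity, emphysema,
# ground-glass opacity, consolidation, normal lung), and the CTs closest to
# the query by Euclidean distance are retrieved.
def retrieve_similar(query_vec, database, top_k=5):
    """Return indices of the top_k database CTs closest to the query.

    query_vec : (6,) per-pattern volume fractions of the query CT
    database  : (N, 6) array, one row per CT in the database
    """
    dists = np.linalg.norm(database - query_vec, axis=1)
    return np.argsort(dists)[:top_k]

# Toy example with three database CTs; the query is nearest to the first.
db = np.array([[0.40, 0.30, 0.00, 0.20, 0.00, 0.10],
               [0.00, 0.10, 0.50, 0.10, 0.10, 0.20],
               [0.10, 0.10, 0.10, 0.10, 0.10, 0.50]])
query = np.array([0.38, 0.32, 0.00, 0.20, 0.00, 0.10])
print(retrieve_similar(query, db, top_k=2))  # [0 2]
```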

      • SCIE, SCOPUS

        Object-oriented convolutional features for fine-grained image retrieval in large surveillance datasets

        Ahmad, Jamil; Muhammad, Khan; Bakshi, Sambit; Baik, Sung Wook North-Holland 2018 Future Generation Computer Systems Vol.81 No.-

        Large-scale visual surveillance generates huge volumes of data at a rapid pace, giving rise to massive image repositories. Efficient and reliable access to relevant data in these ever-growing databases is highly challenging due to the complex nature of surveillance objects. Furthermore, inter-class visual similarity between vehicles requires extraction of fine-grained, highly discriminative features. In recent years, features from deep convolutional neural networks (CNNs) have exhibited state-of-the-art performance in image retrieval. However, these features have been used without regard to their sensitivity to objects of a particular class. In this paper, we propose an object-oriented feature selection mechanism for deep convolutional features from a pre-trained CNN. Convolutional feature maps from a deep layer are selected based on analysis of their responses to surveillance objects. The selected features represent semantic features of surveillance objects and their parts with minimal influence from the background, effectively eliminating the need for a background-removal step prior to feature extraction. Layer-wise mean activations from the selected feature maps form the discriminative descriptor for each object. These object-oriented convolutional features (OOCF) are then projected onto a low-dimensional Hamming space using locality-sensitive hashing. The resulting compact binary hash codes allow efficient retrieval within large-scale datasets. Results on five challenging datasets reveal that OOCF achieves better precision and recall than the full feature set for objects with varying backgrounds.
        Highlights:
          • Vehicle images are represented with appropriate convolutional features.
          • The method reduces the number of feature maps without performance degradation.
          • The selected features yield better retrieval performance than the full feature set.
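As a rough illustration of the final hashing step, the sketch below uses random-hyperplane locality-sensitive hashing to map real-valued descriptors into Hamming space. The descriptor dimension, bit count, and random data are made-up stand-ins, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(descriptor, hyperplanes):
    """Binary hash: one bit per hyperplane (1 if the projection is positive)."""
    return (descriptor @ hyperplanes.T > 0).astype(np.uint8)

def hamming_distance(a, b):
    """Number of bit positions where the two hash codes differ."""
    return int(np.count_nonzero(a != b))

dim, n_bits = 128, 32
planes = rng.standard_normal((n_bits, dim))   # random hyperplane normals
x = rng.standard_normal(dim)                  # a descriptor
x_near = x + 0.01 * rng.standard_normal(dim)  # slightly perturbed copy
x_far = rng.standard_normal(dim)              # unrelated descriptor

d_near = hamming_distance(lsh_hash(x, planes), lsh_hash(x_near, planes))
d_far = hamming_distance(lsh_hash(x, planes), lsh_hash(x_far, planes))
print(d_near, d_far)  # nearby descriptors collide on far more bits
```

This is the classic sign-of-random-projection LSH family; similar descriptors receive hash codes with small Hamming distance, enabling fast lookup in large repositories.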

      • KCI-indexed (candidate)

        Research Trends in Knowledge Retrieval Using Neural Networks

        박혜진, 한요섭 전북대학교 문화융복합아카이빙연구소 2022 디지털문화아카이브지 Vol.5 No.1

        Knowledge retrieval is an important component of various applications, including machine learning. Recently, there has been considerable research on neural networks and knowledge graphs for better semantic search performance. We review recent knowledge retrieval methods using term-matching-based, neural-network-based, and neural-symbolic knowledge approaches, and briefly discuss possible future research directions.

      • KCI-indexed

        A Study on Weeds Retrieval based on Deep Neural Network Classification Model

        Vo Hoang Trong, Gwang-Hyun Yu, Dang Thanh Vu, Ju-Hwan Lee, Nguyen Huy Toan, Jin-Young Kim 한국정보기술학회 2020 한국정보기술학회논문지 Vol.18 No.8

        In this paper, we study the ability of content-based image retrieval using descriptors extracted from a deep neural network (DNN) trained for classification. We fine-tuned the VGG model for the weed classification task. The feature vector, which serves as the image descriptor, is obtained from a global average pooling (GAP) layer and two fully connected (FC) layers of the VGG model. We apply principal component analysis (PCA) and an autoencoder network to reduce descriptors to 32, 64, 128, and 256 dimensions. We evaluate weed species retrieval on the Chonnam National University (CNU) weeds dataset. The experiments show that features from a DNN trained for weed classification perform well for image retrieval. Without dimensionality reduction, we obtain a mean average precision (mAP) of 0.97693. Using the autoencoder to reduce descriptors to 256 dimensions, we achieve an mAP of 0.97719.
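The PCA branch of the dimensionality reduction described in this abstract can be sketched as follows, assuming plain SVD-based PCA and cosine-similarity retrieval in the reduced space. The autoencoder variant is not shown, and the random vectors merely stand in for the VGG descriptors.

```python
import numpy as np

def pca_reduce(descriptors, n_components):
    """Center descriptors and project onto the top principal components."""
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]               # (n_components, D)
    return centered @ components.T, mean, components

def cosine_retrieve(query, database, top_k=5):
    """Indices of the top_k database rows most cosine-similar to the query."""
    qn = query / np.linalg.norm(query)
    dn = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = dn @ qn
    return np.argsort(-sims)[:top_k]

rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 512))          # stand-ins for GAP/FC features
reduced, mean, comps = pca_reduce(feats, 32)     # reduce 512 -> 32 dimensions
print(reduced.shape)                             # (100, 32)
idx = cosine_retrieve(reduced[0], reduced, top_k=3)
print(idx[0])                                    # a query retrieves itself first
```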

      • SCIE, SCOPUS

        Cortical network dynamics during source memory retrieval: Current density imaging with individual MRI

        Kim, Young Youn; Roh, Ah Young; Namgoong, Yoon; Jo, Hang Joon; Lee, Jong-Min; Kwon, Jun Soo Wiley Subscription Services, Inc., A Wiley Company 2009 Human Brain Mapping Vol.30 No.1

        We investigated the neural correlates of source memory retrieval using low-resolution electromagnetic tomography (LORETA) with 64-channel EEG and individual MRI as a realistic head model. Event-related potentials (ERPs) were recorded while 13 healthy subjects performed a source memory task for the voice of the speaker of spoken words. The source correct condition for old words elicited more positive-going potentials than the correct rejection condition for new words at 400–700 ms post-stimulus, and the old/new effects also appeared in the right anterior region between 1,000 and 1,200 ms. We conducted source reconstruction at mean latencies of 311, 604, 793, and 1,100 ms and used statistical parametric mapping for the statistical analysis. The results of the source analysis suggest that activation of the right inferior parietal region may reflect retrieval of source information. The source elicited by the difference ERPs between the source correct and source incorrect conditions exhibited dynamic changes of current density activation across the cortex over time during source memory retrieval. These results indicate that multiple neural systems may underlie the ability to recollect context. Hum Brain Mapp 2009. © 2007 Wiley-Liss, Inc.

      • KCI-indexed

        Spatially Weighted Convolutional Feature Aggregation for Image Retrieval

        Enkhbayar Erdenee, 강상길 한국엔터프라이즈아키텍처학회 2017 정보기술아키텍처연구 Vol.14 No.3

        In this paper, we introduce a simple but efficient method to construct a powerful image representation via spatially weighted deep convolutional feature aggregation for image retrieval. First, convolutional features are extracted from the convolutional layer of a CNN, and the proposed spatial weights are applied to them. Sum pooling is then performed to aggregate the spatially weighted convolutional features into a global image representation. We carry out extensive experiments on the Oxford and Paris Buildings datasets, and the results show that the proposed method achieves competitive performance compared to current state-of-the-art methods.
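A minimal sketch of this aggregation idea, under the assumption that the spatial weight at each location is its channel-summed activation (a common choice; the paper's exact weighting scheme may differ):

```python
import numpy as np

def spatially_weighted_pool(feature_maps):
    """feature_maps: (C, H, W) conv activations -> (C,) global descriptor.

    Each spatial position is weighted by its (normalized) channel-summed
    activation, then the weighted maps are sum-pooled over space.
    """
    weights = feature_maps.sum(axis=0)            # (H, W) spatial weight map
    weights = weights / (weights.sum() + 1e-8)    # normalize to sum to 1
    return (feature_maps * weights).sum(axis=(1, 2))

rng = np.random.default_rng(2)
fmap = np.abs(rng.standard_normal((256, 7, 7)))   # stand-in for a conv layer
desc = spatially_weighted_pool(fmap)
print(desc.shape)  # (256,)
```

Strongly activated regions (typically the salient building or object) thus dominate the pooled descriptor, which is the intuition behind spatial weighting.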

      • KCI-indexed

        Development of an Intelligent Similar-Trademark Search Model Using Convolutional Neural Networks

        윤재웅, 이석준, 송칠용, 김연식, 정미영, 정상일 대한경영정보학회 2019 경영과 정보연구 Vol.38 No.3

        Recently, many companies have improved their management performance by building strong brand value protected by trademark rights. However, as the online commerce market grows, trademark infringement is increasing. According to various studies and reports, cases of foreign and domestic companies infringing trademark rights have increased. Because the manpower and cost required for trademark protection are substantial, small and medium-sized enterprises (SMEs) often cannot conduct the preliminary investigations needed to protect their trademark rights. Moreover, because no trademark image search service exists, many domestic companies must manually investigate huge numbers of trademarks when conducting preliminary investigations to protect their rights. Therefore, we develop an intelligent similar-trademark search model to reduce the manpower and cost of preliminary investigation. To measure the performance of the model developed in this study, test data selected by intellectual property experts were used, and ResNet V1 101 showed the highest performance. The significance of this study is as follows. The experimental results empirically demonstrate that image classification algorithms perform well not only in object recognition but also in image retrieval. Since the model developed in this study was trained on actual trademark image data, it is expected to be applicable in real industrial environments.

      • KCI-indexed

        Learning-Based Multiple Pooling Fusion in Multi-View Convolutional Neural Network for 3D Model Classification and Retrieval

        Hui Zeng, Qi Wang, Chen Li, Wei Song 한국정보처리학회 2019 Journal of Information Processing Systems Vol.15 No.5

        We design an ingenious view-pooling method named learning-based multiple pooling fusion (LMPF), and apply it to multi-view convolutional neural network (MVCNN) for 3D model classification or retrieval. By this means, multi-view feature maps projected from a 3D model can be compiled as a simple and effective feature descriptor. The LMPF method fuses the max pooling method and the mean pooling method by learning a set of optimal weights. Compared with the hand-crafted approaches such as max pooling and mean pooling, the LMPF method can decrease the information loss effectively because of its "learning" ability. Experiments on ModelNet40 dataset and McGill dataset are presented and the results verify that LMPF can outperform those previous methods to a great extent.
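The fusion idea can be sketched as a convex combination of max- and mean-pooled view features. In the paper the weights are learned; the scalar `w` below is fixed purely for illustration.

```python
import numpy as np

def fused_view_pool(view_features, w=0.5):
    """view_features: (V, D) array, one D-dim feature per rendered view.

    Returns a (D,) descriptor mixing max pooling and mean pooling over
    the V views with weight w on the max-pooled part.
    """
    return w * view_features.max(axis=0) + (1.0 - w) * view_features.mean(axis=0)

# Two views, three feature dimensions:
# max = [3, 1, 2], mean = [2, 0.5, 1] -> fused = [2.5, 0.75, 1.5]
views = np.array([[1.0, 0.0, 2.0],
                  [3.0, 1.0, 0.0]])
print(fused_view_pool(views, w=0.5))
```

Max pooling keeps the strongest response across views while mean pooling keeps the aggregate; learning the mixing weights lets the model trade off the two per feature dimension.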

      • SCI, SCIE, SCOPUS

        Automatic textile image annotation by predicting emotional concepts from visual features

        Shin, Y.; Kim, Y.; Kim, E.Y. Butterworths; Elsevier Science Ltd 2010 Image and Vision Computing Vol.28 No.3

        This paper presents an emotion prediction system that can automatically predict certain human emotional concepts from a given textile. The main application motivating this study is textile image annotation, which has recently expanded rapidly with the growth of the Web. In the proposed method, color and pattern are used as cues to predict the emotional semantics associated with an image; these features are extracted using color quantization and a multi-level wavelet transform, respectively. The extracted features are then applied to three representative classifiers widely used in data mining: K-means clustering, naive Bayes, and a multi-layer perceptron (MLP). When the proposed emotion prediction method is evaluated on 3600 textile images, the MLP produces the best performance. The proposed MLP-based method is then compared with methods that use only color or pattern, and it shows the best performance, with an accuracy above 92%. These results confirm that the proposed method can be effectively applied to the commercial textile industry and to image retrieval.
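The color cue described in this abstract can be approximated by a coarse color histogram. The 4-bins-per-channel quantization below is an illustrative assumption, not the paper's exact quantizer, and the wavelet-based pattern features are omitted.

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """image: (H, W, 3) uint8 RGB -> flattened, normalized color histogram.

    Each channel is quantized into bins_per_channel levels, giving
    bins_per_channel**3 joint color bins.
    """
    q = (image.astype(np.int32) * bins_per_channel) // 256   # 0..bins-1 per channel
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()

img = np.zeros((8, 8, 3), dtype=np.uint8)   # all-black textile swatch
feat = color_histogram(img)
print(feat.shape, feat[0])                  # (64,) 1.0 — all mass in the dark bin
```

A feature vector like this, concatenated with pattern features, is what would then be fed to the classifiers named in the abstract.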
