RISS (Academic Research Information Service)


KCI-indexed

Visual Explanation of Black-box Models Using Layer-wise Class Activation Maps from Approximating Neural Networks
(신경망 근사에 의한 다중 레이어의 클래스 활성화 맵을 이용한 블랙박스 모델의 시각적 설명 기법)


      https://www.riss.kr/link?id=A107846752


Additional Information

Multilingual Abstract

In this paper, we propose a novel visualization technique to explain the predictions of deep neural networks. First, we use knowledge distillation (KD) to approximate the interior of a black-box model for which only the inputs and outputs are known: KD transfers the information of the black-box model to a white-box model, which learns the black-box model's representation. Second, the white-box model generates an attention map for each of its layers using Grad-CAM. Then we combine the attention maps of the different layers by pixel-wise summation to generate a final saliency map that contains information from all layers of the model. Experiments show that the proposed technique identifies important layers and explains which parts of the input are important. Saliency maps generated by the proposed technique performed better than those of Grad-CAM in the deletion game.
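The layer-wise fusion step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; it assumes each layer's Grad-CAM attention map has already been computed and resized to a common spatial resolution:

```python
import numpy as np

def fuse_layer_maps(attention_maps):
    """Fuse per-layer attention maps into one saliency map by
    pixel-wise summation (all maps assumed same spatial size)."""
    fused = np.zeros_like(attention_maps[0], dtype=np.float64)
    for amap in attention_maps:
        amap = amap.astype(np.float64)
        # normalize each layer's map to [0, 1] so no single layer dominates
        rng = amap.max() - amap.min()
        if rng > 0:
            amap = (amap - amap.min()) / rng
        fused += amap
    # rescale the summed map back to [0, 1]
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / rng if rng > 0 else fused

# toy example: two 2x2 "layer maps" highlighting overlapping regions
m1 = np.array([[0.0, 1.0], [0.0, 0.0]])
m2 = np.array([[0.0, 2.0], [2.0, 0.0]])
saliency = fuse_layer_maps([m1, m2])
# the pixel both layers agree on ends up with the highest saliency
```

Per-layer normalization before the summation is an assumption here; the paper's exact weighting of layers may differ.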
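The deletion game used for evaluation can likewise be sketched: pixels are removed in order of decreasing saliency, and the model's score is recorded after each removal; a saliency map is better when the score drops faster (smaller area under the curve). In this toy sketch, `score_fn` is a stand-in for a real classifier's class probability:

```python
import numpy as np

def deletion_game(score_fn, image, saliency, step=1):
    """Delete pixels most-salient-first, recording the model's
    score after each deletion batch."""
    img = image.astype(np.float64).copy()
    # flatten and sort pixel indices by saliency, highest first
    order = np.argsort(saliency, axis=None)[::-1]
    scores = [score_fn(img)]
    for start in range(0, order.size, step):
        idx = order[start:start + step]
        img.flat[idx] = 0.0  # "delete" a pixel by zeroing it
        scores.append(score_fn(img))
    return np.array(scores)

# toy "model": score is just the mean intensity of the image
image = np.array([[1.0, 2.0], [3.0, 4.0]])
saliency = np.array([[0.1, 0.9], [0.2, 0.8]])
curve = deletion_game(lambda x: x.mean(), image, saliency)
# curve holds the score after 0, 1, 2, ... deleted pixels
```

Zero-substitution for deleted pixels is one common choice; blurred or mean-value substitution is also used in the literature.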

References

1 M. T. Ribeiro, "Why Should I Trust You?: Explaining the Predictions of Any Classifier" 1135-1144, 2016

2 C. Wah, "The Caltech-UCSD Birds-200-2011 Dataset" 2011

3 J. Hu, "Squeeze-and-Excitation Networks" 7132-7141, 2018

4 L. Edwards, "Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking For" 16 (16): 2017

5 V. Petsiuk, "RISE: Randomized Input Sampling for Explanation of Black-box Models"

6 F. Wang, "Residual Attention Network for Image Classification" 3156-3164, 2017

7 X. Wang, "Non-local Neural Networks" 7794-7803, 2018

8 B. Zhou, "Learning Deep Features for Discriminative Localization" 2921-2929, 2016

9 C. Fong, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" 3429-3437, 2017

10 J. Deng, "ImageNet: A Large-Scale Hierarchical Image Database" 248-255, 2009


11 R. R. Selvaraju, "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization" 618-626, 2017

12 S. Maji, "Fine-Grained Visual Classification of Aircraft"

13 G. Hinton, "Distilling the Knowledge in a Neural Network"

14 I. Sample, "Computer Says No: Why Making AIs Fair, Accountable and Transparent Is Crucial"

15 S. H. Woo, "CBAM: Convolutional Block Attention Module" 3-19, 2018

16 A. Vaswani, "Attention Is All You Need" 5998-6008, 2017

17 A. Holzinger, "A Glass-box Interactive Machine Learning Approach for Solving NP-hard Problems with the Human-in-the-Loop"

18 J. Krause, "3D Object Representations for Fine-Grained Categorization" 554-561, 2013




Journal History

Date       | Event                | Detail                                            | Index status
2028       | Evaluation scheduled | Eligible for re-accreditation evaluation          |
2022-01-01 | Evaluation           | Indexed journal maintained (re-accreditation)     | KCI-indexed
2019-01-01 | Evaluation           | Indexed journal maintained (continued evaluation) | KCI-indexed
2016-01-01 | Evaluation           | Indexed journal maintained (continued evaluation) | KCI-indexed
2014-07-03 | Title change         | Foreign-language title: Journal of IEMEK -> IEMEK Journal of Embedded Systems and Applications | KCI-indexed
2012-01-01 | Evaluation           | Selected as indexed journal (candidate, stage 2)  | KCI-indexed
2011-01-01 | Evaluation           | Candidate stage 1 pass                            | KCI candidate
2009-01-01 | Evaluation           | Selected as candidate journal (new evaluation)    | KCI candidate

Journal Citation Metrics (base year: 2016)

WOS-KCI combined IF (2-year): 0.27
KCI IF (2-year): 0.27
KCI IF (3-year): 0.22
KCI IF (4-year): 0.22
KCI IF (5-year): 0.18
Centrality index (3-year): 0.415
Immediacy index: 0.07
