https://www.riss.kr/link?id=A107509665
2017
SCI, SCIE, SCOPUS
Academic journal
pp. 211-222 (12 pages)
<P><B>Multilingual Abstract</B></P>
<P><B>Abstract</B></P> <P>Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage: the lack of transparency, which limits the interpretability of the solution and thus the scope of application in practice. DNNs in particular act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network's classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks, and network architectures. Our method, called deep Taylor decomposition, efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.</P> <P><B>Highlights</B></P> <P> <UL> <LI> A novel method to explain nonlinear classification decisions in terms of input variables is introduced. </LI> <LI> The method is based on Taylor expansions and decomposes the output of a deep neural network in terms of input variables. </LI> <LI> The resulting deep Taylor decomposition can be applied directly to existing neural networks without retraining. </LI> <LI> The method is tested on two large-scale neural networks for image classification: BVLC CaffeNet and GoogleNet. </LI> </UL> </P>
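<P>The core idea of backpropagating explanations from the output to the input layer can be illustrated with a minimal sketch. The snippet below shows one relevance-redistribution step through a single linear layer, in the spirit of the z&#8314;-style Taylor rule described for ReLU networks; it is not the paper's implementation, and the function name, shapes, and stabilizer constant are illustrative assumptions. Output relevance <CODE>R</CODE> is redistributed onto the inputs in proportion to their positive contributions, so the total relevance is conserved from layer to layer:</P>

```python
import numpy as np

def ztaylor_backprop(x, W, R, eps=1e-9):
    """Redistribute output relevance R onto inputs x through weights W.

    Illustrative single-layer sketch: each input receives relevance in
    proportion to its positive pre-activation contribution x_i * w_ij+.
    """
    Wp = np.maximum(0.0, W)       # keep only positive weights (z+ rule)
    z = x @ Wp + eps              # positive pre-activations (stabilized)
    s = R / z                     # relevance per unit of activation
    return x * (s @ Wp.T)         # input relevance; sum is conserved

# Tiny example: 2 inputs, 2 output neurons with given relevance scores.
x = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])
R = np.array([2.0, 1.0])
Rx = ztaylor_backprop(x, W, R)
# Conservation: Rx.sum() is (approximately) equal to R.sum()
```

<P>In a deep network this step is applied layer by layer, starting from the classification score at the output and ending with a relevance map over the input pixels, which is what allows the method to be run on existing networks without retraining.</P>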