RISS Academic Research Information Service


      KCI-indexed

      CNN Deep Learning Acceleration Algorithm for Mobile Systems

      https://www.riss.kr/link?id=A105619758

      Additional Information

      Multilingual Abstract

      A mobile system with limited computing and storage capacity typically offloads deep learning training and inference to a data center. This makes it difficult for a mobile system to provide private artificial intelligence services, and users may be reluctant to transfer personal information to a data center. This paper therefore proposes a deep learning acceleration algorithm that enables a mobile system to perform training and inference of a convolutional neural network on its own. The proposed algorithm efficiently reduces the size of the convolutional neural network by combining a low-rank approximation method, which compacts the network's information into a small number of weights, with a pruning method that removes non-critical weights. Experimental results show that, compared with a conventional pruning algorithm, the proposed algorithm achieves 1.65 times faster inference, requires 1.5 times fewer fine-tuning iterations, and halves the memory required to store the weights.
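
      The two techniques the abstract names, low-rank approximation and pruning, are standard compression methods. The following is a minimal sketch of both applied to a single weight matrix, assuming a truncated-SVD factorization and magnitude-based pruning; the layer size, rank, and sparsity below are arbitrary illustration values, not the paper's settings.

      import numpy as np

      def low_rank_approx(W, rank):
          # Truncated SVD: keep the top-`rank` singular values, packing most
          # of the layer's information into two small factors A and B.
          U, s, Vt = np.linalg.svd(W, full_matrices=False)
          A = U[:, :rank] * s[:rank]   # (m, rank), columns scaled by singular values
          B = Vt[:rank, :]             # (rank, n)
          return A, B

      def magnitude_prune(W, sparsity):
          # Zero out the `sparsity` fraction of weights with the smallest
          # absolute value (the "non-critical" weights).
          threshold = np.quantile(np.abs(W), sparsity)
          mask = np.abs(W) >= threshold
          return W * mask, mask

      # Toy fully connected layer: a 256 x 512 weight matrix.
      rng = np.random.default_rng(0)
      W = rng.standard_normal((256, 512))

      A, B = low_rank_approx(W, rank=32)             # 256*32 + 32*512 vs. 256*512 parameters
      W_c, mask = magnitude_prune(A @ B, sparsity=0.5)

      print("original parameters:  ", W.size)
      print("low-rank parameters:  ", A.size + B.size)
      print("nonzero after pruning:", int(mask.sum()))

      In a full pipeline the compressed network would then be fine-tuned with the pruning mask held fixed to recover accuracy; that retraining step is what the abstract's "1.5 times fewer fine-tuning iterations" refers to.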

      References

      1 공기호, "A Study on a Face Detection Algorithm Using Skin Color and S-LGP/U-LGP-based CNN" Korean Institute of Information Technology 15 (15): 107-113, 2017

      2 M. Rhu, "vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design" 1-13, 2016

      3 K. Simonyan, "Very deep convolutional networks for large-scale image recognition" 1-14, 2015

      4 D. Jeong, "Trend on Artificial Intelligence Technology and Its Related Industry" 15 (15): 21-28, 2017

      5 S. Anwar, "Structured pruning of deep convolutional neural networks" 13 (13): 12-, 2017

      6 G. Chen, "Small-footprint keyword spotting using deep neural networks" 4087-4091, 2014

      7 K. Baker, "Singular value decomposition tutorial" 24-, 2005

      8 J. Chung, "Simplifying deep neural networks for neuromorphic architectures" 1-6, 2016

      9 A. Parashar, "SCNN: An accelerator for compressed-sparse convolutional neural networks" 27-40, 2017

      10 J. Yu, "Scalpel: Customizing DNN pruning to the underlying hardware parallelism" 548-560, 2017

      11 J. Wu, "Quantized convolutional neural networks for mobile devices" 4820-4828, 2016

      12 G. Poli, "Processing neocognitron of face recognition on high performance environment based on GPU with CUDA architecture" 81-88, 2008

      13 S. Han, "Learning both weights and connections for efficient neural network" 1135-1143, 2015

      14 A. Krizhevsky, "Imagenet classification with deep convolutional neural networks" 1097-1105, 2012

      15 Y. LeCun, "Gradient-based learning applied to document recognition" 86 (86): 2278-2324, 1998

      16 C. Szegedy, "Going deeper with convolutions" 1-9, 2015

      17 J. Qiu, "Going Deeper with Embedded FPGA platform for Convolutional Neural Network" 26-35, 2016

      18 J. Ye, "Generalized low rank approximations of matrices" 61 (61): 167-191, 2005

      19 J. Park, "Faster CNNs with direct sparse convolutions and guided pruning" 2016

      20 R. Girshick, "Fast R-CNN" 1440-1448, 2015

      21 S. Han, "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding" 1-14, 2015

      22 Y. D. Kim, "Compression of deep convolutional neural networks for fast and low power mobile applications" 1-16, 2015

      23 Y. Jia, "Caffe: Convolutional architecture for fast feature embedding" 675-678, 2014

      24 T. Roughgarden, "CS168: The Modern Algorithmic Toolbox, Lecture #9: The Singular Value Decomposition (SVD) and Low-Rank Matrix Approximations" 2-7, 2015

      25 R. Collobert, "A unified architecture for natural language processing: Deep neural networks with multitask learning" 160-167, 2008

      Usage Statistics

      Detail views: 0
      Full-text downloads: 0
      Loan requests: 0
      Copy requests: 0
      EDDS requests: 0

      Journal History

      Date         Event                  Detail                                                    Status
      2022         Evaluation scheduled   Subject to re-accreditation evaluation (re-accreditation)
      2019-01-01   Evaluation             Registered journal maintained (continued evaluation)      KCI-indexed
      2016-01-01   Evaluation             Registered journal maintained (continued evaluation)      KCI-indexed
      2012-01-01   Evaluation             Registered journal maintained (registration maintained)   KCI-indexed
      2009-01-01   Evaluation             Selected as registered journal (2nd candidate round)      KCI-indexed
      2008-01-01   Evaluation             Passed 1st candidate round (1st candidate round)          KCI candidate
      2006-01-01   Evaluation             Selected as candidate journal (new evaluation)            KCI candidate

      Journal Citation Metrics

      Base year: 2016
      WOS-KCI integrated IF (2-yr): 0.45   KCIF (2-yr): 0.45   KCIF (3-yr): 0.39
      KCIF (4-yr): 0.38   KCIF (5-yr): 0.35   Centrality index (3-yr): 0.566   Immediacy index: 0.16
