RISS Academic Research Information Service (학술연구정보서비스)

      • SCIE SCOPUS KCI-indexed

        Selecting the Optimal Hidden Layer of Extreme Learning Machine Using Multiple Kernel Learning

        Wentao Zhao, Pan Li, Qiang Liu, Dan Liu, Xinwang Liu  Korean Society for Internet Information 2018 KSII Transactions on Internet and Information Systems Vol.12 No.12

        Extreme learning machine (ELM) is emerging as a powerful machine learning method in a variety of application scenarios due to its promising advantages of high accuracy, fast learning speed and ease of implementation. However, how to select the optimal hidden layer of ELM is still an open question in the ELM community. Basically, the number of hidden layer nodes is a sensitive hyperparameter that significantly affects the performance of ELM. To address this challenging problem, we propose to adopt multiple kernel learning (MKL) to design a multi-hidden-layer-kernel ELM (MHLK-ELM). Specifically, we first integrate kernel functions with the random feature mapping of ELM to design a hidden-layer-kernel ELM (HLK-ELM), which serves as the base of MHLK-ELM. Then, we utilize the MKL method to propose two versions of MHLK-ELM, called sparse and non-sparse MHLK-ELMs. Both types of MHLK-ELM can effectively find the optimal linear combination of multiple HLK-ELMs for different classification and regression problems. Experimental results on seven data sets, three for classification and four for regression, demonstrate that the proposed MHLK-ELM achieves superior performance compared with conventional ELM and the basic HLK-ELM.
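        The abstract above describes combining ELM's random hidden-layer feature mapping with multiple kernels. A minimal sketch of that idea follows (not the paper's exact MHLK-ELM: the MKL step that learns the combination weights is replaced by a uniform average, and all sizes and activations are illustrative assumptions).

```python
# Sketch of an ELM with several hidden-layer feature maps; the uniform
# averaging below stands in for the MKL combination step (assumption).
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, n_hidden, activation, rng):
    """Random ELM hidden layer: fixed random weights/biases + nonlinearity."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return activation(X @ W + b)

def fit_output_weights(H, y, reg=1e-2):
    """Closed-form ELM output weights: beta = (H^T H + reg*I)^{-1} H^T y."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ y)

# Toy regression data.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Different activations play the role of different base "hidden-layer kernels".
activations = [np.tanh, lambda z: 1.0 / (1.0 + np.exp(-z)), np.cos]
preds = []
for act in activations:
    H = elm_features(X, n_hidden=100, activation=act, rng=rng)
    beta = fit_output_weights(H, y)
    preds.append(H @ beta)

print("training MSE of the uniform combination:",
      np.mean((np.mean(preds, axis=0) - y) ** 2))
```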

      • KCI-indexed

        Deep LS-SVM for regression

        황창하, 심주용  Korean Data and Information Science Society 2016 Journal of the Korean Data and Information Science Society Vol.27 No.3

        In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and the perturbed responses. For the final output, the main LS-SVM is trained with the outputs from the LS-SVMs of the hidden layer as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are trained to minimize the penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not make use of any combination weights, but trains all LS-SVMs in the architecture. Experimental results on real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.
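        The deep LS-SVM stacks ordinary LS-SVM regressors, so the building block is the standard LS-SVM dual solution. Below is a minimal sketch of a single LS-SVM regressor under assumed kernel and regularization settings; the stacking and response perturbation described in the abstract are not implemented here.

```python
# Sketch of one LS-SVM regressor (the building block); gamma and the RBF
# kernel width are illustrative assumptions, not the paper's settings.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM KKT system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy 1-D regression example.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
b, alpha = lssvm_fit(X, y)
print("train MSE:", np.mean((lssvm_predict(X, b, alpha, X) - y) ** 2))
```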

      • KCI-indexed (Excellent)

        Deep LS-SVM for regression

        Hwang, Changha; Shim, Jooyong  The Korean Data and Information Science Society 2016 Journal of the Korean Data and Information Science Society Vol.27 No.3

        In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and the perturbed responses. For the final output, the main LS-SVM is trained with the outputs from the LS-SVMs of the hidden layer as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are trained to minimize the penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not make use of any combination weights, but trains all LS-SVMs in the architecture. Experimental results on real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.

      • KCI-indexed (Excellent)

        Deep LS-SVM for regression

        Changha Hwang, Jooyong Shim  Korean Data and Information Science Society 2016 Journal of the Korean Data and Information Science Society Vol.27 No.3

        In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and the perturbed responses. For the final output, the main LS-SVM is trained with the outputs from the LS-SVMs of the hidden layer as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are trained to minimize the penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not make use of any combination weights, but trains all LS-SVMs in the architecture. Experimental results on real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.

      • KCI-indexed

        Prediction of Static and Dynamic Behavior of Truss Structures Using Deep Learning (딥러닝을 이용한 트러스 구조물의 정적 및 동적 거동 예측)

        심은아, 이승혜, 이재홍  Korean Association for Spatial Structures 2018 Journal of the Korean Association for Spatial Structures Vol.18 No.4

        In this study, an algorithm applying deep learning to truss structures was proposed. Deep learning is a method of raising the accuracy of machine learning by creating neural networks in a computer. Neural networks consist of input layers, hidden layers, and output layers. Numerous previous studies introduced neural networks under limited examples and conditions, whereas this study focused on two- and three-dimensional truss structures to prove the effectiveness of the algorithm. The training phase was divided into training models based on the dataset size and the number of epochs. In each case, a specific data value was selected and the error rate was obtained by comparing the actual value with the predicted value; the error rate decreased as the dataset size and the number of hidden layers increased. Consequently, it was shown that, when the deep learning technique is applied to structural analysis, results can be predicted quickly and accurately without using a numerical analysis program.
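        The study above uses a plain feed-forward network with several hidden layers to map truss parameters to structural responses. A minimal sketch of that setup with synthetic stand-in data follows; the features, target, and layer sizes are illustrative assumptions, not the paper's models.

```python
# Sketch: an MLP mapping assumed truss design parameters to a response value.
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
import numpy as np

rng = np.random.default_rng(0)
# Pretend features (e.g. member areas / load magnitudes) and a stand-in
# response that would normally come from finite-element analysis.
X = rng.uniform(0.5, 2.0, size=(2000, 6))
y = (1.0 / X).sum(axis=1) + 0.01 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64, 64),  # more hidden layers/data
                     max_iter=2000, random_state=0)     # tend to lower the error
model.fit(X_tr, y_tr)
print("relative test error:",
      np.mean(np.abs(model.predict(X_te) - y_te) / np.abs(y_te)))
```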

      • KCI-indexed

        Evaluation of Artificial Intelligence Accuracy with Increasing CNN Hidden Layers: Cerebral Hemorrhage CT Data (CNN 은닉층 증가에 따른 인공지능 정확도 평가: 뇌출혈 CT 데이터)

        김한준, 강민지, 김은지, 나용현, 박재희, 백수은, 심수만, 홍주완  Korean Society of Radiology 2022 Journal of the Korean Society of Radiology Vol.16 No.1

        Deep learning is a collection of algorithms that enable learning by summarizing the key contents of large amounts of data; it is being developed to diagnose lesions in the medical imaging field. To evaluate the accuracy of cerebral hemorrhage diagnosis, we used a convolutional neural network (CNN) to derive the diagnostic accuracy for cerebral parenchyma computed tomography (CT) images and for cerebral parenchyma CT images of areas where a cerebral hemorrhage is suspected to have occurred. We compared the accuracy of CNNs with different numbers of hidden layers and found that CNNs with more hidden layers achieved higher accuracy. The analysis results on the presence of cerebral hemorrhage in the CT images used in this study are expected to serve as foundational data for research on applying artificial intelligence to the medical imaging field.
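        The experiment above compares CNNs that differ in the number of hidden (convolutional) layers. A minimal sketch of such a comparison for a binary hemorrhage/normal CT classifier follows; the input size, layer widths, and data pipeline are assumptions, not the paper's configuration.

```python
# Sketch: CNNs differing only in the number of convolutional blocks,
# for a binary hemorrhage/normal CT classification (sizes are assumptions).
import tensorflow as tf

def build_cnn(n_conv_blocks, input_shape=(128, 128, 1)):
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    for i in range(n_conv_blocks):
        model.add(tf.keras.layers.Conv2D(16 * (i + 1), 3,
                                         padding="same", activation="relu"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(64, activation="relu"))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))  # hemorrhage vs. normal
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical comparison; train_ds / val_ds would come from the CT pipeline.
# for depth in (2, 3, 4):
#     build_cnn(depth).fit(train_ds, validation_data=val_ds, epochs=10)
```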

      • An Adaptive Multi-Layer Block Data-Hiding Algorithm that uses Edge Areas of Gray-Scale Images

        Tuan Duc Nguyen, Somjit Arch-int, Ngamnij Arch-int  Security Engineering Research Support Center 2015 International Journal of Security and Its Applications Vol.9 No.6

        Embedding data into smooth regions produces stego-images with poor security and visual quality. Edge-adaptive steganography, in which flat regions are not employed to carry the message at low embedding rates, has been proposed; however, at high embedding rates smooth regions are still contaminated to hide the secret message. In this paper, we present an adaptive multi-layer block data-hiding (MBDH) algorithm, in which the embedding regions are adaptively selected according to the number of secret message bits and the texture characteristics of the cover-image. By employing the MBDH algorithm, more secret message bits are embedded into sharp regions, so the smooth regions are not used even at high embedding rates. Furthermore, most edge-adaptive steganography algorithms have limited capacity when smooth regions are excluded from data hiding. The proposed scheme solves this issue by embedding more secret bits into the selected regions while the perceptual quality of the stego-images is still maintained. The experimental results were evaluated on 10,000 natural gray-scale images. Visual attack, targeted steganalysis, and universal steganalysis were employed to examine the performance of the proposed scheme. The results show that the new scheme significantly outperforms previous edge-based approaches and least significant bit (LSB) based methods in terms of security and visual quality.
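        A minimal sketch of the general edge-adaptive LSB idea the abstract builds on (not the MBDH algorithm itself): rank pixels by Sobel edge magnitude and write message bits into the LSBs of the sharpest pixels first, so smooth regions are touched only when the payload requires it. A practical scheme must also make the pixel ordering recoverable at the extractor; that part is omitted here.

```python
# Sketch of edge-guided LSB embedding; thresholds and data are assumptions.
import numpy as np
from scipy import ndimage

def embed_bits(cover, bits):
    """cover: 2-D uint8 array; bits: iterable of 0/1 message bits."""
    g = cover.astype(float)
    grad = np.hypot(ndimage.sobel(g, 0), ndimage.sobel(g, 1))
    order = np.argsort(grad.ravel())[::-1]           # sharpest pixels first
    stego = cover.ravel().copy()
    bits = np.asarray(list(bits), dtype=np.uint8)
    idx = order[:len(bits)]
    stego[idx] = (stego[idx] & 0xFE) | bits          # overwrite the LSB
    return stego.reshape(cover.shape)

# Example: hide one byte in a random stand-in "image".
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_bits(cover, [1, 0, 1, 1, 0, 0, 1, 0])
print("pixels changed:", int((stego != cover).sum()))
```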

      • KCI-indexed

        Neural Networks Based Modeling with Adaptive Selection of Hidden Layer's Node for Path Loss Model

        강창호, 조성윤  Institute of Positioning, Navigation, and Timing 2019 Journal of Positioning, Navigation, and Timing Vol.8 No.4

        The auto-encoder network, which is a good candidate for modeling signal-strength attenuation, is designed to denoise and compensate for distortion in the received data. It provides a non-linear mapping function by iteratively learning the encoder and the decoder. The encoder is the non-linear mapping function, and the decoder demands accurate data reconstruction from the representation generated by the encoder. In addition, an adaptive network width, which supports the automatic generation of new hidden nodes and the pruning of inconsequential nodes, is implemented in the proposed algorithm to increase its efficiency. Simulation results show that the proposed method can improve the neural network training surface to achieve the highest possible accuracy of signal modeling compared with the conventional modeling method.
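        The abstract above describes an auto-encoder that denoises received-signal-strength data. A minimal sketch of a denoising auto-encoder on synthetic log-distance path-loss curves follows; the layer widths, noise level, and data are assumptions, and the adaptive growth/pruning of hidden nodes is not implemented.

```python
# Sketch of a denoising auto-encoder for path-loss / RSS samples;
# data, noise level, and layer widths are illustrative assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
d = rng.uniform(1.0, 100.0, size=(1000, 32))            # distances
clean = -30.0 - 20.0 * np.log10(d)                      # log-distance path loss
noisy = clean + rng.normal(scale=3.0, size=clean.shape)

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),        # compressed representation
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(32),                          # reconstructed signal
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(noisy, clean, epochs=20, batch_size=32, verbose=0)
print("reconstruction MSE:", float(autoencoder.evaluate(noisy, clean, verbose=0)))
```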

      • KCI-indexed (Excellent)

        Visualization of Hidden Nodes in a Time-Series Deep Learning Model (시계열 심층학습 모델의 은닉 노드에 대한 시각화)

        조소희, 최재식  Korean Institute of Information Scientists and Engineers 2020 Journal of KIISE Vol.47 No.5

        Globally, the use of artificial intelligence (AI) applications has increased in a variety of industries, from manufacturing to health care to the financial sector. As a result, there is growing interest in explainable artificial intelligence (XAI), which can provide explanations of what happens inside AI. Unlike previous work using image data, we visualize hidden nodes for a time series. To interpret which patterns of a node lead to more effective model decisions, we propose a method of arranging the nodes in a hidden layer. The hidden nodes, sorted by weight matrix values, show which patterns significantly affected the classification. Visualizing hidden nodes explains a process inside the deep learning model and also enables users to improve their understanding of time series data.
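        The visualization method above sorts hidden nodes by their weight-matrix values and displays their activations over time. A minimal sketch of that idea with placeholder activations and weights (assumptions standing in for a trained model):

```python
# Sketch: rank hidden units by the norm of their outgoing weights and plot
# their activations over a time series; the arrays below are placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
T, n_hidden, n_classes = 100, 16, 3
hidden_activations = rng.normal(size=(T, n_hidden))   # activations over time
W_out = rng.normal(size=(n_hidden, n_classes))        # hidden -> class weights

# Rank hidden nodes by how strongly they feed the output layer.
importance = np.linalg.norm(W_out, axis=1)
order = np.argsort(importance)[::-1]

plt.imshow(hidden_activations[:, order].T, aspect="auto", cmap="viridis")
plt.xlabel("time step")
plt.ylabel("hidden node (sorted by outgoing weight norm)")
plt.title("Hidden-node activations, most influential nodes on top")
plt.colorbar(label="activation")
plt.show()
```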

      • KCI-indexed (Excellent)

        Visualization of Convolutional Neural Networks for Time Series Input Data

        Sohee Cho (조소희), Jaesik Choi (최재식)  Korean Institute of Information Scientists and Engineers 2020 Journal of KIISE Vol.47 No.5

        Globally, the use of artificial intelligence (AI) applications has increased in a variety of industries, from manufacturing to health care to the financial sector. As a result, there is growing interest in explainable artificial intelligence (XAI), which can provide explanations of what happens inside AI. Unlike previous work using image data, we visualize hidden nodes for a time series. To interpret which patterns of a node lead to more effective model decisions, we propose a method of arranging the nodes in a hidden layer. The hidden nodes, sorted by weight matrix values, show which patterns significantly affected the classification. Visualizing hidden nodes explains a process inside the deep learning model and also enables users to improve their understanding of time series data.
