RISS (Academic Research Information Service)

      • KCI-indexed

        Analysis of the Usefulness of Deep Learning Neural Networks for Predicting Credit Card Default Risk (딥러닝 신경망을 이용한 신용카드 부도위험 예측의 효용성 분석)

        윤종문 한국금융학회 2019 금융연구 Vol.33 No.1

        This study discusses the usefulness of deep learning neural networks and the possibility of applying deep learning neural network analysis to assessing credit information, using credit card default data. Outside of stock price prediction models, deep learning neural network analysis in the financial sector has seen only limited research. It has mainly been used to upgrade credit rating models (Kvamme et al., 2016, 2018; Tran, 2016; Luo, 2017) and delinquency rate models (Sirignano et al., 2018); in the credit card market, it has focused on card issuance and fraud detection models (Ramanathan, 2014; Niimi, 2015). As mentioned earlier, there has not been much deep learning neural network analysis using financial market data. This is because research on deep learning neural networks is carried out mainly in computer science fields such as image recognition, speech recognition, and natural language processing. In addition, researchers in the financial sector have difficulty learning deep learning algorithms and setting up a computing environment, and it is difficult to apply these algorithms to financial data, which have a lower dimensionality than images. Financial companies have recently become interested in machine learning and are increasing recruitment, but they are still at the stage of verifying the potential of deep learning neural networks. Therefore, this study examines whether a deep learning neural network algorithm can improve the accuracy of credit card default risk prediction. To do this, we use existing machine learning algorithms (Logistic, SVM, Random Forest, Lasso, etc.) as benchmarks for the performance of the deep learning neural network analysis. First, the deep learning neural network is constructed with two hidden layers and five neurons, and the prediction accuracy is derived for each activation function and weight initialization method. The activation functions are Sigmoid, ReLU, tanh, and Maxout, and the initialization methods are random values, Xavier, RBM, and He. On this basis, we compare the accuracy against the existing machine learning algorithms. As a result, the deep learning neural network analysis showed a performance improvement of between 0.6%p and 6.6%p over the existing machine learning algorithms (Logistic, SVM, Random Forest, Lasso, etc.). The activation function and initialization method with the highest prediction accuracy were ReLU (rectified linear units) and Xavier initialization. However, there was no significant improvement in performance when the number of hidden layers and neurons was increased up to 10 and 25, respectively. Likewise, the dropout and CNN (convolutional neural network) models, which show high performance in image recognition, produced no significant difference in prediction accuracy. Nevertheless, the fact that the highest accuracy (0.8161) and AUC (0.7726) were observed with 10 hidden layers and 15 neurons could be interpreted as showing that adding hidden layers can improve estimation accuracy; we cannot, however, say that accuracy increases linearly with the number of hidden layers and neurons. This limitation may be due to the quantitative and qualitative limitations of the credit card data used here. We did not use recurrent neural network (RNN) or long short-term memory (LSTM) models, since the personal credit card default data used in this study are cross-sectional; these methods are designed for time-series data.
Therefore, better results can be expected for the identification problems of today's various financial markets (credit rating, delinquency rate, interest rate calculation) if these deep learning neural network methodologies are applied to big data that includes time-series data. This study can be turned into a question of how deep learning analysis can lower the default risk and delinquency rate by using financial data from a practical point of ...
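
Below is a minimal PyTorch sketch of the kind of network described above: two hidden layers of five ReLU neurons each, Xavier-initialized weights, and a binary default label. The feature count, optimizer, and stand-in data are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the network the abstract describes:
# two hidden layers, five neurons each, ReLU activations, Xavier-initialized
# weights, binary default prediction on tabular features.
import torch
import torch.nn as nn

class DefaultRiskNet(nn.Module):
    def __init__(self, n_features: int, n_hidden: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),            # logit for P(default)
        )
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)   # Xavier (Glorot) initialization
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)

# Illustrative training step on random stand-in data (23 features is arbitrary).
model = DefaultRiskNet(n_features=23)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 23)                    # a batch of tabular credit-card features
y = torch.randint(0, 2, (64, 1)).float()   # 1 = default, 0 = no default
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```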

      • KCI-indexed

        Network Traffic Classification Based on Deep Learning

        (Junwei Li), (Zhisong Pan) 한국인터넷정보학회 2020 KSII Transactions on Internet and Information Systems Vol.14 No.11

        As networks penetrate every aspect of people's lives, the volume and complexity of network traffic are increasing, and traffic classification is becoming more and more important. Classifying traffic effectively is an important prerequisite for network management and planning and for ensuring network security. With the continuous development of deep learning, more and more traffic classification work uses it as the main method, achieving better results than traditional classification methods. In this paper, we provide a comprehensive review of network traffic classification based on deep learning. First, we introduce the research background and progress of network traffic classification. Then, we summarize and compare deep learning-based traffic classification approaches such as stacked autoencoders, one-dimensional convolutional neural networks, two-dimensional convolutional neural networks, three-dimensional convolutional neural networks, long short-term memory networks, and deep belief networks. In addition, we compare traffic classification based on deep learning with other methods, such as those based on port numbers, deep packet inspection, and classical machine learning. Finally, future research directions for deep learning-based network traffic classification are discussed.
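
To make the one-dimensional CNN approach mentioned above concrete, the sketch below classifies a traffic flow from its first few hundred raw bytes. The byte length, channel sizes, and number of traffic classes are illustrative assumptions rather than values taken from the surveyed papers.

```python
# A minimal sketch of a 1D-CNN traffic classifier: classify a flow from its
# first N raw bytes. The byte length (784) and class count (12) are
# illustrative assumptions, not values from the survey.
import torch
import torch.nn as nn

class TrafficCNN1D(nn.Module):
    def __init__(self, n_classes: int = 12, n_bytes: int = 784):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (n_bytes // 4), n_classes)

    def forward(self, x):            # x: (batch, 1, n_bytes), byte values scaled to [0, 1]
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = TrafficCNN1D()(torch.rand(8, 1, 784))   # 8 flows -> (8, 12) class scores
print(logits.shape)
```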

      • KCI-indexed

        Deep Learning Models and Applications (딥러닝의 모형과 응용사례)

        안성만(Ahn, SungMahn) 한국지능정보시스템학회 2016 지능정보연구 Vol.22 No.2

        A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised learning models such as deep belief networks, because they have produced successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well suited to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually placed immediately after convolutional layers and simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic improvements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, that is, vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem gets even worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs, and LSTMs make it much easier to get good results when training ...
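
The three CNN ideas and the backpropagation-plus-gradient-descent loop described above fit in a few lines of PyTorch. The layer sizes, the 28x28 grayscale input, and the learning rate below are assumptions chosen for a small illustrative example, not anything prescribed by the article.

```python
# A minimal sketch of the three CNN ideas named in the abstract: local receptive
# fields (small convolution kernels), shared weights (one kernel reused across the
# whole image), and pooling (downsampling the feature maps). Sizes are illustrative.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5),   # 5x5 local receptive field, weights shared over all positions
    nn.ReLU(),
    nn.MaxPool2d(2),                   # pooling: keep the strongest response in each 2x2 region
    nn.Flatten(),
    nn.Linear(16 * 12 * 12, 10),       # classifier over 10 illustrative classes
)

x = torch.randn(32, 1, 28, 28)         # a batch of 28x28 grayscale images (stand-in data)
y = torch.randint(0, 10, (32,))
loss = nn.CrossEntropyLoss()(cnn(x), y)

# Backpropagation computes d(loss)/d(weight) for every weight; gradient descent
# then nudges each weight against its gradient.
loss.backward()
with torch.no_grad():
    for w in cnn.parameters():
        w -= 0.01 * w.grad             # one plain gradient-descent update
```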

      • KCI-indexed

        Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network Models (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

        공성현(Gong, Sung-Hyun), 백원경(Baek, Won-Kyung), 정형섭(Jung, Hyung-Sup) 대한원격탐사학회 2022 大韓遠隔探査學會誌 Vol.38 No.6

        Landslides are one of the most prevalent natural disasters, threatening both people and property. Landslides can also cause damage at the national level, so effective prediction and prevention are essential. Research on producing landslide susceptibility maps with high accuracy is steadily being conducted, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides have occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using the deep neural network (DNN) and CNN models. The models and susceptibility maps were verified with average precision (AP) and root mean square error (RMSE); the patch-based CNN model showed a 3.4% performance improvement over the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use and landslide management policies.
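
The contrast between the pixel-based DNN and the patch-based CNN can be sketched as follows. The 15 input factors follow the list in the abstract, but the patch size and layer widths are illustrative assumptions, not the configuration used in the paper.

```python
# A minimal sketch contrasting the two model types the abstract compares: a
# pixel-based DNN that sees one pixel's stack of landslide factors, and a
# patch-based CNN that sees the same factors over a small neighbourhood.
# The patch size (16x16) and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_FACTORS = 15   # slope, curvature, SPI, TWI, TPI, ..., NDVI, NDWI

pixel_dnn = nn.Sequential(            # input: (batch, 15) factor values at one pixel
    nn.Linear(N_FACTORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                 # logit of landslide susceptibility
)

patch_cnn = nn.Sequential(            # input: (batch, 15, 16, 16) factor patch
    nn.Conv2d(N_FACTORS, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 1),         # logit for the centre pixel of the patch
)

print(pixel_dnn(torch.randn(4, N_FACTORS)).shape)          # torch.Size([4, 1])
print(patch_cnn(torch.randn(4, N_FACTORS, 16, 16)).shape)  # torch.Size([4, 1])
```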

      • Application of deep neural network in diagnosis of subcutaneous mass with ultrasound image: a pilot study

        (Hwa Jung Yook), (Woo Hyup Lee), (Joon Ho Son), (Ju Hee Han), (Ji Hyun Lee), (Jun Young Lee), (Young Min Park), (Chul Hwan Bang) 대한피부과학회 2020 대한피부과학회 학술발표대회집 Vol.72 No.1

        Background: Ultrasonographic imaging is an efficient tool for diagnosing subcutaneous masses without an invasive procedure. However, few studies have applied deep neural networks to skin ultrasound images. Objectives: The aim of this study was to evaluate the accuracy of a deep neural network in diagnosing epidermal cysts, lipomas, and other subcutaneous masses. Methods: We created a dataset of 1317 skin ultrasound images from 198 patients diagnosed with epidermal cyst, lipoma, or other subcutaneous masses to train the deep neural network. Performance was evaluated using another set of 95 images (26 ultrasound images of epidermal cysts, 27 of lipomas, and 44 of other masses) from published articles. Results: The overall accuracy of the deep neural network was 86.3%. The accuracies in diagnosing epidermal cyst, lipoma, and the other masses were 95.83%, 66.67%, and 95.35%, respectively. Grad-CAM analysis showed that the deep neural network detected posterior enhancement, hypoechoic features, and well-defined margins in epidermal cysts. Conclusion: We showed that a deep neural network combined with ultrasound might help clinicians diagnose subcutaneous masses.
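
The Grad-CAM analysis mentioned in the results can be reproduced in a generic way: average the gradients of the predicted class score over a convolutional layer's feature maps and use them to weight those maps. In the sketch below, the backbone (an untrained torchvision ResNet-18), the target layer, and the random input tensor are stand-ins for the authors' model and ultrasound data.

```python
# A generic Grad-CAM sketch (not the authors' code): weight a conv layer's
# feature maps by the averaged gradients of the predicted class score to get
# a coarse heat map of the regions the network relied on.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # stand-in backbone, untrained here
model.eval()

stash = {}
layer = model.layer4[-1]                # last conv block is a common Grad-CAM target
layer.register_forward_hook(lambda m, i, o: stash.update(act=o.detach()))
layer.register_full_backward_hook(lambda m, gi, go: stash.update(grad=go[0].detach()))

x = torch.randn(1, 3, 224, 224)         # placeholder for a preprocessed ultrasound image
score = model(x)
score[0, score.argmax()].backward()     # backpropagate from the top predicted class

weights = stash["grad"].mean(dim=(2, 3), keepdim=True)           # global-average-pooled gradients
cam = F.relu((weights * stash["act"]).sum(dim=1, keepdim=True))  # weighted sum of feature maps
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalize to [0, 1] for overlay
```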

      • KCI-indexed

        Deep Neural Network Weight Conversion for Spiking Neural Network Inference (스파이킹 신경망 추론을 위한 심층 신경망 가중치 변환)

        이정수,허준영 (사)한국스마트미디어학회 2022 스마트미디어저널 Vol.11 No.3

        A spiking neural network is a neural network that applies the working principles of real brain neurons. Owing to this biological mechanism, it consumes less power for training and inference than conventional neural networks. Recently, as deep learning models have grown huge and their operating costs have increased exponentially, the spiking neural network has attracted attention as a third-generation neural network succeeding convolutional and recurrent neural networks, and related research is being actively conducted. However, much work remains before spiking neural network models can be applied in industry, and the problem of retraining a model in order to deploy a new one must also be solved. In this paper, we propose a method that minimizes the cost of model retraining by extracting the weights of an existing trained deep learning model and converting them into the weights of a spiking neural network model. In addition, we show that the weight conversion works correctly by comparing inference results obtained with the converted weights against those of the existing model.
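
One way to realize the weight-reuse idea described above is sketched below: the Linear weights of a trained ReLU network are used unchanged, the ReLU activations are replaced by integrate-and-fire dynamics, and the input is rate-coded. The layer sizes, firing threshold, and number of time steps are illustrative assumptions and do not reproduce the paper's conversion procedure.

```python
# A minimal sketch (not the paper's tool): copy the weights of a trained ReLU
# network into a structurally identical network of integrate-and-fire (IF)
# neurons and run rate-coded inference.
import torch
import torch.nn as nn

# A trained network would normally be loaded from a checkpoint; an untrained
# one stands in for it here.
ann = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def snn_inference(ann_model, x, n_steps=100, threshold=1.0):
    """Reuse the ANN's Linear weights unchanged; replace ReLU with IF spiking dynamics."""
    linears = [m for m in ann_model if isinstance(m, nn.Linear)]
    spikes = (torch.rand(n_steps, *x.shape) < x.clamp(0, 1)).float()   # rate-code the input
    membranes = [torch.zeros(x.shape[0], l.out_features) for l in linears]
    counts = torch.zeros(x.shape[0], linears[-1].out_features)
    with torch.no_grad():
        for t in range(n_steps):
            s = spikes[t]
            for i, layer in enumerate(linears):
                membranes[i] += layer(s)                  # integrate weighted input spikes
                s = (membranes[i] >= threshold).float()   # fire where the threshold is crossed
                membranes[i] -= s * threshold             # reset by subtraction
            counts += s
    return counts / n_steps                               # output spike rates as class scores

x = torch.rand(4, 784)                 # e.g. four flattened grayscale images in [0, 1]
print(snn_inference(ann, x).argmax(dim=1))
```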

      • KCI-indexed

        An ONNX-Based Spiking Deep Neural Network Conversion Tool (ONNX기반 스파이킹 심층 신경망 변환 도구)

        박상민,허준영 한국인터넷방송통신학회 2020 한국인터넷방송통신학회 논문지 Vol.20 No.2

        A spiking neural network operates by a different mechanism than a conventional neural network. A conventional neural network passes each neuron's input through an activation function that does not take the biological mechanism of neurons into account and sends the output value on to the next neuron; deep architectures of this kind, such as VGGNet, ResNet, SSD, and YOLO, have produced good results. Spiking neural networks, on the other hand, operate in a way that is closer to the biological mechanism of real neurons than conventional activation functions, but deep architectures built from spiking neurons have not been studied as actively as deep neural networks built from conventional neurons. This paper proposes a method of loading a deep neural network model built from conventional neurons into a conversion tool and converting it into a spiking deep neural network by replacing the conventional neurons with spiking neurons.
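
A conversion tool of this kind would typically start from the ONNX graph itself. The sketch below only loads a model and walks its nodes, flagging the activation ops a converter might swap for spiking neurons; the file path and the set of replaceable ops are assumptions, and the paper's actual replacement logic is not reproduced.

```python
# A minimal sketch (assumptions, not the paper's tool): load an ONNX model and
# walk its graph, flagging the activation nodes a DNN-to-SNN converter would
# replace with spiking neurons. "model.onnx" is a placeholder path.
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)

SPIKE_REPLACEABLE = {"Relu", "Sigmoid", "Tanh"}   # ops a converter might swap for IF neurons

for node in model.graph.node:
    if node.op_type in SPIKE_REPLACEABLE:
        print(f"would replace {node.op_type} node '{node.name}' with a spiking neuron layer")
    elif node.op_type in {"Gemm", "Conv"}:
        print(f"would copy the weights of {node.op_type} node '{node.name}' unchanged")
```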

      • KCI-indexed

        An evaluation methodology for 3D deep neural networks using visualization in 3D data classification

        황현태,이수홍,Hyung Gun Chi,강남규,공현배,Jiaqi Lu,옥형석 대한기계학회 2019 JOURNAL OF MECHANICAL SCIENCE AND TECHNOLOGY Vol.33 No.3

        "Making 3D deep neural networks debuggable". In the study, we develop and propose a 3D deep neural network visualization methodology for performance evaluation of 3D deep neural networks. Our research was conducted using a 3D deep neural network model, which shows the best performance. The visualization method of the research is a method of visualizing part of the 3D object by analyzing the naive Bayesian 3D complement instance generation method and the prediction difference of each feature. The method emphasizes the influence of the network in the process of making decisions. The result of visualization through the algorithm of the study shows a clear difference based on the result class and the instance within the class, and the authors can obtain insight that can evaluate and improve the performance of the DNN (deep neural networks) model by the analyzed results. 3D deep neural networks can be made "indirectly debuggable", and after the completion of the visualization method and the analysis of the result, the method can be used as the evaluation method of "general non-debuggable DNN" and as a debugging method.

      • KCI-indexed

        A Study on Gait Recognition Using a Deep Neural Network Ensemble (심층 신경회로망 앙상블을 이용한 걸음걸이 인식에 대한 연구)

        홍성준,이수형,이희성 대한전기학회 2020 전기학회논문지 Vol.69 No.7

        Recognizing a person from his or her gait has been a recent focus in computer vision because of its unique advantages, such as being non-invasive and human friendly. Gait recognition, however, has the weakness of being less reliable than other biometrics. In this paper, we apply a deep neural network ensemble to the gait recognition problem. A deep neural network ensemble is a learning paradigm in which a collection of deep neural networks is trained for the same task; in general, the ensemble shows better generalization performance than a single deep neural network such as a convolutional or recurrent neural network. To increase the reliability of gait recognition, the gait energy image (GEI) and motion silhouette image (MSI) are extracted as gait features, and an ensemble of convolutional and recurrent neural networks is used as the classifier. Experiments are performed on the NLPR and SOTON databases to show the efficiency of the proposed algorithm. The performance of the proposed method is 4.55%, 4.85%, 2.5%, and 2.43% better than a single CNN on the two databases, respectively. As a result, we can create a recognition system with accuracies of 100%, 100%, and 94% on the NLPR database and 97.35% on the SOTON database.
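
The GEI feature used above has a simple definition: the pixel-wise average of the aligned binary silhouettes over one gait cycle. The sketch below computes it on stand-in data; the frame count and silhouette size are illustrative assumptions.

```python
# A minimal sketch of the gait energy image (GEI): the pixel-wise average of
# the aligned binary silhouettes over one gait cycle. Frame count and
# silhouette size here are illustrative.
import numpy as np

def gait_energy_image(silhouettes: np.ndarray) -> np.ndarray:
    """silhouettes: (n_frames, H, W) binary masks, already centred and size-normalized."""
    return silhouettes.astype(np.float32).mean(axis=0)    # values in [0, 1]

# Stand-in data: 30 random binary silhouettes of size 128x88.
frames = (np.random.rand(30, 128, 88) > 0.5).astype(np.uint8)
gei = gait_energy_image(frames)        # a single image to feed to the CNN branch
print(gei.shape, gei.min(), gei.max())
```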
