RISS Academic Research Information Service

      • KCI-indexed

        Deep Learning Models and Applications

        안성만(Ahn, SungMahn) 한국지능정보시스템학회 2016 지능정보연구 Vol.22 No.2

        A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised learning models such as deep belief networks because of their successful applications in the fields mentioned above.

        Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function.

        Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours.

        Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs.
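        The three convolutional ideas above are small enough to show directly. Below is a minimal numpy sketch, not taken from the paper, of a single convolutional layer followed by max pooling; the 28x28 image and 5x5 kernel sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the three convolutional-network ideas described above:
# local receptive fields, shared weights, and pooling. Sizes are assumed.

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))   # e.g., a grayscale input image
kernel = rng.standard_normal((5, 5))    # ONE shared weight set for every location
bias = 0.1

# Convolution: each hidden neuron sees only a local 5x5 receptive field,
# and every neuron reuses the same kernel and bias (shared weights).
h, w = image.shape[0] - 4, image.shape[1] - 4
feature_map = np.empty((h, w))
for i in range(h):
    for j in range(w):
        patch = image[i:i + 5, j:j + 5]  # local receptive field
        feature_map[i, j] = np.maximum(0.0, (patch * kernel).sum() + bias)  # ReLU

# Max pooling: simplify the convolutional output by keeping only the
# strongest activation in each 2x2 region.
ph, pw = h // 2, w // 2
pooled = feature_map[:ph * 2, :pw * 2].reshape(ph, 2, pw, 2).max(axis=(1, 3))
print(feature_map.shape, pooled.shape)   # (24, 24) -> (12, 12)
```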

      • KCI-indexed

        Graph Convolutional Neural Network Structure Search: Neural Network Structure Search Using Graph Convolutional Neural Networks

        최수연,박종열 국제문화기술진흥원 2023 The Journal of the Convergence on Culture Technology Vol.9 No.1

        This paper proposes the design of a neural network structure search model using graph convolutional neural networks. Because deep learning trains as a black box, it is difficult to verify whether a designed model has a structure with optimized performance. A neural network structure search model is composed of a recurrent neural network that generates a model and a convolutional neural network, which is the generated network. Conventional neural network structure search models use recurrent neural networks, but in this paper we propose GC-NAS, which uses graph convolutional neural networks instead to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth, and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel, based on the depth information. Because the depth information is reflected, the search area is wider, and because the search is conducted in parallel with depth information, the purpose of the search area is clear, so GC-NAS is judged to be superior in theoretical structure to existing RNN-based search models. Through its graph convolutional neural network block and graph generation algorithm, GC-NAS is expected to solve the problems of the high-dimensional time axis and the limited range of spatial search that recurrent neural networks pose in existing neural network structure search models. We also hope that the GC-NAS proposed in this paper will spur active research on applying graph convolutional neural networks to neural network structure search.
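        As a hedged illustration of the building block GC-NAS substitutes for the RNN controller, here is a minimal numpy sketch of one graph-convolution step of the common Kipf & Welling form; the graph size and feature widths are assumptions, and the paper's Layer Extraction Block and Hyper Parameter Prediction Block are not reproduced.

```python
import numpy as np

# One graph-convolution layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W).
# Sizes below are illustrative assumptions, not the paper's settings.

rng = np.random.default_rng(0)
n_nodes, in_dim, out_dim = 6, 8, 4          # e.g., 6 candidate layers as graph nodes
A = rng.integers(0, 2, (n_nodes, n_nodes))
A = ((A + A.T) > 0).astype(float)            # symmetric adjacency matrix
A_hat = A + np.eye(n_nodes)                  # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2

H = rng.standard_normal((n_nodes, in_dim))   # node features
W = rng.standard_normal((in_dim, out_dim))   # trainable weights
H_next = np.maximum(0.0, A_norm @ H @ W)     # one propagation step
print(H_next.shape)                          # (6, 4)
```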

      • Baltic Dry Index (BDI) Prediction Using Recurrent Neural Networks

        한민수(Min Soo Han),유성진(Song Jin Yu) 한국항해항만학회 2017 한국항해항만학회 학술대회논문집 Vol.2017 No.추계 (Fall)

        With uncertainty amplified by the prolonged shipping recession, forecasting market trends has become as important as understanding them. This paper studies prediction of the Baltic Dry Index (BDI) with artificial neural networks (ANNs), which have recently drawn attention as a solution to complex problems that are hard to tackle by conventional means. We implemented two recurrent architectures, a plain Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM), and, for comparison, a Multilayer Perceptron (MLP), trained on data covering 2009.04.01 to 2017.07.31. We also ran ARIMA, a conventional statistical time-series method, and compared its forecasts with those of the neural networks. As a result, the recurrent networks performed best, with the RNN outperforming the others, and the results confirmed the applicability of LSTM to this specific time series (BDI).
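        A minimal PyTorch sketch of the kind of setup the paper describes, one-step-ahead forecasting of a univariate series with an LSTM, follows; the synthetic series stands in for the BDI, and the window length, hidden size, and training budget are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic stand-in for the BDI series (noisy sinusoid).
series = torch.sin(torch.linspace(0, 20, 500)) + 0.1 * torch.randn(500)

window = 30                                  # look-back length (assumed)
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                          # next-step targets
X = X.unsqueeze(-1)                          # (samples, window, 1 feature)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)                # out: (batch, window, hidden)
        return self.head(out[:, -1]).squeeze(-1)  # predict the next value

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):                      # full-batch training, for brevity
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final RMSE:", loss.sqrt().item())
```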

      • KCI-indexed

        A Recurrent Modular Network for Learning Environment and Behavior Patterns Based on Emotional Assessment

        김성주(Seong-Joo Kim),최우경(Woo-Kyung Choi),김용민(Yong-Min Kim),전홍태(Hong-Tae Jeon) 한국지능시스템학회 2004 한국지능시스템학회논문지 Vol.14 No.1

        Rational judgment is affected by emotion, so adding basic, universal emotions, estimated from environmental information, should make robots more intelligent and human-friendly. Learning emotion, however, first requires processing diverse sensory information and classifying patterns, which demands a suitable network architecture. Neural networks have a superior ability to extract the characteristics of a system, but they suffer from temporal crosstalk and convergence to local minima. To overcome these defects, various modular neural networks have been proposed, which divide a complex problem into several simpler sub-problems. The modular neural network introduced by Jacobs and Jordan shows an excellent ability to recompose and recombine complex work. Recurrent networks, on the other hand, acquire state representations, which makes them suitable for diverse applications such as nonlinear prediction and modeling. In this paper, we use a recurrent network as the expert network in the modular neural network structure in order to learn data patterns based on emotional assessment. To show the performance of the proposed network, we simulate the learning of environment and behavior patterns with a real-time implementation. The given problem is very complex and has too many cases to learn. The results demonstrate the capability of the proposed network and are compared with those of a general modular neural network.
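        The following numpy sketch illustrates the Jacobs & Jordan mixture-of-experts idea the paper builds on: a gating network softmax-weights the outputs of several experts. In the paper each expert is a recurrent network; here the experts are plain one-layer nets, and all sizes are illustrative assumptions.

```python
import numpy as np

# Forward pass of a modular (mixture-of-experts) network: experts propose
# outputs, a gating network decides how much to trust each one.

rng = np.random.default_rng(0)
in_dim, out_dim, n_experts = 10, 3, 4
x = rng.standard_normal(in_dim)            # e.g., sensory input features

# Expert networks: each proposes its own output for the same input.
W_e = rng.standard_normal((n_experts, out_dim, in_dim))
expert_out = np.tanh(W_e @ x)              # (n_experts, out_dim)

# Gating network: divides the complex problem into simpler sub-problems
# by softly assigning the input to experts.
W_g = rng.standard_normal((n_experts, in_dim))
g = np.exp(W_g @ x)
g /= g.sum()                               # softmax gate weights

y = g @ expert_out                         # blended output
print(g.round(3), y.shape)                 # gate weights, (3,)
```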

      • KCI-indexed

        Optimization of Memristor Devices for Reservoir Computing

        이종환,박경우,심현진,오호빈 한국반도체디스플레이기술학회 2024 반도체디스플레이기술학회지 Vol.23 No.1

        Recently, artificial neural networks have been playing a crucial role and advancing across various fields. Artificial neural networks are typically categorized into feedforward neural networks and recurrent neural networks. Feedforward neural networks, however, are primarily used for processing static spatial patterns such as image recognition and object detection; they are not suitable for handling temporal signals. Recurrent neural networks, on the other hand, face the challenges of complex training procedures and significant computational requirements. In this paper, we propose memristors suitable for reservoir computing systems, an advanced form of recurrent neural network, utilizing a mask processor. Using the characteristic equations of Ti/TiOx/TaOy/Pt, Pt/TiOx/Pt, and Ag/ZnO-NW/Pt memristors, we generated current-voltage curves and verified their memristive behavior by confirming hysteresis. We then trained and tested reservoir computing systems based on these memristors with the NIST TI-46 database. Among these systems, the accuracy of the reservoir computing system based on Ti/TiOx/TaOy/Pt memristors reached 99%, confirming that the Ti/TiOx/TaOy/Pt memristor structure is suitable for speech recognition tasks.
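        A minimal software sketch of reservoir computing may help: a fixed random recurrent reservoir plus a trained linear readout. In the paper the reservoir dynamics come from memristor devices driven through a mask processor; here an echo-state-style tanh reservoir stands in, and the sizes and ridge penalty are assumptions.

```python
import numpy as np

# Echo-state-style reservoir: only the linear readout is trained;
# the recurrent reservoir weights stay fixed.

rng = np.random.default_rng(0)
n_res, T = 100, 400
u = np.sin(np.linspace(0, 8 * np.pi, T))        # input signal
target = np.roll(u, -1)                         # task: predict the next sample

W_in = 0.5 * rng.standard_normal(n_res)         # fixed input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W))) # spectral radius < 1 for stability

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)            # reservoir dynamics
    states[t] = x

ridge = 1e-6                                    # ridge-regression readout
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
print("train RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```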

      • KCI-indexed

        Development of Multichannel Passive Sonar Signal Separation Using 3-D Tensors and Recurrent-Neural-Network-Based Deep Neural Networks

        이상헌,정동규,유재석 한국음향학회 2023 韓國音響學會誌 Vol.42 No.4

        In underwater signal processing, separating individual signals from mixed signals has long been a challenge due to low signal quality. The common method, spectrogram analysis via the short-time Fourier transform, has been criticized for its complex parameter optimization and for losing phase information. Building on the success of the Dual-path Recurrent Neural Network on long time-series signals, we propose a Triple-path Recurrent Neural Network that handles the three-dimensional tensors formed from multichannel sensor input signals. The method divides the input signals into short chunks and builds a 3-D tensor that captures the relationships within chunks, between chunks, and between channels, enabling both local and global feature learning. The proposed technique demonstrates improved Root Mean Square Error and Scale-Invariant Signal-to-Noise Ratio compared to the existing method.
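        The 3-D tensor construction is easy to sketch. The numpy snippet below, with assumed chunk length, hop, and channel count, cuts a multichannel signal into overlapping chunks to form a (channel, chunk, intra-chunk time) tensor; a triple-path model would then alternate RNN passes along each of the three axes.

```python
import numpy as np

# Build the (channel, chunk, time-in-chunk) tensor from a multichannel
# recording. Chunk length, hop, and channel count are assumptions.

rng = np.random.default_rng(0)
n_ch, T = 4, 16000                      # e.g., a 4-sensor array recording
signal = rng.standard_normal((n_ch, T))

chunk, hop = 250, 125                   # 50% overlap between chunks
n_chunks = (T - chunk) // hop + 1
idx = np.arange(chunk)[None, :] + hop * np.arange(n_chunks)[:, None]
tensor = signal[:, idx]                 # (channel, chunk, time-in-chunk)
print(tensor.shape)                     # (4, 127, 250)

# The three processing paths each scan one axis while the other two are
# folded into the batch dimension:
intra = tensor.transpose(0, 1, 2)       # within-chunk (local) dependencies
inter = tensor.transpose(0, 2, 1)       # across-chunk (global) dependencies
cross = tensor.transpose(1, 2, 0)       # across-channel (spatial) dependencies
print(intra.shape, inter.shape, cross.shape)
```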

      • Analysis of Performance Parameters of Microstrip Low Pass Filter with Open Stub at 1.08 GHz Using ANN

        Vishakha Dayal Shrivastava,Vandana Vikas Thakare 보안공학연구지원센터 2016 International Journal of Signal Processing, Image Processing and Pattern Recognition Vol.9 No.11

        This paper presents an analysis of the performance parameters, i.e., insertion loss and return loss, of a microstrip low-pass filter with an open stub using artificial neural networks. The neural network predicts the filter's performance parameters as a function of its stub length. Levenberg-Marquardt training of an FFBP-ANN (feed-forward back-propagation ANN), a Layer-Recurrent ANN, and a CFBP-ANN (cascaded forward back-propagation ANN) was used to implement the neural network models. The values used for training and testing the networks were obtained by simulating the LPF structure in CST Microwave Studio. A comparison of the mean square errors obtained from the different networks shows that the CFBP-ANN gives better results than the FFBP-ANN and the Layer-Recurrent ANN, and the neural model's test outputs agree well with the simulated outputs.
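        As a rough illustration of the modeling task (not the authors' code), the sketch below trains a small feedforward network to map stub length to [insertion loss, return loss]. The training pairs are synthetic placeholders for the CST simulation data, and since scikit-learn has no Levenberg-Marquardt solver, L-BFGS stands in for the paper's training algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Surrogate model: stub length (mm) -> [S21, S11] in dB.
# The "simulation data" below is a synthetic placeholder.

rng = np.random.default_rng(0)
stub_len = np.linspace(2.0, 12.0, 40)[:, None]        # assumed stub-length range, mm
s21 = -0.5 - 0.1 * (stub_len[:, 0] - 7.0) ** 2        # fake insertion loss, dB
s11 = -20.0 + 1.5 * np.abs(stub_len[:, 0] - 7.0)      # fake return loss, dB
y = np.column_stack([s21, s11]) + 0.05 * rng.standard_normal((40, 2))

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(stub_len, y)

test = np.array([[6.5]])
print("predicted [S21, S11] at 6.5 mm:", model.predict(test)[0])
print("train MSE:", np.mean((model.predict(stub_len) - y) ** 2))
```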

      • KCI-indexed

        Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High-Resolution Spectral Features

        김형국,김진영 한국전자통신연구원 2017 ETRI Journal Vol.39 No.6

        Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception-based spatial and spectral-domain noise-reduced harmonic features are extracted from multichannel audio and used as high-resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short-term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.
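        A minimal PyTorch sketch of the detection architecture described, a GRU over spectral feature frames with per-frame, per-class sigmoid outputs so that temporally overlapping events can be active simultaneously, follows; the feature size, class count, and 0.5 threshold are assumptions, and the paper's spatial and harmonic feature extraction is not reproduced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_feats, n_classes = 40, 6                   # assumed feature and class counts

class GRUEventDetector(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=n_feats, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, x):                    # x: (batch, frames, n_feats)
        out, _ = self.gru(x)
        # Sigmoid (not softmax): several events may overlap in one frame.
        return torch.sigmoid(self.head(out)) # (batch, frames, n_classes)

model = GRUEventDetector()
frames = torch.randn(1, 100, n_feats)        # 100 spectral feature frames
probs = model(frames)
active = probs > 0.5                         # multi-label decision per frame
print(probs.shape, int(active.sum()), "frame-class activations")
```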

      • KCI-indexed

        A Stock Price Prediction Model Based on Bidirectional LSTM Recurrent Neural Networks

        주일택(Il-Taeck Joo),최승호(Seung-Ho Choi) 한국정보전자통신기술학회 2018 한국정보전자통신기술학회논문지 Vol.11 No.2

        In this paper, we propose and evaluate a deep learning model that learns the fluctuation patterns of stock prices, a form of time-series data, and predicts future prices. Recurrent neural networks, which extend ordinary neural networks with a notion of time and can store previous information in the hidden layer, are well suited to such a model. To solve the vanishing gradient problem of recurrent neural networks and maintain long-term dependencies, we use LSTM units, which carry a small internal memory. Furthermore, to overcome the tendency of recurrent networks to learn only from the immediately preceding pattern, we implement the prediction model with a bidirectional LSTM recurrent neural network, in which a hidden layer is added that runs in the reverse direction of the data flow. In the experiments, the proposed model was trained with TensorFlow on stock price and trading volume inputs. To evaluate prediction performance, we computed the root mean square error between the real and predicted stock prices. The model using the bidirectional LSTM network produced smaller errors, and thus higher prediction accuracy, than a unidirectional LSTM network.
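        A minimal PyTorch sketch of the proposed setup follows: a bidirectional LSTM over windows of (price, volume) pairs, evaluated with RMSE as in the paper. The data is synthetic, and the window and hidden sizes are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T, window = 300, 20
price = torch.cumsum(torch.randn(T), dim=0) + 100.0  # synthetic price series
volume = torch.rand(T) * 10.0                        # synthetic trading volume
feats = torch.stack([price, volume], dim=1)          # (T, 2)
feats = (feats - feats.mean(0)) / feats.std(0)       # normalize both inputs

X = torch.stack([feats[i:i + window] for i in range(T - window)])
y = feats[window:, 0]                                # next-step (normalized) price

class BiLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)          # concat of both directions
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = BiLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("RMSE:", loss.sqrt().item())                    # the paper's evaluation metric
```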

      • KCI-indexed

        Inference of Context-Free Grammars Using Binary Third-order Recurrent Neural Networks and Genetic Algorithms

        정순호(Soon-Ho Jung) 한국컴퓨터정보학회 2012 韓國컴퓨터情報學會論文誌 Vol.17 No.3

        We present a method for inferring context-free grammars by applying a genetic algorithm to Binary Third-order Recurrent Neural Networks (BTRNN). A BTRNN is a multi-layered architecture of recurrent neural networks, one per input symbol, combined with an external stack. All parameters of the BTRNN are represented as binary numbers, and each state transition is performed together with a stack operation. We apply the genetic algorithm to BTRNN chromosomes and obtain an optimal BTRNN that infers the context-free grammar of positive and negative input patterns. The proposed method infers a BTRNN whose number of states is equal to or less than that of existing discrete recurrent neural network methods, using fewer examples and fewer learning trials. The BTRNN is also superior in recognition time complexity to recent methods whose chromosomes represent grammars directly, because it performs deterministic state transitions and stack operations during parsing. If the number of non-terminals is p, the number of terminals is q, the length of an input string is k, and the maximum number of BTRNN states is m, the parallel processing time is O(k) and the sequential processing time is O(km).
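        The genetic-algorithm loop itself is straightforward to sketch. In the numpy snippet below, a placeholder fitness (the number of 1-bits) stands in for the paper's actual fitness, i.e., the classification accuracy of the decoded BTRNN on positive and negative example strings; the population size and rates are likewise assumptions.

```python
import numpy as np

# Generic GA over binary chromosomes: selection, crossover, mutation.
# fitness() is a placeholder for evaluating a decoded BTRNN.

rng = np.random.default_rng(0)
pop_size, chrom_len, generations = 30, 64, 100
mut_rate = 0.02

def fitness(chrom):
    return chrom.sum()                 # placeholder for BTRNN accuracy

pop = rng.integers(0, 2, (pop_size, chrom_len))
for gen in range(generations):
    scores = np.array([fitness(c) for c in pop])
    # Tournament selection: the better of two random individuals survives.
    a, b = rng.integers(0, pop_size, (2, pop_size))
    parents = pop[np.where(scores[a] > scores[b], a, b)]
    # One-point crossover between consecutive parent pairs.
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, chrom_len)
        children[i, cut:], children[i + 1, cut:] = \
            parents[i + 1, cut:], parents[i, cut:].copy()
    # Bit-flip mutation.
    flips = rng.random((pop_size, chrom_len)) < mut_rate
    pop = np.where(flips, 1 - children, children)

best = pop[np.argmax([fitness(c) for c in pop])]
print("best fitness:", fitness(best), "of", chrom_len)
```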
