RISS (Research Information Sharing Service)

      • KCI-indexed

        Graph Convolutional Neural Architecture Search: Neural Architecture Search Using Graph Convolutional Neural Networks

        최수연,박종열 The International Promotion Agency of Culture Technology 2023 The Journal of the Convergence on Culture Technolo Vol.9 No.1

        This paper proposes the design of a neural network structure search model using graph convolutional neural networks. Because deep learning trains as a black box, it is hard to verify whether a designed model has a structure with optimized performance. A neural network structure search model consists of a recurrent neural network that generates a model and the convolutional neural network thus generated. Conventional neural network structure search models use recurrent neural networks; in this paper we propose GC-NAS, which instead uses graph convolutional neural networks to generate convolutional neural network models. GC-NAS uses a Layer Extraction Block to explore depth and, in parallel, a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) based on that depth information. Because depth information is reflected, the search space is wider, and because the search proceeds in parallel with the depth information, the purpose of each part of the search space is clear, so GC-NAS is judged superior in theoretical structure to conventional neural network structure search models. Through its graph convolutional neural network block and graph generation algorithm, GC-NAS is expected to resolve the high-dimensional time-axis problem and the limited spatial search range that recurrent neural networks impose on existing neural network structure search models. We also hope that the GC-NAS proposed in this paper will spur active research on applying graph convolutional neural networks to neural network structure search.
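The abstract describes feeding a generated network through graph-convolution blocks. A minimal sketch of one way to encode a candidate CNN as a graph and run a single graph-convolution step; the operation vocabulary, chain-shaped adjacency, and random weights below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

# Encode a candidate CNN as a graph: one node per layer, node features are
# one-hot operation types, edges follow the layer chain (plus self-loops).
layers = ["conv3x3", "conv3x3", "pool", "conv5x5", "dense"]
vocab = {"conv3x3": 0, "conv5x5": 1, "pool": 2, "dense": 3}
X = np.eye(len(vocab))[[vocab[l] for l in layers]]   # (5, 4) node features

n = len(layers)
A = np.eye(n)                                        # self-loops
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1                    # layer i feeds layer i+1

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in standard GCN layers.
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

W = np.random.default_rng(0).normal(size=(4, 8))     # learnable projection
H = np.maximum(A_norm @ X @ W, 0)                    # one GCN layer + ReLU
print(H.shape)                                       # per-layer embeddings
```

Each row of `H` is an embedding of one layer that already mixes in its neighbors' operation types, which is the kind of depth-aware signal a search controller could condition on.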

      • KCI-indexed

        A Recurrent Modular Network for Learning Environment and Behavior Patterns Based on Emotion Assessment

        김성주(Seong-Joo Kim),최우경(Woo-Kyung Choi),김용민(Yong-Min Kim),전홍태(Hong-Tae Jeon) Korean Institute of Intelligent Systems 2004 Journal of Korean Institute of Intelligent Systems Vol.14 No.1

        Rational judgment is affected by emotion, so adding emotion, estimated from environmental information, to robots should yield more intelligent and human-friendly robots. Learning human emotion, however, first requires learning diverse sensory information and classifying its patterns, which in turn requires a suitable network structure. Neural networks excel at extracting the characteristics of a system, but they suffer from temporal crosstalk and convergence to local minima. To address these defects, modular designs that divide a complex problem into several simple sub-problems have drawn attention. In this paper, to learn the many emotion assessments and data patterns involved, we combine the modular network introduced by Jacobs and Jordan, which shows an excellent ability to re-compose and re-combine complex work, with a recurrent neural network, whose state representations make it well suited to applications such as nonlinear prediction and modeling: the recurrent network serves as the expert network in the modular structure. To show the performance of the proposed network, learning of the environment and behavior patterns is simulated with a real-time implementation; the given problem is very complex and has a large number of cases to learn. The results demonstrate the ability of the proposed network and are compared with those of a conventional modular neural network.
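The Jacobs-and-Jordan modular network mentioned above is a mixture-of-experts design: a gating network softmax-weights the outputs of several expert networks. A toy sketch, where linear experts stand in for the paper's recurrent experts and all shapes are illustrative:

```python
import numpy as np

# Mixture-of-experts sketch: a gating network produces softmax weights over
# several experts; the blended output is the weighted sum of expert outputs.
rng = np.random.default_rng(1)
x = rng.normal(size=3)                                # input features

experts = [rng.normal(size=(2, 3)) for _ in range(4)]  # 4 toy linear experts
W_gate = rng.normal(size=(4, 3))                       # gating network

logits = W_gate @ x
g = np.exp(logits - logits.max())
g /= g.sum()                                           # gate weights sum to 1

y = sum(g_i * (E @ x) for g_i, E in zip(g, experts))   # blended expert output
print(g, y)
```

Training adjusts both the experts and the gate, so each expert specializes on the sub-problems where the gate routes it the most responsibility.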

      • Analysis of Performance Parameters of Microstrip Low Pass Filter with Open Stub at 1.08 GHz Using ANN

        Vishakha Dayal Shrivastava,Vandana Vikas Thakare Security Engineering Research Support Center 2016 International Journal of Signal Processing, Image Vol.9 No.11

        In the present paper, an analysis of the performance parameters, i.e., the insertion loss and return loss, of a microstrip low-pass filter with an open stub using artificial neural networks is presented. The artificial neural network predicts the performance parameters of the filter as a function of its stub length. Levenberg-Marquardt training of an FFBP-ANN (feed-forward back-propagation artificial neural network), a layer-recurrent ANN, and a CFBP-ANN (cascaded forward back-propagation artificial neural network) has been used to implement the neural network models. The simulated values used for training and testing the neural networks were obtained by analysing the LPF structure in CST Microwave Studio. A comparison of the mean squared errors of the different ANN networks shows that the CFBP-ANN gives more satisfactory results than the FFBP-ANN and the layer-recurrent ANN, and the neural model's test output is in good agreement with the simulated output.
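As a rough illustration of the FFBP-ANN idea, the sketch below fits a small feed-forward network to map a normalized stub length to two stand-in S-parameters. Synthetic data replaces the CST-simulated values, and plain gradient descent replaces Levenberg-Marquardt training, so this is a shape of the approach, not the paper's model:

```python
import numpy as np

# Feed-forward back-propagation regression: stub length -> (IL, RL) stand-ins.
rng = np.random.default_rng(2)
L_stub = np.linspace(0.0, 1.0, 40).reshape(-1, 1)          # normalized length
y = np.hstack([np.sin(3 * L_stub), np.cos(3 * L_stub)])    # synthetic targets

W1, b1 = rng.normal(size=(1, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)) * 0.5, np.zeros(2)
lr = 0.05

losses = []
for _ in range(500):
    h = np.tanh(L_stub @ W1 + b1)           # hidden layer
    pred = h @ W2 + b2                      # linear output layer
    err = pred - y
    losses.append((err ** 2).mean())
    # Back-propagate the squared error through both layers.
    gW2 = h.T @ err / len(y); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)        # tanh derivative
    gW1 = L_stub.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])                # training error should drop
```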

      • KCI-indexed

        Development of a Multi-Channel Passive Sonar Signal Separation Technique Using a 3-D Tensor and Recurrent-Neural-Network-Based Deep Neural Networks

        이상헌,정동규,유재석 The Acoustical Society of Korea 2023 The Journal of the Acoustical Society of Korea Vol.42 No.4

        In underwater signal processing, separating individual signals from mixed signals has long been a challenge due to low signal quality. The common approach, spectrogram analysis via the short-time Fourier transform, has been criticized for its difficult parameter optimization and its loss of phase information. Building on the Dual-path Recurrent Neural Network's success on long time-series signals, we propose a Triple-path Recurrent Neural Network that handles the three-dimensional tensors formed from multi-channel sensor input signals. By dividing the input signals into short chunks and building a 3-D tensor, the method accounts for relationships within chunks, between chunks, and between channels, enabling both local and global feature learning. The proposed technique achieves improved root mean square error and scale-invariant signal-to-noise ratio compared with the existing method.
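The chunking step described in the abstract can be sketched directly: split each channel of a multi-channel signal into fixed-length chunks to form a (channels, chunks, samples) tensor whose three axes carry the inter-channel, inter-chunk, and intra-chunk relationships. The chunk length below is an arbitrary assumption:

```python
import numpy as np

def make_chunk_tensor(signal, chunk_len):
    """signal: (channels, samples); any trailing remainder is dropped."""
    channels, samples = signal.shape
    num_chunks = samples // chunk_len
    trimmed = signal[:, : num_chunks * chunk_len]
    return trimmed.reshape(channels, num_chunks, chunk_len)

x = np.random.default_rng(3).normal(size=(4, 1030))   # 4 sensor channels
t = make_chunk_tensor(x, chunk_len=128)
print(t.shape)   # (channels, num_chunks, chunk_len)
```

A triple-path model would then alternate recurrent passes along each of the three axes, so short-range structure, long-range structure, and cross-channel structure are each learned by a dedicated path.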

      • KCI-indexed

        Deep Learning Architectures and Applications

        안성만(Ahn, SungMahn) Korea Intelligent Information Systems Society 2016 Journal of Intelligence and Information Systems Vol.22 No.2

        Deep learning models are a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have produced state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks because of their successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. It calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling.
        By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; a pooling layer simplifies the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem: gradients can vanish or explode. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
        It has since become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs.
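The three convolutional ideas summarized above (local receptive fields, shared weights, pooling) fit in a few lines of code. A pure-numpy sketch with a single feature map and no padding, for illustration only:

```python
import numpy as np

def conv2d_shared(img, kernel):
    """Slide ONE shared kernel over the image: each output neuron sees only a
    small local receptive field, and every position uses the same weights."""
    h, w = img.shape
    k = kernel.shape[0]
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + k, j:j + k] * kernel).sum()
    return out

def max_pool2x2(fmap):
    """2x2 max pooling: keep only the strongest response in each block."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    return fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.random.default_rng(4).normal(size=(8, 8))
feat = conv2d_shared(img, np.ones((3, 3)) / 9)   # 6x6 feature map
pooled = max_pool2x2(feat)                        # 3x3 after pooling
print(feat.shape, pooled.shape)
```

Because the kernel is shared, the layer detects the same feature at every location, and pooling then discards the exact position while keeping the detection, which is what gives convolutional networks their efficiency.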

      • KCI-indexed

        A Stock Price Prediction Model Based on a Bidirectional LSTM Recurrent Neural Network

        주일택(Il-Taeck Joo),최승호(Seung-Ho Choi) Korea Institute of Information, Electronics, and Communication Technology 2018 The Journal of Korea Institute of Information, Electronics, and Communication Technology Vol.11 No.2

        In this paper, we propose and evaluate a time-series deep learning model for learning stock price fluctuation patterns and predicting prices. Recurrent neural networks, which can store previous information in their hidden layer, suit a stock price prediction model, since stock prices are time-series data. To maintain long-term dependencies while avoiding the vanishing gradient problem of recurrent networks, we use LSTM cells, which carry a small memory inside the recurrent network. Furthermore, to overcome the tendency of recurrent networks to learn only from the immediately preceding pattern of the time series, we implement the prediction model with a bidirectional LSTM recurrent neural network, in which a hidden layer is added running against the direction of the data flow. In the experiments, the proposed model was trained in TensorFlow with stock price and trading volume as inputs. To evaluate prediction performance, the root mean square error between the real and predicted stock prices was computed. As a result, the model using the bidirectional LSTM recurrent neural network produced smaller errors, and thus better prediction accuracy, than the unidirectional LSTM recurrent neural network.
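The data preparation this kind of model implies can be sketched as sliding windows over (price, volume) pairs, plus the RMSE metric used for evaluation. The toy random-walk series and window length below are assumptions, and no LSTM is trained here:

```python
import numpy as np

def make_windows(series, window):
    """series: (T, features) -> (T - window, window, features) model inputs
    and (T - window,) next-step price targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0]              # next-step price is the target
    return X, y

def rmse(pred, true):
    return np.sqrt(((pred - true) ** 2).mean())

rng = np.random.default_rng(5)
prices = np.cumsum(rng.normal(size=100)) + 100      # toy random-walk prices
volume = rng.uniform(1e3, 1e4, size=100)
data = np.stack([prices, volume], axis=1)           # (100, 2)

X, y = make_windows(data, window=10)
naive = X[:, -1, 0]                                 # "last seen price" baseline
print(X.shape, rmse(naive, y))
```

An LSTM (or bidirectional LSTM) would consume `X` batch-wise and be scored against `y` with the same RMSE; any learned model should at least beat the naive last-price baseline.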

      • KCI-indexed

        Load Prediction Using Finite Element Analysis and Recurrent Neural Networks

        강정호 The Korean Society of Industry Convergence 2024 Journal of the Korean Society of Industry Convergence Vol.27 No.1

        Artificial neural networks, which enabled artificial intelligence, are being used in many fields, but their application to mechanical structures has several problems and remains under-researched. One of the problems is that it is difficult to secure the large amount of data necessary for training artificial neural networks. In particular, detecting and recognizing external forces is important for the safe operation of mechanical structures and for accident prevention. This study examined the feasibility of applying recurrent neural networks to detect and recognize the loads on a machine. Tens of thousands of samples are generally required to train a recurrent neural network, so to secure a large amount of data, this paper derives load data from ANSYS structural analysis results and applies a stacked auto-encoder technique to expand them to a learnable amount. The usefulness of the stacked auto-encoder data was examined by comparing it with the ANSYS data. In addition, to improve the accuracy of load detection and recognition with a recurrent neural network, optimal conditions are proposed by investigating the effects of the related functions.
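The stacked auto-encoder step described above can be sketched with a single (linear) auto-encoder layer trained to reconstruct load vectors; stacking repeats the same step layer by layer on the previous layer's codes. A synthetic stand-in replaces the ANSYS-derived load data:

```python
import numpy as np

# One auto-encoder layer: encode 8-dim load vectors into a 3-dim bottleneck,
# decode back, and train both maps to minimize reconstruction error.
rng = np.random.default_rng(6)
loads = rng.normal(size=(200, 8))          # stand-in for ANSYS load vectors

W_enc = rng.normal(size=(8, 3)) * 0.1      # 8 -> 3 bottleneck
W_dec = rng.normal(size=(3, 8)) * 0.1
lr = 0.01

errs = []
for _ in range(300):
    z = loads @ W_enc                      # encode
    recon = z @ W_dec                      # decode
    err = recon - loads
    errs.append((err ** 2).mean())
    gW_dec = z.T @ err / len(loads)        # gradient w.r.t. decoder
    gW_enc = loads.T @ (err @ W_dec.T) / len(loads)   # and encoder
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

print(errs[0], errs[-1])                   # reconstruction error should drop
```

Once trained, perturbing the codes `z` and decoding them is one simple way such a model can generate additional plausible load samples.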

      • Recurrent Neural Network-Based Model Predictive Control for Waypoint Tracking

        Ying Shuai Quan,Woo Young Choi,Seung-Hi Lee,Chung Choo Chung The Korean Society of Automotive Engineers 2019 KSAE Annual Conference Proceedings Vol.2019 No.5

        This paper presents a recurrent neural network-based model predictive control scheme for an autonomous driving vehicle. Model predictive control is effective for vehicle lateral control but too computationally expensive to apply in real-time control. To resolve this problem, we propose a recurrent neural network-based approximate model predictive controller. The offline-trained neural network models the waypoint-tracking system and reproduces the closed-loop performance. The performance of the approximate recurrent neural network model predictive control (RNN-MPC) is validated by computational experiments on a waypoint-tracking control scheme.

      • KCI-indexed (Excellent)

        Image Caption Generation Using a Recurrent Neural Network

        이창기(Changki Lee) Korean Institute of Information Scientists and Eng 2016 Journal of KIISE Vol.43 No.8

        Automatically generating captions for an image is a very difficult task, since it requires both computer vision and natural language processing technologies. The task nonetheless has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model that generates image captions from image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our models produce state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
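At decoding time, the CNN-feature-to-RNN-caption pipeline described above reduces to a greedy loop: seed the recurrent state with the image feature, then emit the highest-scoring word until an end token. A toy sketch with an invented four-word vocabulary, a fixed start word, and random untrained weights (all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
vocab = ["<end>", "a", "dog", "runs"]        # hypothetical tiny vocabulary
img_feat = rng.normal(size=8)                # stands in for a CNN feature

W_h = rng.normal(size=(8, 8)) * 0.3          # recurrent weights
W_x = rng.normal(size=(len(vocab), 8)) * 0.3 # word-embedding-to-state weights
W_o = rng.normal(size=(8, len(vocab))) * 0.3 # state-to-word logits

h = np.tanh(img_feat)                        # seed the state with the image
word, caption = 1, []                        # start from the word "a"
for _ in range(10):                          # cap caption length at 10
    h = np.tanh(h @ W_h + W_x[word])         # one recurrent step
    word = int(np.argmax(h @ W_o))           # greedy word choice
    if word == 0:                            # "<end>" token stops decoding
        break
    caption.append(vocab[word])

print(caption)
```

A trained model replaces the random matrices with learned ones and typically uses beam search rather than pure greedy decoding.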

      • KCI-indexed

        Inverse-Model-Based DC Motor Current Control Using a Recurrent Neural Network

        백동민,조현민 Institute of Control, Robotics and Systems 2024 Journal of Institute of Control, Robotics and Systems Vol.30 No.1

        This paper presents a method for microcontroller-based control using a recurrent neural network inverse model. The limited computational power of microcontrollers makes complex neural network structures hard to apply, so to address this we use the structurally simple Elman network, which has been used to model nonlinear control systems, as the inverse model. The proposed method places the recurrent neural network inverse model in parallel with a PID controller to enhance its performance: the recurrent network takes the output generated by the PID controller as the past control input and compensates the control inputs the PID controller generates. We applied the proposed controller to a DC motor current control system and compared its performance with that of a PID controller using a deep neural network as the inverse model, evaluating control performance on sine-wave references. The results show that the proposed controller tracks better at 1, 3, and 5 Hz than the other controllers.
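The parallel structure described above can be sketched as a PID loop whose output is augmented by an inverse-model feedforward term. The trained Elman network is replaced here by a placeholder static inverse of a toy first-order plant, and a step reference is used for simplicity (the paper evaluates sine references), so this shows only the wiring, not the paper's controller:

```python
import numpy as np

def inverse_model(ref):
    # Placeholder for the recurrent inverse model: for the toy plant
    # y' = -y + u (unit DC gain), the static inverse is simply u = ref.
    return ref

def simulate(use_compensation, steps=200, dt=0.01):
    Kp, Ki, Kd = 2.0, 5.0, 0.01              # illustrative PID gains
    i_term, prev_err, y = 0.0, 0.0, 0.0
    sq_errs = []
    for _ in range(steps):
        ref = 1.0                             # step reference
        err = ref - y
        i_term += err * dt
        u = Kp * err + Ki * i_term + Kd * (err - prev_err) / dt
        if use_compensation:
            u += inverse_model(ref)           # parallel feedforward term
        prev_err = err
        y += dt * (-y + u)                    # Euler step of the toy plant
        sq_errs.append(err ** 2)
    return float(np.sqrt(np.mean(sq_errs)))

rmse_pid = simulate(use_compensation=False)
rmse_comp = simulate(use_compensation=True)
print(rmse_pid, rmse_comp)   # compensation reduces the PID's error burden
```

Because the feedforward term supplies most of the required control input directly, the feedback loop only has to correct the residual model error, which is the intuition behind the parallel inverse-model design.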
