RISS (Academic Research Information Service)

      • KCI-indexed

        S2‐Net: Machine reading comprehension with SRU‐based self‐matching networks

        박천음, 이창기, 홍린, 황이규, 유태준, 장재용, 홍윤기, 배경훈, 김현기 · Electronics and Telecommunications Research Institute (ETRI) · 2019 · ETRI Journal Vol.41 No.3

        Machine reading comprehension is the task of understanding a given context and finding the correct response within it. A simple recurrent unit (SRU) is a model that solves the vanishing gradient problem in a recurrent neural network (RNN) using neural gates, as in a gated recurrent unit (GRU) and long short-term memory (LSTM); moreover, it removes the previous hidden state from the gate computations to improve speed over GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution because it can gather context information of similar meaning by computing attention weights over its own RNN sequence. In this paper, we construct a dataset for Korean machine reading comprehension and propose an S2-Net model that adds a self-matching layer to a multilayer SRU encoder. Experimental results show that the proposed S2-Net achieves 68.82% EM and 81.25% F1 (single) and 70.81% EM and 82.48% F1 (ensemble) on the Korean machine reading comprehension test dataset, and 71.30% EM and 80.37% F1 (single) and 73.29% EM and 81.54% F1 (ensemble) on the SQuAD dev dataset.
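
        As a rough illustration of the SRU recurrence described above: both gates depend only on the current input, so the matrix products can be precomputed for the whole sequence and only a cheap elementwise update remains serial. A minimal numpy sketch with illustrative names and shapes, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(X, W, W_f, b_f, W_r, b_r):
    """Run a single SRU layer over a sequence X of shape (T, d).

    Unlike a GRU or LSTM, the gates use only the current input x_t and
    not the previous hidden state, so all matrix products below are
    precomputed for the whole sequence; only the elementwise state
    update is sequential.
    """
    T, d = X.shape
    X_tilde = X @ W                # candidate values for every step
    F = sigmoid(X @ W_f + b_f)     # forget gates
    R = sigmoid(X @ W_r + b_r)     # reset (highway) gates
    c = np.zeros(d)                # internal cell state
    H = np.zeros((T, d))
    for t in range(T):
        c = F[t] * c + (1.0 - F[t]) * X_tilde[t]
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]  # highway connection
    return H

# Tiny smoke test with random placeholder weights.
rng = np.random.default_rng(0)
T, d = 5, 8
H = sru_layer(rng.standard_normal((T, d)),
              *(rng.standard_normal(s) for s in [(d, d), (d, d), (d,), (d, d), (d,)]))
print(H.shape)  # (5, 8)
```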

      • KCI-indexed

        Mention Detection Using Pointer Networks for Coreference Resolution

        박천음, 이창기, 임수종 · ETRI · 2017 · ETRI Journal Vol.39 No.5

        A mention has a noun or noun phrase as its head and forms a chunk that carries a meaning, including any modifiers. Mention detection is the extraction of mentions from a document; coreference resolution is determining which of those mentions refer to the same entity. A pointer network, a model based on a recurrent neural network encoder-decoder, outputs a list of elements corresponding to positions in an input sequence. In this paper, we propose mention detection using pointer networks. This approach can handle overlapped mentions, which a sequence-labeling approach cannot. Experimental results show that the proposed mention detection achieves an F1 of 80.75%, 8% higher than rule-based mention detection, and that coreference resolution built on it achieves a CoNLL F1 of 56.67% (mention boundary), 7.68% higher than coreference resolution using rule-based mention detection.
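
        The pointer-network step the abstract builds on can be sketched in a few lines. This is the standard additive-attention formulation with hypothetical weight names, not the paper's exact mention-detection model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_step(enc_H, dec_h, W1, W2, v):
    """One pointer-network decoding step.

    enc_H: (T, d) encoder hidden states, one per input token.
    dec_h: (d,)  current decoder state.
    Returns a probability distribution over the T input positions; for
    mention detection, the pointed-to position can mark a mention
    boundary, which allows overlapping mentions, unlike BIO labeling.
    """
    scores = np.tanh(enc_H @ W1 + dec_h @ W2) @ v   # additive attention
    return softmax(scores)
```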

      • KCI-indexed

        VS3‐NET: Neural variational inference model for machine‐reading comprehension

        박천음, 이창기, 송희준 · ETRI · 2019 · ETRI Journal Vol.41 No.6

        We propose the VS3-NET model for machine-reading comprehension question answering, which searches for an appropriate answer in a given context. VS3-NET trains a latent variable for each question using variational inference, on top of a simple recurrent unit-based sentence model and self-matching networks. Question types vary, and the answer depends on the type of question. To perform efficient inference and learning, we introduce neural question-type models to approximate the prior and posterior distributions of the latent variables, and we use these approximated distributions to optimize a reparameterized variational lower bound. The context given in machine-reading comprehension usually comprises several sentences, and performance degrades as the context grows longer; we therefore model a hierarchical structure using sentence encoding. Experimental results show that the proposed VS3-NET model achieves an exact-match score of 76.8% and an F1 score of 84.5% on the SQuAD test set.
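
        The reparameterized variational lower bound mentioned above rests on two small pieces: sampling the latent variable differentiably and penalizing the approximate posterior against the prior. A generic numpy sketch under the usual diagonal-Gaussian assumption; the paper's actual distributions and question-type networks are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterized_sample(mu, log_var):
    """Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, I),
    so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL(q || p) between diagonal Gaussians: the regularizer that keeps
    the approximate posterior q close to the prior p in the lower bound."""
    return 0.5 * np.sum(
        log_var_p - log_var_q
        + (np.exp(log_var_q) + (mu_q - mu_p) ** 2) / np.exp(log_var_p)
        - 1.0
    )
```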

      • KCI-indexed

        Simple and effective neural coreference resolution for Korean language

        박천음, 임준호, 류지희, 김현기, 이창기 · ETRI · 2021 · ETRI Journal Vol.43 No.6

        We propose an end-to-end neural coreference resolution model for Korean that uses an attention mechanism to point to mentions of the same entity. Because Korean is a head-final language, we focus on a method that uses a pointer network based on the head. The key idea is to treat all nouns in the document as candidates, exploiting the head-final characteristic of Korean, and to learn a distribution over referenced entity positions for each noun. Given the recent success of Bidirectional Encoder Representations from Transformers (BERT) in natural language processing tasks, we employ BERT in the proposed model to build word representations from contextual information. Experimental results indicate that the proposed model achieves state-of-the-art performance on Korean coreference resolution.
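
        One way to picture "learning distributions over referenced entity positions for each noun" is attention from each head noun over its candidate antecedents, computed on contextual (e.g., BERT) vectors. The bilinear scoring below is a hypothetical sketch, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def antecedent_distribution(H, head_idx, cand_idx, W):
    """Distribution over antecedent candidates for one head noun.

    H:        (N, d) contextual token vectors (e.g., from BERT).
    head_idx: index of the head noun being resolved.
    cand_idx: indices of candidate antecedents (e.g., preceding nouns,
              plus head_idx itself to mean "starts a new entity").
    """
    q = H[head_idx]                 # query: the head noun
    scores = H[cand_idx] @ (W @ q)  # bilinear score per candidate
    return softmax(scores)
```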

      • KCI-indexed (Excellent)

        Korean Machine Reading Comprehension Using Position Encoding-Based S³-Net

        박천음, 이창기, 김현기 · Korean Institute of Information Scientists and Engineers (KIISE) · 2019 · Journal of KIISE Vol.46 No.3

        S³-Net is a deep learning model for machine reading comprehension question answering (MRQA), based on the Simple Recurrent Unit (SRU) and on Self-Matching Networks, which compute attention weights over their own RNN sequence. In MRQA, the answer to a question occurs within the passage; because a passage consists of several sentences, the input sequence grows long and performance deteriorates. To address this long-context degradation, this paper proposes a hierarchical model that adds sentence-level encoding, together with an S³-Net that applies position encoding to capture word-order information. Experimental results show that the proposed S³-Net outperforms the previous S²-Net on the Korean machine reading comprehension dataset, achieving 69.43% EM and 81.53% F1 in the single test, and 71.28% EM and 82.67% F1 in the ensemble test.
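
        The abstract does not specify the form of the position encoding; assuming the standard sinusoidal encoding from the Transformer, a sketch of exposing word-order information looks like this:

```python
import numpy as np

def position_encoding(T, d):
    """Sinusoidal position encoding (Transformer-style); d must be even.

    Self-matching attention alone is order-insensitive, so the encoding
    is added to the inputs to expose word-order information.
    """
    assert d % 2 == 0
    pos = np.arange(T)[:, None]                # (T, 1)
    i = np.arange(0, d, 2)[None, :]            # (1, d/2)
    angles = pos / np.power(10000.0, i / d)    # (T, d/2)
    pe = np.zeros((T, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# For example, X = X + position_encoding(*X.shape) before the encoder.
```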

      • KCI-indexed (Excellent)

        Coreference Resolution Using Multi-Resolution Pointer Networks

        박천음, 이창기, 김현기 · KIISE · 2019 · Journal of KIISE Vol.46 No.4

        Multi-resolution RNN is a method of modeling parallel input sequences with RNNs. Coreference resolution is a natural language processing task in which the words referring to the same entity in a document are grouped into one cluster; it can be solved with a pointer network. In pointer-network coreference resolution, the encoder input sequence is all morphemes of the document and the decoder input sequence is all nouns appearing in the document. In this paper, we propose three multi-resolution pointer network models that encode a document's morphemes and its noun list in parallel and decode using both encoded hidden states, and we solve coreference resolution with them. Experimental results show that the proposed Multi-resolution1, Multi-resolution2, and Multi-resolution3 models achieve CoNLL F1 scores of 71.44%, 70.52%, and 70.59%, respectively.
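
        A minimal sketch of decoding with both encoded hidden states: the decoder attends over the morpheme-level encoding for fine-grained context and emits its pointer distribution over the noun-level encoding. Summing the decoder state with the morpheme context, as below, is one plausible combination, not any of the paper's three specific variants:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_resolution_step(H_morph, H_noun, dec_h, W_m, W_n, W_d, v_m, v_n):
    """One decoding step using both parallel encodings.

    H_morph: (T_m, d) hidden states over all morphemes of the document.
    H_noun:  (T_n, d) hidden states over the document's noun list.
    dec_h:   (d,) current decoder state.
    """
    # Context vector from attention over the morpheme-level encoder.
    a = softmax(np.tanh(H_morph @ W_m + dec_h) @ v_m)
    ctx = a @ H_morph
    # Pointer distribution over the noun list, conditioned on both.
    scores = np.tanh(H_noun @ W_n + (dec_h + ctx) @ W_d) @ v_n
    return softmax(scores)
```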

      • KCI-indexed

        Korean Coreference Resolution with Guided Mention Pair Model Using the Deep Learning

        박천음, 최경호, 이창기, 임수종 · ETRI · 2016 · ETRI Journal Vol.38 No.6

        General machine learning methods in natural language processing have the disadvantage of requiring significant time and effort for feature extraction and engineering. In recent years, these disadvantages have been addressed with deep learning. In this paper, we propose a mention-pair (MP) model using deep learning, and a coreference resolution system, an information extraction technique, that combines a rule-based and a deep learning-based system through a guided MP model. Our experimental results confirm that the proposed deep learning-based coreference resolution system achieves better performance than rule-based and statistics-based systems applied separately.
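
        A mention-pair model reduces coreference to binary classification over mention pairs; a minimal feed-forward sketch in numpy, with placeholder pair features and layer sizes rather than the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mention_pair_prob(m_i, m_j, W1, b1, w2, b2):
    """P(mention i and mention j corefer) from a small feed-forward net.

    m_i, m_j: (d,) learned mention representations; the network replaces
    hand-engineered pair features with learned ones.
    """
    x = np.concatenate([m_i, m_j, m_i * m_j])   # simple pair features
    h = np.tanh(x @ W1 + b1)
    return sigmoid(h @ w2 + b2)
```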

      • KCI-indexed

        Sentiment Analysis of Korean Movie Reviews Using BERT-Based Variational Inference and an RNN

        박천음, 이창기 · KIISE · 2019 · KIISE Transactions on Computing Practices Vol.25 No.11

        BERT, a model based on a bidirectional transformer, has recently delivered large performance gains in natural language processing. BERT applies byte pair encoding (BPE) to solve the out-of-vocabulary (OOV) problem, pre-trains a language model on that basis, and fine-tunes natural language processing tasks by adding an output layer. Sentiment analysis is the task of analyzing and classifying the latent meaning of a given sentence. In this work, we employ a BERT model whose language model was pre-trained on a large Korean corpus, so that the token representations generated by BERT can be used for sentiment analysis. In addition, we propose a method that uses BERT together with an RNN that encodes context information, and a method that performs sentiment analysis by applying variational inference to the hidden states encoded by the RNN.
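
        Putting the abstract's pieces in order: contextual token vectors (e.g., BERT output) are encoded by an RNN, a latent variable is sampled from the final state via the reparameterization trick, and the sample feeds the classifier. A toy numpy sketch with placeholder weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def sentiment_prob(H, W_in, U, W_mu, W_lv, w_out):
    """Toy pipeline: token vectors -> RNN -> latent sample -> classifier.

    H: (T, d_in) contextual token vectors (e.g., BERT output).
    All weights here are untrained placeholders for illustration.
    """
    h = np.zeros(U.shape[0])
    for x in H:                         # plain tanh RNN over the review
        h = np.tanh(x @ W_in + h @ U)
    mu, log_var = h @ W_mu, h @ W_lv    # variational parameters
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    return 1.0 / (1.0 + np.exp(-(z @ w_out)))   # P(positive review)
```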
