1 윤재민 ; 정유진 ; 이종혁, "Automatic Extractive Summarization of Newspaper Articles Using Activation Degree of the 5W1H" [육하원칙 활성화도를 이용한 신문기사 자동추출요약] Korean Institute of Information Scientists and Engineers 31 (31): 505-515, 2004
2 이경호 ; 박요한 ; 이공주, "Construction of a Korean Document Summarization Dataset Using News Articles and Social Media" [신문기사와 소셜 미디어를 활용한 한국어 문서요약 데이터 구축] Korea Information Processing Society 9 (9): 251-258, 2020
3 Alexis Conneau, "Unsupervised Cross-lingual Representation Learning at Scale" Association for Computational Linguistics 8440-8451, 2020
4 Jaewon Jeon, "Two-step Document Summarization using Deep Learning and Maximal Marginal Relevance" 347-349, 2019
5 Yang Liu, "Text Summarization with Pretrained Encoders" Association for Computational Linguistics 3730-3740, 2019
6 Chin-Yew Lin, "Text Summarization Branches Out" Association for Computational Linguistics 74-81, 2004
7 D. Shen, "Text Summarization" in Encyclopedia of Database Systems, Springer US 3079-3083, 2009
8 R. Nallapati, "SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents" 2017
9 Y. Liu, "RoBERTa: A Robustly Optimized BERT Pretraining Approach" 2019
10 Shashi Narayan, "Ranking Sentences for Extractive Summarization with Reinforcement Learning" Association for Computational Linguistics 1 : 1747-1759, 2018
11 Tsutomu Hirao, "Oracle Summaries of Compressive Summarization" Association for Computational Linguistics 275-280, 2017
12 Qingyu Zhou, "Neural Document Summarization by Jointly Learning to Score and Select Sentences" Association for Computational Linguistics 654-663, 2018
13 AI Hub, "Manual for text summarization dataset"
14 Chenguang Zhu, "Leveraging Lead Bias for Zero-shot Abstractive News Summarization" Association for Computing Machinery 1462-1471, 2021
15 S. Park, "KLUE: Korean Language Understanding Evaluation" arXiv preprint arXiv:2105.09680, 2021
16 A. Radford, "Improving language understanding by generative pre-training" 2018
17 Telmo Pires, "How Multilingual is Multilingual BERT?" Association for Computational Linguistics 4996-5001, 2019
18 Zi-Yi Dou, "GSum: A General Framework for Guided Neural Abstractive Summarization" Association for Computational Linguistics 4830-4842, 2021
19 Y. Liu, "Fine-tune BERT for Extractive Summarization" arXiv preprint arXiv:1903.10318, 2019
20 Wojciech Kryscinski, "Evaluating the Factual Consistency of Abstractive Text Summarization" Association for Computational Linguistics 9332-9346, 2020
21 M. E. Peters, "Deep Contextualized Word Representations" Association for Computational Linguistics 1 : 2227-2237, 2018
22 I. Loshchilov, "Decoupled Weight Decay Regularization" 2019
23 Matt Grenander, "Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses" Association for Computational Linguistics 6019-6024, 2019
24 Chris Kedzie, "Content Selection in Deep Learning Models of Summarization" Association for Computational Linguistics 1818-1828, 2018
25 Yue Dong, "BanditSum: Extractive Summarization as a Contextual Bandit" Association for Computational Linguistics 3739-3748, 2018
26 Jacob Devlin, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" Association for Computational Linguistics 4171-4186, 2019