권오욱,신종훈,서영애,임수종,허정,이기영,O.W. Kwon,J.H. Shin,Y.A. Seo,S.J. Lim,J. Heo,K.Y. Lee 한국전자통신연구원 2023 전자통신동향분석 Vol.38 No.6
Large language models seem promising for handling reasoning problems, but their underlying solving mechanisms remain unclear. They are expected to establish a new paradigm in artificial intelligence and in society as a whole. However, a major challenge of large language models is the massive amount of resources required for training and operation. To address this issue, researchers are actively exploring compact large language models that retain the capabilities of full-scale models while notably reducing the model size. These research efforts mainly focus on improving pretraining, instruction tuning, and alignment. Chain-of-thought prompting, on the other hand, is a technique aimed at enhancing the reasoning ability of large language models: given a problem, the model produces an answer through a series of intermediate reasoning steps. By guiding the model through a multistep problem-solving process, chain-of-thought prompting may improve the model's reasoning skills. Mathematical reasoning, a fundamental aspect of human intelligence, has played a crucial role in advancing large language models toward human-level performance and is therefore being widely explored in the context of large language models. Such research extends to various domains, including geometry problem solving, tabular mathematical reasoning, and visual question answering.
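As a concrete illustration of the prompting scheme this abstract describes, the following minimal Python sketch assembles a chain-of-thought prompt. The few-shot exemplar, its worked steps, and the `build_cot_prompt` helper are hypothetical illustrations, not taken from the surveyed papers.

```python
# Minimal sketch of chain-of-thought prompting: prepend a worked example
# whose answer is reached via explicit intermediate steps, so a completion
# model is nudged to reason step by step. Exemplar content is invented.

def build_cot_prompt(question: str) -> str:
    """Build a prompt with one worked exemplar followed by the new question."""
    exemplar = (
        "Q: A shop sells pens at 3 dollars each. How much do 4 pens cost?\n"
        "A: Each pen costs 3 dollars. 4 pens cost 4 * 3 = 12 dollars. "
        "The answer is 12.\n\n"
    )
    # The trailing cue invites the model to emit intermediate steps
    # before its final answer.
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If a train travels 60 km in 1.5 hours, what is its speed?")
print(prompt)
```

The resulting string would then be sent to whatever text-completion model is in use; the exemplar's step-by-step answer is what distinguishes this from a plain few-shot prompt.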
권오욱,이기영,이요한,노윤형,조민수,황금하,임수종,최승권,김영길,Kwon, O.W.,Lee, K.Y.,Lee, Y.H.,Roh, Y.H.,Cho, M.S.,Huang, J.X.,Lim, S.J.,Choi, S.K.,Kim, Y.K. 한국전자통신연구원 2021 전자통신동향분석 Vol.36 No.1
In this study, we introduce trends in and the future of digital personal assistants. Recently, digital personal assistants have begun to handle many tasks in a human-like way by communicating with users in natural language on smart devices such as smartphones, smart speakers, and smart cars. Their capabilities range from simple voice commands and chitchat to complex tasks such as device control, reservations, ordering, and scheduling. The digital personal assistants of the future will speak like a person, have a person-like personality, see, hear, and analyze situations like a person, and thereby become more human. The dialogue processing technology that makes them more human-like has developed into end-to-end learning models based on deep neural networks in recent years. In addition, language models pretrained on large corpora make dialogue processing more natural and improve understanding. Advances in artificial intelligence such as dialogue processing will enable digital personal assistants to serve users with greater familiarity and better performance in various areas.
권오욱,김회린,유창동,김봉완,이용주,Kwon Oh-Wook,Kim Hoi-Rin,Yoo Changdong,Kim Bong-Wan,Lee Yong-Ju 대한음성학회 2004 말소리 Vol.51 No.-
For educational and research purposes, a Korean speech recognition platform is designed. It is based on an object-oriented architecture and can be easily modified so that researchers can readily evaluate the performance of a recognition algorithm of interest. This platform will save development time for many who are interested in speech recognition. The platform includes the following modules: noise reduction, end-point detection, mel-frequency cepstral coefficient (MFCC)- and perceptual linear prediction (PLP)-based feature extraction, hidden Markov model (HMM)-based acoustic modeling, n-gram language modeling, n-best search, and Korean language processing. The decoder of the platform can handle both lexical search trees for large vocabulary speech recognition and finite-state networks for small-to-medium vocabulary speech recognition. It performs a word-dependent n-best search with a bigram language model in the first forward search stage, then extracts a word lattice and rescores each lattice path with a trigram language model in the second stage.
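The two-pass search strategy in the last sentence can be sketched with a toy example: a cheap bigram model prunes the hypothesis set in the first pass, and a trigram model rescores the survivors in the second. All vocabulary, probabilities, and the back-off constant below are invented for illustration and are not part of the platform.

```python
import math

# Toy two-pass decoding: bigram ranking, then trigram rescoring.
# Probabilities are made up; a real decoder scores lattice paths, not
# whole sentences, and uses proper back-off smoothing.
BIGRAM = {("<s>", "recognize"): 0.6, ("<s>", "wreck"): 0.4,
          ("recognize", "speech"): 0.7, ("wreck", "a"): 0.5,
          ("a", "nice"): 0.6, ("nice", "beach"): 0.5}
TRIGRAM = {("<s>", "recognize", "speech"): 0.8,
           ("<s>", "wreck", "a"): 0.2,
           ("wreck", "a", "nice"): 0.3,
           ("a", "nice", "beach"): 0.3}

def score(words, lm, order):
    """Log-probability of a word sequence under an n-gram table."""
    ctx = ["<s>"] * (order - 1)
    logp = 0.0
    for w in words:
        logp += math.log(lm.get(tuple(ctx + [w]), 1e-3))  # crude back-off stub
        ctx = (ctx + [w])[1:]
    return logp

def two_pass(hypotheses, n_best=2):
    # First pass: keep the n best hypotheses under the bigram model.
    survivors = sorted(hypotheses, key=lambda h: -score(h, BIGRAM, 2))[:n_best]
    # Second pass: rerank the survivors with the trigram model.
    return max(survivors, key=lambda h: score(h, TRIGRAM, 3))

best = two_pass([["recognize", "speech"], ["wreck", "a", "nice", "beach"]])
print(best)  # → ['recognize', 'speech']
```

The design point the abstract makes is that the expensive trigram model only ever sees the small survivor set, which is what makes the two-stage scheme cheaper than a single trigram-driven search.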
권오욱,최승권,노윤형,김영길,박전규,이윤근,Kwon, O.W.,Choi, S.K.,Roh, Y.H.,Kim, Y.K.,Park, J.G.,Lee, Y.K. 한국전자통신연구원 2015 전자통신동향분석 Vol.30 No.4
With the mobile revolution and the era of big data and the Internet of Things, controlling and using a wide range of devices and services through human speech has come to be taken for granted. Spoken dialogue processing technology will evolve toward recognizing, understanding, and processing free, human-centered utterances. This article reviews domestic and international technology and industry trends, as well as intellectual property trends, in spoken dialogue processing, and describes the concept and development direction of human-centered, free-utterance spoken dialogue processing technology.
권오욱,홍택규,황금하,노윤형,최승권,김화연,김영길,이윤근,Kwon, O.W.,Hong, T.G.,Huang, J.X.,Roh, Y.H.,Choi, S.K.,Kim, H.Y.,Kim, Y.K.,Lee, Y.K. 한국전자통신연구원 2019 전자통신동향분석 Vol.34 No.4
In this study, we introduce trends in neural-network-based deep learning research applied to dialogue systems. Recently, end-to-end trainable goal-oriented dialogue systems using long short-term memory, sequence-to-sequence models, and related architectures have been studied to overcome the difficulties of domain adaptation and of error recognition and recovery in traditional pipeline-based goal-oriented dialogue systems. In addition, some research has applied reinforcement learning to end-to-end trainable goal-oriented dialogue systems so that they can learn dialogue strategies that do not appear in training corpora. Recent neural network models for end-to-end trainable chitchat systems have been improved by using dialogue context as well as personal and topic information to produce more natural, human-like conversation. Unlike previous studies that applied different approaches to goal-oriented dialogue systems and chitchat systems, recent studies attempt to apply end-to-end trainable approaches based on deep neural networks to both. Large dialogue corpora are required to train such models; therefore, future research will focus on acquiring dialogue corpora easily and cheaply, and on training with small annotated dialogue corpora and/or large collections of raw dialogues.
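The reinforcement-learning idea mentioned above can be illustrated with a deliberately tiny example: tabular Q-learning on an invented three-state slot-filling "dialogue," where the agent must learn to request a missing slot before confirming a booking. Real systems learn neural policies over large state spaces; every state, action, and reward here is a toy assumption.

```python
import random

# Toy dialogue MDP: from "empty" the agent should "request" the missing
# slot (fills it), then "confirm" from "filled" to finish with reward +1.
# Confirming too early or requesting redundantly is penalized.
STATES = ["empty", "filled", "done"]
ACTIONS = ["request", "confirm"]

def step(state, action):
    """Environment transition: returns (next_state, reward)."""
    if state == "empty":
        return ("filled", 0.0) if action == "request" else ("empty", -1.0)
    if state == "filled":
        return ("done", 1.0) if action == "confirm" else ("filled", -0.1)
    return ("done", 0.0)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "empty"
        while state != "done":
            # Epsilon-greedy action selection.
            action = (rng.choice(ACTIONS) if rng.random() < eps
                      else max(ACTIONS, key=lambda a: q[(state, a)]))
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update.
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES[:2]}
print(policy)  # → {'empty': 'request', 'filled': 'confirm'}
```

The point of interest is that the "ask before confirming" strategy is never written down anywhere; it emerges purely from the reward signal, which is the appeal of reinforcement learning for dialogue strategies absent from training corpora.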
권오욱,권석봉,장규철,윤성락,김용래,장광동,김회린,유창동,김봉완,이용주,Kwon Oh-Wook,Kwon Sukbong,Jang Gyucheol,Yun Sungrack,Kim Yong-Rae,Jang Kwang-Dong,Kim Hoi-Rin,Yoo Changdong,Kim Bong-Wan,Lee Yong-Ju 한국음향학회 2005 韓國音響學會誌 Vol.24 No.8
We introduce a Korean speech recognition platform (ECHOS) developed for education and research purposes. ECHOS lowers the entry barrier to speech recognition research and can be used as a reference engine by providing elementary speech recognition modules. It has a simple, easy-to-understand object-oriented architecture, implemented in C++ with the standard template library (STL). The input of ECHOS is digital speech data sampled at 8 or 16 kHz. Its output is the 1-best recognition result, N-best recognition results, and a word graph. The recognition engine is composed of MFCC/PLP feature extraction, HMM-based acoustic modeling, n-gram language modeling, and finite state network (FSN)- and lexical tree-based search algorithms. It can handle various tasks from isolated word recognition to large vocabulary continuous speech recognition. To validate the platform, we compare the performance of ECHOS with that of the hidden Markov model toolkit (HTK). In an FSN-based command recognition task, ECHOS shows word accuracy similar to HTK, while the recognition time is roughly doubled because of the object-oriented implementation. For an 8000-word continuous speech recognition task, using a lexical tree search algorithm different from the one used in HTK, ECHOS increases the word error rate by 40% relatively but reduces the recognition time by half.
A Domain Adaptation Method for an LHMM-Based English Part-of-Speech Tagger
권오욱(Oh-Woog Kwon),김영길(Young-Gil Kim) 한국정보과학회 2010 정보과학회 컴퓨팅의 실제 논문지 Vol.16 No.10
A large number of current language processing systems use a part-of-speech tagger for preprocessing, and most of them require a tagger with the highest possible accuracy: improving the tagger's performance can contribute substantially to the overall performance of the system. In particular, exploiting domain-specific characteristics has become a hot issue in the machine translation community as a way to improve translation quality. This paper describes a method for adapting an HMM- or LHMM-based English part-of-speech tagger from a general domain to a specific domain. The proposed method semi-automatically adjusts the pretrained transition and output probabilities of the HMM or LHMM using a raw corpus from the target domain. In experiments adapting the tagger to the patent domain, our LHMM tagger achieves a word-level tagging accuracy of 98.87% and a sentence-level tagging accuracy of 78.5%. Compared with the unadapted general-domain tagger, this represents an improvement of 2.24% in word-level accuracy (ERR: 66.4%) and 41.0% in sentence-level accuracy (ERR: 65.6%).
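The adaptation idea can be sketched with a toy: re-estimate the tagger's output (emission) probabilities by interpolating the general-domain model with counts gathered from in-domain text. The paper's exact semi-automatic procedure is not reproduced here; the tag set, corpus, `adapt_emissions` helper, and interpolation weight below are illustrative assumptions only.

```python
from collections import Counter

# Toy domain adaptation of HMM output probabilities:
#   P_adapted(word|tag) = lam * P_general(word|tag) + (1-lam) * P_domain(word|tag)
# where P_domain comes from (tag, word) pairs observed in in-domain text.

def adapt_emissions(general, domain_tagged, lam=0.7):
    """Interpolate general-domain emission probabilities with domain counts."""
    counts = Counter(domain_tagged)                     # (tag, word) -> count
    totals = Counter(tag for tag, _ in domain_tagged)   # tag -> count
    adapted = {}
    for tag, word in set(general) | set(counts):
        p_gen = general.get((tag, word), 0.0)
        p_dom = counts[(tag, word)] / totals[tag] if totals[tag] else 0.0
        adapted[(tag, word)] = lam * p_gen + (1 - lam) * p_dom
    return adapted

# In general English, "claim" is most often a verb; in patent text it is
# overwhelmingly a noun ("the claims of the invention"). Numbers invented.
general = {("NN", "claim"): 0.01, ("VB", "claim"): 0.05}
domain = [("NN", "claim")] * 9 + [("NN", "invention")] * 1
adapted = adapt_emissions(general, domain)
print(adapted[("NN", "claim")] > adapted[("VB", "claim")])  # → True
```

The sketch shows why raw in-domain text helps: after interpolation the noun reading of "claim" dominates, matching patent usage, even though the general-domain model preferred the verb reading.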