Sangha Nam, Incheol Kim. Korean Institute of Information Scientists and Engineers, 2015. Journal of KIISE Vol.42 No.5
In order to answer questions successfully on behalf of a human contestant in DeepQA environments such as 'Jeopardy!', the American quiz show, a computer needs the capability for fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present a hybrid spatial reasoning algorithm for handling directional and topological relations. Because it combines forward and backward reasoning, our algorithm not only improves query processing time by reducing unnecessary reasoning computations, but also deals effectively with changes to the spatial knowledge base. Through experiments performed on a sample spatial knowledge base with a reasoner implementing our algorithm, we demonstrate the high performance of our hybrid spatial reasoning approach.
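The forward/backward combination the abstract describes can be sketched as a toy (the paper's actual reasoner, relation calculus, and knowledge base are not reproduced here; the relations `north_of`/`south_of`/`inside`/`contains` and the place names are illustrative assumptions):

```python
from collections import defaultdict

# Hypothetical transitive relations and inverse-relation pairs.
TRANSITIVE = {"north_of", "inside"}
INVERSE = {"south_of": "north_of", "contains": "inside"}

class HybridSpatialReasoner:
    """Forward step: eagerly materialize the transitive closure so repeated
    queries become set lookups. Backward step: derive inverse relations only
    on demand, avoiding needless precomputation."""

    def __init__(self, facts):
        self.closure = defaultdict(set)
        for s, r, o in facts:
            self.closure[(r, s)].add(o)
        self._forward()

    def _forward(self):
        # Fixed-point iteration: propagate each transitive relation.
        changed = True
        while changed:
            changed = False
            for (r, s), objs in list(self.closure.items()):
                if r not in TRANSITIVE:
                    continue
                for o in list(objs):
                    for o2 in self.closure.get((r, o), ()):
                        if o2 not in self.closure[(r, s)]:
                            self.closure[(r, s)].add(o2)
                            changed = True

    def query(self, s, r, o):
        # Backward step: rewrite an inverse relation into its stored form.
        if r in INVERSE:
            s, r, o = o, INVERSE[r], s
        return o in self.closure.get((r, s), set())

facts = [("Seoul", "north_of", "Busan"), ("Busan", "north_of", "Jeju")]
kb = HybridSpatialReasoner(facts)
print(kb.query("Seoul", "north_of", "Jeju"))  # True, via transitivity
print(kb.query("Jeju", "south_of", "Seoul"))  # True, via inverse rewrite
```

Materializing the closure up front trades memory for query speed; answering inverse relations backward, at query time, keeps the stored closure small when the knowledge base changes.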
Applying Multi-Sense Word Embeddings to Improve the Performance of a CNN-Based Relation Extraction Model
Sangha Nam, Kijong Han, Eun-kyung Kim, Sunggoo Kwon, Yoosung Jung, Key-Sun Choi. Korean Institute of Information Scientists and Engineers, 2018. Journal of KIISE Vol.45 No.8
The relation extraction task is to classify the relation between two entities in an input sentence, and it is important in natural language processing and knowledge extraction. Many studies have designed relation extraction models using distant supervision, and deep-learning models such as CNNs and RNNs have recently become mainstream. However, existing studies do not address the homograph problem in the word embeddings used as model input: training proceeds with a single embedding for homographs that carry different meanings, so the relation extraction model is trained without accurately grasping the meaning of each word. In this paper, we propose a relation extraction model that uses multi-sense word embeddings. To learn the multi-sense word embeddings, we used a word-sense disambiguation module based on the CoreNet concept, and the relation extraction model used CNN and PCNN models to learn key words in sentences.
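The sense-aware lookup that distinguishes homographs can be sketched as follows (a minimal toy: the vectors, sense keys, and the context-overlap heuristic are assumptions for illustration; the paper's CoreNet-based disambiguation module and the CNN/PCNN classifiers are not reproduced):

```python
# One embedding per sense of a homograph, plus a small signature of
# context words that indicate that sense.
SENSE_EMBEDDINGS = {
    "bank": {
        "bank%finance": ([1.0, 0.0], {"money", "loan", "deposit"}),
        "bank%river":   ([0.0, 1.0], {"river", "water", "shore"}),
    }
}

def embed(word, context_words):
    """Return the embedding of the sense whose signature words overlap
    the sentence context the most; None if the word is unknown."""
    senses = SENSE_EMBEDDINGS.get(word)
    if senses is None:
        return None
    vector, _ = max(senses.values(),
                    key=lambda sv: len(sv[1] & set(context_words)))
    return vector

print(embed("bank", ["the", "river", "bank", "was", "muddy"]))  # [0.0, 1.0]
```

In the full model, the disambiguated vectors, rather than a single shared vector per surface form, would be stacked into the sentence matrix fed to the CNN, so the convolution filters see sense-specific inputs.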
Jiho Kim, Sangha Nam, Key-Sun Choi. Korean Institute of Information Scientists and Engineers, 2018. Journal of KIISE Vol.45 No.9
The purpose of a knowledge base is to incorporate the world's knowledge in a format that machines can understand. To be useful, a knowledge base must continuously acquire and add new knowledge, which it cannot do without a knowledge-acquisition capability. Knowledge is mainly acquired by analyzing natural-language sentences, while acquisition from the knowledge base's own internal structure has been relatively neglected. In this paper, we introduce a non-negative matrix factorization method for knowledge base population. The model transforms a knowledge base into a matrix, learns a latent feature vector for each entity tuple and relation by decomposing the matrix, and recombines these vectors to score the reliability of candidate new knowledge. To demonstrate the effectiveness and superiority of our method, we present results of experiments and analysis performed on Korean DBpedia.
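The factorize-then-score idea can be sketched with standard multiplicative-update NMF on a tiny binary matrix (a minimal sketch, not the paper's training setup: the matrix layout, rank, iteration count, and example triples are assumptions):

```python
import random

def nmf(X, k, iters=200, seed=0):
    """Factor X (n x m, non-negative) into W (n x k) and H (k x m) using
    Lee-Seung multiplicative updates, which keep both factors non-negative."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        WH = [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(m)]
              for i in range(n)]
        for i in range(n):           # W update: W *= (X H^T) / (W H H^T)
            for t in range(k):
                num = sum(X[i][j] * H[t][j] for j in range(m))
                den = sum(WH[i][j] * H[t][j] for j in range(m)) + eps
                W[i][t] *= num / den
        WH = [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(m)]
              for i in range(n)]
        for t in range(k):           # H update: H *= (W^T X) / (W^T W H)
            for j in range(m):
                num = sum(W[i][t] * X[i][j] for i in range(n))
                den = sum(W[i][t] * WH[i][j] for i in range(n)) + eps
                H[t][j] *= num / den
    return W, H

def score(W, H, i, j):
    """Reliability of cell (i, j): inner product of the latent vectors."""
    return sum(W[i][t] * H[t][j] for t in range(len(H)))

# Rows: entity tuples (Seoul, Korea), (Busan, Korea), (Tokyo, Japan);
# columns: relations cityOf, capitalOf. 1 = triple present in the KB.
X = [[1, 1],
     [1, 0],
     [1, 1]]
W, H = nmf(X, k=2)
# The unobserved cell (Busan, capitalOf) should score lower than observed ones.
```

Scoring reassembled cells ranks candidate triples for population: high-scoring unobserved cells are proposed as new knowledge, mirroring the abstract's "decompose, then recombine to score reliability" pipeline.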