Artificial Intelligence and Emotional Response: A Study on Reducing Hate Speech through Digital Humans
Jung-Min Kim, Jungwoo Choi, Kyoung-Chin Seo. Digital Contents Society, Journal of Digital Contents Society, Vol.25 No.1, 2024
With the growing importance of digital human technology, demand for human-like conversation has emerged. In this context, the chatbot "Lee Luda" was developed but caused user discomfort owing to its use of hate speech, prompting various studies on hate speech detection. Nevertheless, hate speech remains rampant online. This study addresses the problem from a different angle than existing hate speech detection. The proposed method is a digital human system designed to detect hate speech and convey discomfort to users, thereby facilitating emotional transfer and raising awareness of hate speech. To verify its effectiveness, a survey presenting the digital human's responses to hate speech was conducted, which scored 3.95±0.65 out of 5 points, indicating that the method helped users recognize the problems with hate speech. Based on these findings, the introduced system is expected to be applicable not only to hate speech but also to moral, bias, and criminal issues, and to help users become aware of such problems.
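The core idea of the abstract, routing detected hate speech to a discomfort-expressing reply rather than a neutral answer, can be sketched roughly as follows. The keyword lexicon and the canned responses are illustrative placeholders, not the authors' actual detector or dialogue model:

```python
# Toy sketch: a digital human that conveys discomfort on hate speech
# instead of answering neutrally. Placeholder lexicon, not the paper's model.
HATE_KEYWORDS = {"idiot", "stupid"}  # illustrative only

def detect_hate_speech(utterance: str) -> bool:
    """Toy detector: flags an utterance containing a listed keyword."""
    words = utterance.lower().split()
    return any(w.strip(".,!?") in HATE_KEYWORDS for w in words)

def digital_human_reply(utterance: str) -> str:
    """Convey discomfort on hate speech; otherwise answer normally."""
    if detect_hate_speech(utterance):
        # Emotional transfer: the system expresses discomfort to the user.
        return "That kind of language makes me uncomfortable."
    return "I see. Tell me more."
```

In the paper the detector would be a trained hate-speech classifier and the reply an animated emotional reaction; the branching structure is the same.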
A Photo-Realistic Facial Landmark Dataset Built from Scan-Based Game Characters
Jungwoo Choi, Ji-hoon Kim, Inho Choi, Kyoung-Chin Seo. Digital Contents Society, Journal of Digital Contents Society, Vol.23 No.11, 2022
Deep learning solves problems with datasets that are costly to build. In particular, it is difficult to obtain face data of real people owing to issues such as portrait rights. In this paper, we propose a data collection method using photo-realistic characters that achieves the same performance as existing facial landmark datasets. First, a scan-based character is prepared, and the background environment and camera conditions are implemented with a 3D game engine. Next, the character's facial bones, which drive facial movement, are converted into reference landmarks. Finally, after acquiring data from various angles and environments, we train a facial landmark model and compare it with a baseline model. In our experiments, the model trained with the proposed dataset showed results similar to the baseline model, and even outperformed a model trained on our real-people dataset. Our dataset can thus provide face data with unrestricted expressions and angles, and character-derived landmarks yield ground truth without additional labor, confirming the method's effectiveness and scalability.
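The reason character-derived landmarks need no manual labeling is that each facial bone has a known 3D position, so its 2D ground-truth location is just a camera projection. A minimal sketch with an assumed pinhole camera model (the intrinsics below are illustrative, not the authors' engine settings):

```python
import numpy as np

def project_landmarks(bones_3d: np.ndarray, focal: float,
                      cx: float, cy: float) -> np.ndarray:
    """Project Nx3 camera-space bone positions to Nx2 pixel coordinates
    with a pinhole camera model: u = f*x/z + cx, v = f*y/z + cy."""
    x, y, z = bones_3d[:, 0], bones_3d[:, 1], bones_3d[:, 2]
    u = focal * x / z + cx
    v = focal * y / z + cy
    return np.stack([u, v], axis=1)

# A bone on the optical axis, 2 m in front of a 640x480 camera,
# projects to the image center (320, 240).
bones = np.array([[0.0, 0.0, 2.0]])
pixels = project_landmarks(bones, focal=500.0, cx=320.0, cy=240.0)
```

Rendering the same bones from many camera poses and environments then yields arbitrarily many labeled images for free, which is the scalability the abstract claims.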
Ji-Hoon Kim, Dae-Won Hwang, Kyoung-Chin Seo. Digital Contents Society, Journal of Digital Contents Society, Vol.23 No.1, 2022
Owing to COVID-19, demand for non-face-to-face services has increased, and with it the demand for content using 3D avatar animation. However, 3D animation content is expensive and difficult to produce. In this paper, we propose a script-based animation system for creating content with 3D avatars. We develop tools for generating voice and lip-sync data from text, assigning animation clips to match lines of dialogue, and organizing content scenarios. To combine various animations through scripts, we built an animation set of short motion units for each body part. The resulting system produces avatar content that speaks and moves according to the script's instructions, using scripts that assign animations to lines according to defined gesture steps. A demonstration in which the 3D avatar interacts with users shows that avatar content can be produced efficiently.
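The script format the abstract describes, lines of dialogue each paired with per-body-part animation clips, can be sketched as a simple data structure. The dataclass and clip names are hypothetical illustrations, not the authors' actual tool format:

```python
# Hypothetical sketch of a script entry: spoken text (driving TTS and lip
# sync) plus a mapping from body part to a short animation clip.
from dataclasses import dataclass, field

@dataclass
class ScriptLine:
    text: str                                      # spoken line
    gestures: dict = field(default_factory=dict)   # body part -> clip name

def play(script: list) -> list:
    """Flatten a script into an ordered list of (event, payload) pairs,
    as a runtime might consume them."""
    events = []
    for line in script:
        events.append(("say", line.text))
        for part, clip in sorted(line.gestures.items()):
            events.append(("animate", f"{part}:{clip}"))
    return events

demo = [ScriptLine("Hello!", {"arm": "wave"}),
        ScriptLine("Nice to meet you.", {"head": "nod"})]
```

Keeping clips in short per-body-part units, as the abstract notes, is what lets one script freely combine independent arm, head, and body motions per line.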