RISS Academic Research Information Service


      • KCI-indexed

        A Study on Improving the Emotion Recognition Rate from Facial Expressions Based on User Disposition

        Jong Sik Lee, Dong Hee Shin. Korean Society for Emotion and Sensibility, 2014. Science of Emotion and Sensibility Vol.17 No.1

        Despite the huge potential of practical applications of emotion recognition technologies, their enhancement remains a challenge, mainly due to the difficulty of recognizing emotion. Although not perfectly, human emotions can be recognized from human images and sounds, and emotion recognition has been investigated in extensive studies covering image-based, sound-based, and combined image-and-sound approaches. Recognition through facial expression detection is especially effective because emotions are primarily expressed in the human face. However, differences in user environment and in users' familiarity with the technology can cause significant disparities and errors. To enhance the accuracy of real-time emotion recognition, it is crucial to understand and analyze the users' personality traits that contribute to recognition performance. This study focuses on analyzing users' personality traits and applying them in the emotion recognition system to reduce errors in emotion recognition through facial expression detection and improve the accuracy of the results. In particular, the study offers a practical solution for users with subtle facial expressions or a low degree of emotional expression by providing an enhanced emotion recognition function.
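
        The abstract describes this per-user calibration only at a high level. As a rough sketch of the idea, not the authors' method, the following Python fragment rescales a generic classifier's scores against a per-user neutral baseline; the label set, the expressiveness parameter, and the base_model interface are all hypothetical.

            # Hypothetical sketch: biasing a generic facial-expression classifier
            # with a per-user profile so that subtle expressions are not misread.
            import numpy as np

            EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]  # assumed labels

            class UserCalibratedRecognizer:
                def __init__(self, base_model, expressiveness=1.0):
                    self.base_model = base_model          # returns one score per emotion
                    self.expressiveness = expressiveness  # < 1.0 for low-expressive users

                def calibrate(self, neutral_frames):
                    # Average scores while the user holds a neutral face: their baseline.
                    self.baseline = np.mean([self.base_model(f) for f in neutral_frames], axis=0)

                def predict(self, frame):
                    scores = self.base_model(frame)
                    # Amplify deviation from this user's own neutral baseline.
                    adjusted = self.baseline + (scores - self.baseline) / self.expressiveness
                    adjusted = np.clip(adjusted, 1e-6, None)
                    return dict(zip(EMOTIONS, adjusted / adjusted.sum()))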

      • KCI-indexed

        Two Directions of Buddhist Perception and Aspects of Expressed Emotion in Yi Saek's Poetry and Prose

        전재강. 한국언어문학회 (Korean Language and Literature Society), 2018. 한국언어문학 (Korean Language and Literature) Vol.107

        This paper examines Yi Saek's literary works, poetry and prose, from the viewpoints of Buddhist perception and emotion. First, with respect to Buddhist perception, Yi Saek expressed two kinds of recognition: an official, negative recognition against Buddhism and a personal, positive recognition toward it. Second, with respect to emotion, his poetry presents various kinds: the emotion of split, of longing, of healing, and of delight. The emotions of split and longing stem from the official, negative recognition of Buddhism, while the emotions of healing and delight stem from the personal, positive recognition. Although this study covers Yi Saek's poetry and prose from these two viewpoints, similar themes remain to be studied; future research may examine, for example, the literary works of Lee Jaehyon, Jeong Mongjoo, and Jeong Dojeon.

      • KCI-indexed

        Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

        Miracle Udurume, Angela Caliwag, 임완수, 김귀곤. The Korea Institute of Information and Communication Engineering, 2022. Journal of Information and Communication Convergence Engineering Vol.20 No.3

        Emotion recognition is an essential component of complete interaction between humans and machines. Its difficulty stems from the different types of emotions expressed in several forms, such as visual, sound, and physiological signals. Recent advancements in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored multiple modalities for accurate prediction of emotion; however, studies of real-time implementation are few because of the difficulty of implementing multiple modalities of emotion recognition simultaneously. In this study, we propose an emotion recognition system for real-time implementation. Our model is built with a multithreading block that runs each modality in a separate thread for continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the correctness of the results, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results show that the proposed model recognizes user emotion in real time, and the effectiveness of the multimodalities for emotion recognition was observed. Our multimodal model obtained an accuracy of 80.1%, compared to the unimodal accuracies of 70.9, 54.3, and
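
        The abstract sketches the multithreading design without code. The following minimal Python sketch, with illustrative names and placeholder model functions, shows one way such a design can work: each modality pushes timestamped predictions to a shared queue, and a fusion loop combines the latest ones.

            # Illustrative sketch: one thread per modality, fused via a shared queue.
            import threading, queue, time

            fused_inputs = queue.Queue()

            def modality_worker(name, predict_fn, interval=0.1):
                # predict_fn is a placeholder returning {emotion: probability}.
                while True:
                    fused_inputs.put((name, time.time(), predict_fn()))
                    time.sleep(interval)

            def fusion_loop(weights):
                latest = {}
                while True:
                    name, ts, probs = fused_inputs.get()
                    latest[name] = probs
                    if len(latest) == len(weights):   # every modality has reported once
                        fused = {e: sum(weights[m] * latest[m][e] for m in weights)
                                 for e in next(iter(latest.values()))}
                        print(max(fused, key=fused.get))  # current fused emotion

            # threading.Thread(target=modality_worker, args=("face", face_model),
            #                  daemon=True).start()  # likewise for "voice" and "eeg"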

      • KCI-indexed

        Neuro-facial Fusion for Emotion AI: Improved Federated Learning GAN for Collaborative Multimodal Emotion Recognition

        D. Saisanthiya, P. Supraja. The Institute of Electronics and Information Engineers, 2024. IEIE Transactions on Smart Processing & Computing Vol.13 No.1

        In the context of artificial intelligence technology, emotion recognition (ER) plays numerous roles in human lives. On the other hand, the emotion recognition techniques most currently used perform poorly, which limits their widespread use in practical applications. To reduce this issue, Collaborative Multimodal Emotion Recognition through an Improved Federated Learning Generative Adversarial Network (MER-IFLGAN) was proposed for facial expressions and electroencephalogram (EEG) signals. Multi-resolution binarized image feature extraction (MBIFE) was first used for facial expression features; EEG features were extracted using the Dwarf Mongoose Optimization (DMO) algorithm; finally, IFLGAN completes the emotion recognition task. The proposed technique was simulated in MATLAB. It achieved 25.45% and 19.71% higher accuracy and 32.01% and 39.11% shorter average processing time than existing models, namely the EEG-based Cross-subject and Cross-modal Model for Multimodal Emotion Recognition (MERCSCM) and the Long Short-Term Memory model for EEG Emotion Recognition (MERLSTM), respectively. The experimental results show that complementing EEG signals with facial expression features can identify four types of emotion: happy, sad, fear, and neutral. Furthermore, the IFLGAN classifier can enhance the capacity of multimodal emotion recognition.
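
        The abstract gives no implementation detail for IFLGAN itself, so no faithful sketch is possible; what can be shown is the standard federated-averaging step that collaborative federated schemes of this kind build on. The function names and the unweighted average below are assumptions, not the paper's algorithm.

            # Sketch of plain federated averaging: clients train locally on private
            # data and the server averages their weight tensors element-wise.
            import numpy as np

            def federated_round(global_weights, clients, local_train):
                updates = []
                for client_data in clients:
                    local_w = [w.copy() for w in global_weights]
                    updates.append(local_train(local_w, client_data))  # local epochs
                return [np.mean([u[i] for u in updates], axis=0)
                        for i in range(len(global_weights))]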

      • Speech and emotion recognition using a multi-output model

        Min Dong Jin, Jongho Won, Deok-Hwan Kim. 한국차세대컴퓨팅학회 (Korean Society for Next Generation Computing) Conference Proceedings, 2022, Vol.2022 No.10

        Voice, the primary medium of human communication, delivers not only verbal information but also emotional information through characteristics such as intonation, pitch, and the surrounding environment. Many current studies focus on speech and emotion recognition for human-computer interaction and develop deep neural network models by extracting various frequency characteristics of speech. Representative speech-based deep learning tasks include speech recognition, namely speech-to-text or automatic speech recognition, and speech emotion recognition. Both have been developed for a long time, but multi-output algorithms that process them simultaneously in parallel are rare. This paper introduces a multi-output model that recognizes speech and emotion from one voice, on the premise that simultaneously understanding language and emotion, the most critical information in a human voice, will significantly help human-computer interaction. The model showed no significant difference from training the language and emotion recognition models separately, with a word error rate of 6.59% in the speech recognition part and an average accuracy of 79.67% in the emotion recognition part.
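
        The abstract does not specify the architecture. As a generic illustration of a shared-encoder multi-output design, not the authors' model, the following Keras sketch attaches a per-frame head for transcription and a pooled head for emotion to one acoustic encoder; the feature size, label counts, and losses are assumptions (a real ASR head would train with CTC loss).

            # Generic multi-output sketch: one shared encoder, two task heads.
            import tensorflow as tf
            from tensorflow.keras import layers

            inputs = layers.Input(shape=(None, 80))        # e.g. log-mel frames (assumed)
            shared = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(inputs)

            asr_head = layers.Dense(30, activation="softmax", name="asr")(shared)
            emo_pool = layers.GlobalAveragePooling1D()(shared)
            emo_head = layers.Dense(7, activation="softmax", name="emotion")(emo_pool)

            model = tf.keras.Model(inputs, [asr_head, emo_head])
            model.compile(optimizer="adam",
                          loss={"asr": "categorical_crossentropy",  # CTC in practice
                                "emotion": "categorical_crossentropy"})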

      • KCI-indexed

        A Study on Factors Influencing Middle School Students' Emotion Recognition Ability

        Kang Dayoun, Park Jeong Yoon. 한국아동권리학회 (Korean Council for Children's Rights), 2024. 아동과 권리 (Children and Rights) Vol.28 No.2

        Objectives: This study explores the factors that affect adolescents' ability to recognize emotions. Method: Data from the 4th wave (2021) of the 2018 Korean Children and Youth Panel Survey were used. The participants were 2,132 first-grade middle school students, and the data were analyzed with SPSS 26.0. Results: Both parental factors and parent-child interaction factors significantly influenced adolescents' emotion recognition ability. First, among parental factors, both positive and negative parenting attitudes had significant effects, and parents' own emotion recognition ability emerged as the most influential factor. Second, among interaction factors, time spent conversing with parents and time spent together with parents were influential. Conclusions: The strongest influence on adolescents' emotion recognition ability is parents' emotion recognition ability, which suggests that parents need to recognize its importance and make efforts to develop and enhance this skill.

      • KCI-indexed

        A Study on Seeking an Integrated Model through a Review of Emotion Recognition Methods

        Mi Sook Park, Ji Eun Park, Jin Hun Sohn. Korean Society for Emotion and Sensibility, 2011. Science of Emotion and Sensibility Vol.14 No.1

        Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions or physiological responses. This study reviewed three representative emotion recognition methods, each grounded in a psychological theory of emotion. First, the literature on emotion recognition from facial expressions, supported by Darwin's theory, was reviewed. Second, methods based on physiological changes, which rely on James's theory, were examined. Last, emotion recognition based on multimodality (i.e., combinations of signals from the face, dialogue, posture, or the peripheral nervous system), supported by both Darwin's and James's theories, was reviewed. For each method, research findings were examined along with the theoretical background on which it relies. The review proposes an integrated model of emotion recognition methods to advance the way emotion is recognized. The integrated model suggests that emotion recognition methods need to include other physiological signals, such as brain responses or face temperature, need to be based on a multidimensional model, and need to take into consideration cognitive appraisal factors during emotional experience.

      • KCI-indexed

        The Effects of an Emotion Recognition Program on Young Children's Emotional Intelligence, Emotional Perspective-Taking, and Social Ability

        Kum-sun Whang, Han-ik Cho. Korean Educational Psychology Association, 2016. 敎育心理硏究 (Korean Journal of Educational Psychology) Vol.30 No.2

        The purpose of the study is to analyze the effects of an emotion recognition program on young children's emotional intelligence, emotional perspective-taking, and social ability, and to test for interaction effects between group and gender. The subjects were 81 four- and five-year-old children, 41 in the experimental group and 40 in the control group. The instruments were measures of emotional intelligence, emotional perspective-taking, and pro-social behavior completed by the children, and measures of emotional intelligence and social competence rated by the teachers in charge of them. The emotion recognition program was developed by the researchers based on the ADDIE model, with themes and activities built around the ten basic emotions suggested by Izard: joy, distress, anger, fear, surprise, interest, shame, guilt, disgust, and contempt. In the child assessments, the program increased emotion recognition ability, emotional perspective-taking toward self and others, and pro-social behavior. In the teacher assessments, it increased recognition and expression of one's own emotions, recognition of and consideration for others' emotions, utilization of one's own emotions, and interpersonal relationships with teachers and peers among the components of emotional intelligence, and it increased emotionality within social competence. There were no interaction effects between group and gender. Based on these results, connections to prior research, educational implications, and limitations are discussed.

      • KCI-indexed

        A Study on Similarity Verification of Emotion Computing Using Image Hue, Brightness, and Saturation

        Yean-Ran Lee. Korean Society of Cartoon and Animation Studies, 2015. 만화애니메이션연구 (Cartoon and Animation Studies) No.40

        People's emotional perceptions of images vary with individual disposition. Emotion computing research that tries to evaluate emotion recognition objectively is currently very active, but existing emotion computing studies have many practical problems: first, their emotion recognition is subjective and inaccurate; second, the correlations within emotion recognition are unclear. This study therefore tests the regularities of image emotion and controls them with an emotion computing method. Its purpose is to apply an image emotion computing system that quantifies and objectifies emotion recognition and to compare its output with human emotion for degree of similarity. The key feature of the system is that it computes emotion recognition in numerical, digital form. As theoretical background, it adopts James A. Russell's Core Affect model, which digitizes emotion along two axes: pleasure (X axis: pleasure vs. displeasure) and tension (Y axis: tension vs. relaxation). The axes are divided into 16 associated representative emotions applied in the computation: very happy, excited, elated, happy, content, calm, relaxed, quiet, tired, lethargic, depressed, sad, angry, stressed, anxious, and tense. The procedure uses the color elements at the heart of the computing formula, hue, brightness, and saturation, as emotion attribute elements; each is weighted by importance, and the result is measured as scores on the pleasure (X) and tension (Y) axes. The crossed emotion point is then expanded into an emotion circle, and the representative emotions it contains are ranked by size to select the top five principal representative emotions. Human image emotion is likewise scored on the 16 representative emotions and reduced to a top five. Comparing the system's principal representative emotions with those of human emotion recognition, similarity was verified by the number of matching emotions: the average concordance rate was 51%, with 2.5 representative emotions matching human recognition. This study measured the degree of similarity between the emotion computing method and human emotion recognition and presented objective evaluation criteria for the emotion formula. Future research should pursue higher concordance rates and continue studying the weights of the emotion formula.
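
        The abstract defines the pipeline (weighted hue/brightness/saturation, a point on the pleasure and tension axes, a top five among 16 representative emotions) but not its formulas. The Python sketch below illustrates the flow with made-up weights and an assumed angular layout of the 16 emotions; it approximates the emotion-circle step by ranking emotions by angular closeness, which is not the paper's exact rule.

            # Illustrative core-affect computation: color attributes -> (X, Y) point
            # -> nearest representative emotions. All weights/positions are assumed.
            import math

            EMOTIONS = ["very happy", "excited", "elated", "happy", "content", "calm",
                        "relaxed", "quiet", "tired", "lethargic", "depressed", "sad",
                        "angry", "stressed", "anxious", "tense"]
            ANGLES = [i * 2 * math.pi / len(EMOTIONS) for i in range(len(EMOTIONS))]

            def core_affect_point(hue, brightness, saturation, w=(0.5, 0.3, 0.2)):
                # Hypothetical weighted mapping of attributes in [0, 1] to [-1, 1].
                x = w[0] * (brightness * 2 - 1) + w[1] * (saturation * 2 - 1)  # pleasure
                y = w[2] * (saturation * 2 - 1) + w[0] * (hue * 2 - 1)         # tension
                return x, y

            def top_emotions(x, y, k=5):
                theta = math.atan2(y, x) % (2 * math.pi)
                def dist(a):
                    return min(abs(a - theta), 2 * math.pi - abs(a - theta))
                ranked = sorted(range(len(EMOTIONS)), key=lambda i: dist(ANGLES[i]))
                return [EMOTIONS[i] for i in ranked[:k]]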

      • KCI-indexed (candidate)

        The Effects of Mindful Yoga on Body Awareness, Emotional Awareness, Emotional Expression, and Emotion Regulation

        Jin-a Jang. 한국명상학회 (Korean Society of Meditation), 2018. 한국명상학회지 (Korean Journal of Meditation) Vol.8 No.2

        This study examined the effects that mindful yoga, which puts major emphasis on body awareness, exerted on body awareness and emotion. It was an experimental study of the effects of mindful yoga on body awareness, emotional awareness, emotional expression, and emotion regulation. The experimental group practiced mindful yoga for 8 weeks (twice a week, 70 minutes per session), while the control group took general yoga programs at a community center (twice a week, 60 minutes per session). After excluding dropouts and insincere respondents, data from 34 participants (17 per group) were analyzed. The measures were the K-MAIA (Kim, Sim, & Cho, 2016), the Trait Meta-Mood Scale (Lee & Lee, 1997), the Emotional Expressiveness Scale (Han, 1997), and the Emotional Regulation Scale (Im & Jang, 2003), administered before and after the program. Compared to the control group, the experimental group showed marginally significant increases in body awareness, emotional awareness, and emotional expression. Among the subfactors, noticing and mind-body connection awareness (body awareness) and clarity of emotional perception (emotional awareness) increased significantly, while the active and support-seeking styles (emotion regulation) increased marginally. The study verified that mindful yoga can boost body awareness and has a positive effect on emotional awareness, emotional expression, and emotion regulation.
