RISS Academic Research Information Service

      • A Survey on Synthetic Data Generation Approaches

        Udurume Miracle(미라클), Angela Caliwag(안젤라), Wansu Lim(임완수) 한국통신학회 2021 한국통신학회 학술대회논문집 Vol.2021 No.6

        Experiment-based synthetic data has proven to be very effective in simulation models, scientific studies, and research. Using several approaches, both flexibility and accuracy can be obtained. Synthetic data generation is very useful in cases of limited data availability, including cases in which data exists but is kept confidential by private data collectors and is not made available for research purposes. This paper presents a brief survey of synthetic data generation and outlines some approaches that can be used to carry it out.
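        As a concrete illustration of the kind of approach such a survey covers, the sketch below fits a simple parametric model (a multivariate Gaussian, chosen here as an assumption rather than taken from the paper) to a small tabular dataset and samples synthetic rows from it; all shapes and values are placeholders.

```python
import numpy as np

# Minimal sketch (not from the paper): one generic synthetic data generation
# approach -- fit a multivariate Gaussian to the real data and sample from it.
# The "real" data below is a random placeholder.

rng = np.random.default_rng(seed=0)

# Pretend "real" tabular data: 200 rows x 4 numeric features.
real_data = rng.normal(loc=[1.0, 5.0, -2.0, 0.5],
                       scale=[0.3, 1.2, 0.8, 0.1],
                       size=(200, 4))

# Fit the generative model: empirical mean and covariance.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample as many synthetic rows as needed.
synthetic_data = rng.multivariate_normal(mean, cov, size=1000)

print("real mean      :", np.round(mean, 3))
print("synthetic mean :", np.round(synthetic_data.mean(axis=0), 3))
```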

      • KCI-indexed

        Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

        Miracle Udurume, Angela Caliwag, 임완수, 김귀곤 한국정보통신학회 2022 Journal of Information and Communication Convergence Engineering Vol.20 No.3

        Emotion recognition is an essential component of complete interaction between humans and machines. The challenges of emotion recognition stem from the different forms in which emotions are expressed, such as visual, sound, and physiological signals. Recent advancements in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored the use of multiple modalities for accurate prediction of emotion; however, the number of studies regarding real-time implementation is limited because of the difficulty of implementing multiple emotion recognition modalities simultaneously. In this study, we proposed an emotion recognition system for real-time implementation. Our model was built with a multithreading block that runs each modality in a separate thread for continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the correctness of the results, we compared the performance accuracy of unimodal and multimodal emotion recognition in real time. The experimental results showed that the proposed model recognizes user emotion in real time. In addition, the effectiveness of the multimodalities for emotion recognition was observed. Our multimodal model obtained an accuracy of 80.1%, compared to the unimodal models, which obtained accuracies of 70.9, 54.3, and
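        A minimal sketch of the multithreading idea described above, written as an assumption rather than the authors' code: each modality (face, voice, EEG) runs in its own thread and keeps publishing its latest class probabilities, while a fusion loop reads the shared state. The modality names, the four-class label set, and the dummy classifiers are all illustrative placeholders.

```python
import threading
import time
import random

EMOTIONS = ["happy", "sad", "angry", "neutral"]
latest = {}                 # modality -> latest probability vector
lock = threading.Lock()
stop_event = threading.Event()

def modality_worker(name: str, period_s: float) -> None:
    """Stand-in for a per-modality recognizer (face / voice / EEG)."""
    while not stop_event.is_set():
        scores = [random.random() for _ in EMOTIONS]   # dummy model output
        total = sum(scores)
        probs = [s / total for s in scores]
        with lock:
            latest[name] = probs                       # publish latest result
        time.sleep(period_s)                           # each modality has its own rate

def fuse_once() -> str:
    """Average the available modality outputs and pick the top emotion."""
    with lock:
        current = list(latest.values())
    if not current:
        return "unknown"
    avg = [sum(col) / len(current) for col in zip(*current)]
    return EMOTIONS[avg.index(max(avg))]

threads = [threading.Thread(target=modality_worker, args=(m, p), daemon=True)
           for m, p in [("face", 0.03), ("voice", 0.10), ("eeg", 0.05)]]
for t in threads:
    t.start()

for _ in range(5):          # short demo loop standing in for the real-time loop
    time.sleep(0.2)
    print("fused emotion:", fuse_once())

stop_event.set()
```

        The per-modality threads never block each other; the fusion step only reads whatever each thread last published, which is one simple way to keep the modalities continuously synchronized.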

      • KCI-indexed

        Real-time Multimodal Emotion Recognition Based on Multithreaded Weighted Average Fusion

        Miracle Udurume, Erick C. Valverde, Angela Caliwag, 김상호, 임완수 대한인간공학회 2023 大韓人間工學會誌 Vol.42 No.5

        Objective: The previous study explored the use of multimodality for accurate emotion predictions. However, limited research has addressed real-time implementation due to the challenges of simultaneous emotion recognition. To tackle this issue, we propose a real-time multimodal emotion recognition system based on multithreaded weighted average fusion. Background: Emotion recognition stands as a crucial component in human-machine interaction. Challenges arise in emotion recognition due to the diverse expressions of emotions across various forms such as visual cues, auditory signals, text, and physiological responses. Recent advances in the field highlight that combining multimodal inputs, such as voice, speech, and EEG signals, yields superior results compared to unimodal approaches. Method: We have constructed a multithreaded system to facilitate the simultaneous utilization of diverse modalities, ensuring continuous synchronization. Building upon previous work, we have enhanced our approach by incorporating weighted average fusion alongside the multithreaded system. This enhancement allows us to predict emotions based on the highest probability score. Results: Our implementation demonstrated the ability of the proposed model to recognize and predict user emotions in real-time, resulting in improved accuracy in emotion recognition. Conclusion: This technology has the potential to enrich user experiences and applications by enabling real-time understanding and response to human emotions. Application: The proposed real-time multimodal emotion recognition system holds promising applications in various domains, including human-computer interaction, healthcare, and entertainment.
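        A minimal sketch of weighted average fusion under assumed per-modality weights and an assumed four-class label set (none of the numbers come from the paper): each modality emits a probability vector, the vectors are combined with the weights, and the emotion with the highest fused probability is returned.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def weighted_average_fusion(probs_by_modality: dict, weights: dict) -> str:
    """probs_by_modality maps modality name -> probability vector over EMOTIONS."""
    fused = np.zeros(len(EMOTIONS))
    total_weight = 0.0
    for modality, probs in probs_by_modality.items():
        w = weights.get(modality, 0.0)
        fused += w * np.asarray(probs)
        total_weight += w
    fused /= total_weight                      # normalized weighted average
    return EMOTIONS[int(np.argmax(fused))]     # highest fused probability wins

# Hypothetical example: face is trusted most, EEG least.
outputs = {
    "face":  [0.70, 0.10, 0.10, 0.10],
    "voice": [0.30, 0.40, 0.20, 0.10],
    "eeg":   [0.25, 0.25, 0.25, 0.25],
}
weights = {"face": 0.5, "voice": 0.3, "eeg": 0.2}

print(weighted_average_fusion(outputs, weights))   # -> "happy"
```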

      • KCI-indexed

        Synthetic Data Generation Using GAN for RUL Prediction of Supercapacitors

        Miracle Udurume, Chigozie Uzochukwu Udeogu, 안젤라, 임완수 한국통신학회 2022 韓國通信學會論文誌 Vol.47 No.3

        The remaining useful life (RUL) prediction of supercapacitors is an important part of a supercapacitor management system. To accurately predict the RUL of a supercapacitor, a large amount of capacity data is required, which can be difficult to acquire due to privacy restrictions and limited access. Previous works have employed deep learning models to generate data synthetically. However, a prerequisite for the success of these models is their ability to preserve the temporal dynamics of the data. This paper presents a generative adversarial network (GAN) for synthetic data generation and a long short-term memory (LSTM) network for accurate RUL prediction. First, the GAN model is employed for synthetic data generation, and the LSTM is then used for RUL prediction. We show that the GAN model is capable of preserving the temporal dynamics of the original data and also demonstrate that the generated data can be used to carry out RUL prediction accurately. Our proposed GAN model achieved an accuracy of 85% after 500 epochs, and the LSTM model used with the generated dataset achieved an RMSE of 0.29. The overall results show that synthetic data can be used to achieve excellent performance for RUL prediction.
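        A minimal sketch of the two-stage idea (a GAN that generates synthetic capacity sequences, followed by an LSTM regressor for RUL), written in PyTorch as an assumption rather than the authors' model; sequence length, layer sizes, training length, and the random stand-in for measured capacity curves are all placeholders.

```python
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM = 50, 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, SEQ_LEN), nn.Sigmoid(),      # capacity values scaled to [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(SEQ_LEN, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),                          # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, SEQ_LEN)           # placeholder for measured capacity curves

for step in range(200):
    # Discriminator step: push real toward label 1, fake toward label 0.
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    noise = torch.randn(32, NOISE_DIM)
    g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated sequences can then feed an LSTM RUL regressor, e.g.:
class RULPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):                      # x: (batch, SEQ_LEN, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # RUL estimate from last hidden state

synthetic = generator(torch.randn(8, NOISE_DIM)).unsqueeze(-1)  # (8, SEQ_LEN, 1)
print(RULPredictor()(synthetic).shape)                          # torch.Size([8, 1])
```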

      • Generative Adversarial Network with Face Alignment for Face Generation

        Adib Kamali, Udurume Miracle, Udeogu Chigozie Uzochukwu, Angela Caliwag, Wansu Lim 한국통신학회 2021 한국통신학회 학술대회논문집 Vol.2021 No.11

        Face generation is extensively used to increase the size of face image datasets. In the face generation field, generative adversarial networks (GANs) have shown remarkable success in face image generation. However, most existing methods only generate face images from random noise and cannot generate faces according to face alignment, which causes GANs to produce poor-quality face images when unaligned face images are used. In this paper, we propose GAN-based face generation that takes face alignment into account. In detail, the original face images, which are not always aligned, are fed to a face alignment module, and noise is then added to the aligned face images. The noisy aligned images are used as input to the GAN-based image generator. The generator and discriminator are trained to optimize face generation performance. Based on an extensive experimental study, we present an analysis of face alignment and of the face generation results with and without considering face alignment.
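        A minimal sketch of the described data flow (alignment first, then additive noise, then the GAN generator), written as an assumption rather than the authors' implementation; the alignment stub, the toy fully connected generator, and the image size are placeholders, and a real system would use a landmark-based aligner.

```python
import torch
import torch.nn as nn

IMG = 64  # assumed square image size

def align_face(image: torch.Tensor) -> torch.Tensor:
    """Placeholder for the face alignment module (crop/rotate to a canonical pose)."""
    return image  # identity stand-in

generator = nn.Sequential(           # toy generator over flattened images
    nn.Linear(IMG * IMG, 256), nn.ReLU(),
    nn.Linear(256, IMG * IMG), nn.Tanh(),
)

def generate_from(image: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    aligned = align_face(image)                                  # 1) align first
    noisy = aligned + noise_scale * torch.randn_like(aligned)    # 2) add noise
    return generator(noisy.flatten(1)).view_as(image)            # 3) GAN generator

face = torch.rand(1, IMG, IMG)                 # placeholder unaligned face image
print(generate_from(face).shape)               # torch.Size([1, 64, 64])
```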
