Progression-Preserving Dimension Reduction for High-Dimensional Sensor Data Visualization
윤현진, Cyrus Shahabi, Carolee J. Winstein, 강종현 · Electronics and Telecommunications Research Institute (ETRI) · 2013 · ETRI Journal Vol.35 No.5
This letter presents Progression-Preserving Projection, a dimension reduction technique that finds a linear projection mapping a high-dimensional sensor dataset into a two- or three-dimensional subspace with a property that is particularly useful for visual exploration. As a demonstration of its effectiveness as a tool for visual exploration and diagnosis, we empirically evaluate the proposed technique on a dataset acquired from our own virtual-reality-enhanced ball-intercepting training system, designed to promote the upper-extremity movement skills of individuals recovering from stroke-related hemiparesis.
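The abstract above only states that the method is a linear projection into a 2-D or 3-D subspace; the actual progression-preserving objective is not given. As a minimal illustration of linear dimension reduction for sensor-data visualization, the sketch below uses plain PCA via SVD as a stand-in, not the authors' method:

```python
import numpy as np

def linear_projection_2d(X):
    """Project high-dimensional sensor data into 2-D with a linear map.

    Plain PCA is used here purely as an illustrative stand-in; the
    letter's Progression-Preserving Projection optimizes a different
    objective whose details are not given in the abstract.
    """
    Xc = X - X.mean(axis=0)                    # center the data
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:2].T                               # d x 2 projection matrix
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # 100 samples, 20 sensor channels
Y, W = linear_projection_2d(X)
print(Y.shape)  # (100, 2)
```

Any projection of this form reduces each sensor reading to a 2-D point, so an entire recording session can be inspected as a trajectory on screen.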
A Study on the Applicability of Weakly Coupled Atmosphere-Ocean Data Assimilation in a Global Forecast Model
윤현진, 박혜선, 김범수, 박정현, 임정옥, 부경온, 강현석 · Korean Meteorological Society · 2019 · Atmosphere Vol.29 No.2
Weather forecast systems have generally been run with prescribed ocean conditions. Since coupling between atmospheric and oceanic processes is widely known to produce consistent initial conditions at all time scales and thereby improve forecast skill, many attempts have been made to apply data assimilation to coupled models. In this study, we implemented a weakly coupled data assimilation (WCDA) system in a low-horizontal-resolution global NWP model for coupled forecasts with uncoupled initialization, following the WCDA system at the Met Office. The experiment was carried out for the evolution of a typhoon in 2017. The air-sea exchange process produces SST cooling and has a substantial impact on the tendency of central pressure changes in the decaying phase of the typhoon, although the central pressure is underestimated. Coupled data assimilation is a challenging new area requiring further work, but it offers the potential to improve the representation of air-sea feedback processes on NWP timescales and ultimately to contribute to forecast accuracy.
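The defining structure of weakly coupled DA, as described above, is that the forecast step is coupled while each component is analyzed separately against its own observations. The toy sketch below illustrates only that structure; the scalar states, fixed gains, and coupling constant are hypothetical stand-ins for the real NWP system:

```python
def weakly_coupled_da_step(x_atm, x_ocn, y_atm, y_ocn,
                           gain_atm=0.5, gain_ocn=0.5,
                           coupling=0.1, steps=4):
    """One toy weakly coupled DA cycle (illustrative only).

    In WCDA, the analysis updates are uncoupled (no cross-component
    increment), but the subsequent forecast is coupled, so each
    component's analysis still influences the other through the model.
    """
    # Separate (uncoupled) analysis updates, one per component.
    x_atm = x_atm + gain_atm * (y_atm - x_atm)
    x_ocn = x_ocn + gain_ocn * (y_ocn - x_ocn)
    # Coupled forecast: the components exchange a flux at every step.
    for _ in range(steps):
        flux = coupling * (x_ocn - x_atm)   # e.g., an SST-driven heat flux
        x_atm += flux
        x_ocn -= flux
    return x_atm, x_ocn

a, o = weakly_coupled_da_step(1.0, 3.0, y_atm=1.2, y_ocn=2.8)
print(a, o)
```

Because the exchange is a flux, the combined quantity is conserved during the coupled forecast, while the two components relax toward each other, a crude analogue of the SST cooling feedback noted in the abstract.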
Feeding Characteristics of Chaetognaths (Sagitta crassa and S. nagae) in the Yellow Sea Inferred from Gut Contents and Fatty Acid Composition
윤현진, 고아라, 강정훈, 최중기, 주세종 · Korea Institute of Ocean Science and Technology (KIOST) · 2016 · Ocean and Polar Research Vol.38 No.1
To understand the diet of chaetognaths, the gut contents and fatty acid trophic markers (FATMs) of Sagitta crassa and S. nagae, the most predominant chaetognath species in the Yellow Sea, were analyzed. Microscopic examination of the gut contents of the two species revealed that copepods are the major dietary component (> 70% of gut contents), and there were no significant changes in the gut contents of the two species between the spring and summer seasons. Although 16:0, 20:5(n-3) (eicosapentaenoic acid), and 22:6(n-3) (docosahexaenoic acid), which are known as phytoplankton FA markers, were the most dominant fatty acids in both chaetognath species, the detection of the copepod FA markers 20:1(n-9) (gadoleic acid) and 22:1(n-11) (cetoleic acid) provided evidence that their food sources include copepods. These results suggest that S. crassa and S. nagae are carnivores that feed mainly on copepods in the Yellow Sea.
Improved Two-Phase Framework for Facial Emotion Recognition
윤현진, 박상욱, 이용귀, 한미경, 장종현 · Electronics and Telecommunications Research Institute (ETRI) · 2015 · ETRI Journal Vol.37 No.6
Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention over the last decade due to its wide variety of applications. Current computer-based two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in any multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework in which the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. Experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
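The three components named above (multiple AU detection, fusion, and AU-to-emotion mapping) can be sketched with majority voting over detector outputs and a rule-based emotion lookup. This is a structural illustration only: the `EMOTION_RULES` table and the voting fusion are hypothetical stand-ins, not the paper's learned detectors or mapping:

```python
from collections import Counter

# Hypothetical AU-to-emotion rules, loosely following common FACS-style
# conventions; the paper's actual AU-to-emotion mapping is not specified
# in the abstract.
EMOTION_RULES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},
    "sadness": {1, 4, 15},
}

def fuse_au_detections(detector_outputs):
    """Combine per-detector AU decisions by majority vote.

    detector_outputs: list of sets, each holding the AU numbers that one
    detector judged present in the input image.
    """
    votes = Counter(au for out in detector_outputs for au in out)
    majority = len(detector_outputs) / 2
    return {au for au, n in votes.items() if n > majority}

def infer_emotion(aus):
    """Map the fused AU set to the emotion with the largest rule overlap."""
    return max(EMOTION_RULES, key=lambda e: len(EMOTION_RULES[e] & aus))

# Three detectors disagree slightly; fusion suppresses the spurious AU 4.
detectors = [{6, 12}, {6, 12, 4}, {12}]
aus = fuse_au_detections(detectors)
print(aus, infer_emotion(aus))  # {12, 6} happiness
```

The fusion step is where the error-accumulation problem is addressed: an AU reported by only one detector does not survive the group decision, so a single detector's mistake is less likely to propagate into the emotion phase.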