Design of a Dangerous Sound Detection Engine for a Wearable Device for the Hearing Impaired
변성우(Sung-Woo Byun), 이석필(Seok-Pil Lee). The Korean Institute of Electrical Engineers, 2016. The Transactions of the Korean Institute of Electrical Engineers, Vol. 65, No. 7
Hearing-impaired persons are exposed to danger since they cannot be aware of many dangerous situations such as fire alarms, car horns and so on. Therefore, they need haptic or visual information when they encounter dangerous situations. In this paper, we design a dangerous sound detection engine for the hearing impaired. We consider four dangerous indoor situations: the sound of a boiling kettle, a fire alarm, a doorbell and a phone ringing. For outdoors, two dangerous situations, a car horn and the siren of an emergency vehicle, are considered. For testing, six data sets are collected from these six situations. We extract LPC, LPCC and MFCC feature vectors from the collected data and compare the vectors for feasibility. Finally, we design a matching engine using an artificial neural network and perform classification tests. We perform the classification tests three times, considering use outdoors and indoors. The test results show the feasibility of dangerous sound detection.
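The front end of such an engine (MFCC extraction from framed audio, before any LPC/LPCC comparison or neural-network matching) can be sketched minimally in NumPy. The frame sizes, filter counts, and function names below are illustrative assumptions, not the paper's actual implementation; the classifier stage is omitted for brevity.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale (minimal sketch).
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, center, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, center):
            fb[i - 1, k] = (k - lo) / max(center - lo, 1)
        for k in range(center, hi):
            fb[i - 1, k] = (hi - k) / max(hi - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_filters=26, n_coeffs=13):
    # Frame the signal, apply a Hamming window, take the power spectrum.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    n_fft = 512
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies -> log -> DCT-II gives cepstral coefficients.
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    log_e = np.log(energies + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.arange(n_coeffs)[:, None] * (2 * n + 1) / (2 * n_filters))
    return log_e @ dct.T  # shape: (n_frames, n_coeffs)
```

With 25 ms frames and a 10 ms hop at 16 kHz (as assumed here), one second of audio yields 98 frames of 13 coefficients each, which would then feed the matching engine.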
Selection of an Optimal EEG Channel for Emotion Analysis According to Music Listening Using Stochastic Variables
변성우(Sung-Woo Byun), 이소민(So-Min Lee), 이석필(Seok-Pil Lee). The Korean Institute of Electrical Engineers, 2013. The Transactions of the Korean Institute of Electrical Engineers, Vol. 62, No. 11
Recently, research on the relationship between emotional states and musical stimuli has been increasing. In many previous works, data sets from all extracted channels are used for pattern classification, but these methods suffer from computational complexity and inaccuracy. This paper proposes the selection of an optimal EEG channel that efficiently reflects the emotional state during music listening by analyzing stochastic feature vectors. This makes EEG pattern classification relatively simple by reducing the number of data sets to process.
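One common way to rank channels by how well their stochastic features separate emotion classes is a Fisher-style ratio of between-class to within-class variance. The sketch below is one such heuristic, with all names and array shapes assumed for illustration; the paper's exact selection criterion may differ.

```python
import numpy as np

def fisher_score(features, labels):
    # features: (n_trials, n_channels) stochastic feature values per channel;
    # labels:   (n_trials,) emotion class per trial.
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in classes:
        fc = features[labels == c]
        between += len(fc) * (fc.mean(axis=0) - overall_mean) ** 2
        within += ((fc - fc.mean(axis=0)) ** 2).sum(axis=0)
    # High score = class means far apart relative to within-class spread.
    return between / (within + 1e-12)

def select_channels(features, labels, k):
    # Keep the k channels with the highest Fisher score.
    scores = fisher_score(features, labels)
    return np.argsort(scores)[::-1][:k]
```

Classifying on only the top-scoring channels is what reduces the data sets to process, as the abstract describes.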
Emotion Recognition Using Tone and Tempo Feature Vectors Based on Voice Signals for IoT
변성우(Sung-Woo Byun), 이석필(Seok-Pil Lee). The Korean Institute of Electrical Engineers, 2016. The Transactions of the Korean Institute of Electrical Engineers, Vol. 65, No. 1
In the Internet of Things (IoT) area, research on recognizing human emotion has been increasing recently. Generally, multi-modal features such as facial images, bio-signals and voice signals are used for emotion recognition. Among these, voice signals are the most convenient to acquire. This paper proposes an emotion recognition method using tone and tempo features based on voice. For this, we build voice databases from broadcast media contents. Emotion recognition tests are carried out using tone and tempo features extracted from the voice databases. The results show a noticeable improvement in accuracy compared to conventional methods that use only pitch.
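Tone (fundamental frequency) and tempo can be estimated from a voice signal with simple time-domain heuristics. The autocorrelation pitch tracker and energy-envelope tempo proxy below are illustrative assumptions, not the paper's feature extractors.

```python
import numpy as np

def pitch_autocorr(frame, sr, fmin=60, fmax=400):
    # Fundamental frequency as the autocorrelation peak within the
    # typical voice range [fmin, fmax] Hz.
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // fmax, sr // fmin
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def tempo_estimate(signal, sr, hop=512):
    # Crude tempo proxy: count rises in the frame-energy envelope
    # and convert the onset rate to events per minute.
    n = len(signal) // hop
    energy = np.array([np.sum(signal[i * hop:(i + 1) * hop] ** 2)
                       for i in range(n)])
    diff = np.maximum(np.diff(energy), 0.0)
    onsets = np.sum(diff > diff.mean() + diff.std())
    duration = len(signal) / sr
    return onsets * 60.0 / duration
```

Per-utterance statistics of these two values (mean, range, variance) would then form the tone/tempo feature vector fed to a classifier.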