조경원,남경원,이준창,홍성화,See Youn Kwon,Jonghee Han,Dongwook Kim,이상민,김인영 대한의용생체공학회 2014 Biomedical Engineering Letters (BMEL) Vol.4 No.2
Purpose Recently, some research groups have suggested the possibility of using the broadband beamformer (BBF) algorithm for hearing aid applications. However, no previous reports have quantitatively evaluated the relative performance of conventional differential microphone (DM)-based frequency-invariant beamforming algorithms against the broadband beamformer. Methods In this study, we evaluated the performance of DM-based beamformer algorithms and the BBF algorithm in vitro using four objective indices: signal-to-noise ratio (SNR), perceptual evaluation of speech quality (PESQ), noise distortion (Cbak), and weighted spectral slope (WSS), in a non-reverberant environment. Results The experimental results showed that the DM-based algorithms were superior in terms of SNR, WSS, and Cbak, while the BBF algorithm was superior in terms of PESQ. Conclusions Considering the limited performance of hearing aid processors and the experimental results, DM-based frequency-invariant algorithms with a first-order compensation filter are more feasible for real hearing aids. However, additional in vitro and clinical evaluations are required to more accurately verify the clinical feasibility of these algorithms.
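Of the four objective indices above, the SNR is the simplest to illustrate. The sketch below is a minimal, assumed formulation (function name `snr_db` and the toy tone-plus-noise signals are hypothetical, not the paper's evaluation code): SNR in dB is the log ratio of reference-signal power to residual-noise power.

```python
import numpy as np

def snr_db(clean, processed, eps=1e-12):
    """Global SNR in dB between a clean reference and a processed signal.

    SNR = 10 * log10(signal power / noise power), where the "noise"
    is the residual (processed - clean).
    """
    clean = np.asarray(clean, dtype=float)
    processed = np.asarray(processed, dtype=float)
    noise = processed - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + eps))

# Toy example: a sinusoidal "speech" signal with additive white noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)
snr = snr_db(clean, noisy)  # roughly 17 dB for this noise level
```

A beamformer that attenuates the noise component raises this figure; PESQ, Cbak, and WSS additionally weight perceptual and spectral-envelope distortions that a raw power ratio ignores.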
김인영,김희평,한종희,Sun I Kim,See Youn Kwon,Sung Hwa Hong,이상민,김동욱 대한의용생체공학회 2011 Biomedical Engineering Letters (BMEL) Vol.1 No.2
Purpose Speech perception in noise is one of the most important factors that people with hearing loss desire for better hearing. This study aims to verify the effectiveness of sound training for enhancing speech perception in background noise. Methods In our experiments, persons with normal hearing listened to sounds through a hearing loss simulator so that they experienced hearing loss virtually. For the sound training, we used spectral ripple noise, which is highly correlated with the sensitivity of speech perception in quiet and in noise for normal-hearing persons, hearing-impaired persons, hearing aid users, and cochlear implant users. Fourteen normal-hearing subjects participated in this study. To investigate the effect of the sound training, we divided the subjects into two groups, a "Training group" and a "Non-training group". Each group consisted of 7 normal-hearing persons (Training group: 6 male, 1 female; Non-training group: 5 male, 2 female). Results The effectiveness of the sound training was evaluated by the threshold of spectral resolution discrimination and the threshold of speech perception, and analyzed statistically with the Wilcoxon signed-rank test (*p < 0.05). In the training group, spectral resolution improved from 8.6 ripples per octave (RPO) to 13.6 RPO, speech perception in white noise improved from -4.6 dB to -7.7 dB, and speech perception in babble noise improved from -4.3 dB to -7.4 dB; all of these results were statistically significant. In the non-training group, spectral resolution improved from 8 RPO to 8.4 RPO, but this result was not statistically significant, and speech perception in both babble and white noise likewise showed no statistically significant change. Conclusions Our results suggest that perceptual improvement in resolving spectral components is significantly reflected in speech perception in noise.
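The Wilcoxon signed-rank analysis used above fits small paired samples such as these 7-subject groups. The sketch below shows the test with SciPy; the paired RPO thresholds are hypothetical illustration values, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired spectral-resolution thresholds (RPO) for 7 subjects,
# measured before and after training. Illustrative values only.
before = [8.2, 8.9, 8.4, 8.7, 8.5, 9.0, 8.5]
after = [12.1, 13.9, 13.5, 14.8, 13.2, 14.6, 13.7]

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(before, after)
significant = p < 0.05
```

With n = 7 pairs all shifted in the same direction, the exact two-sided p-value is 2/2^7 = 0.015625, below the study's 0.05 criterion; the test uses rank signs of within-subject differences, so it needs no normality assumption.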
Real-time Environment Classification Algorithm for Hearing Aids Using an Artificial Neural Network
서상완,육순현,남경원,한종희,권세윤,홍성화,김동욱,이상민,장동표,김인영 대한의용생체공학회 2013 의공학회지 Vol.34 No.1
Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of the auditory system, and they therefore use hearing aids to compensate for weakened hearing abilities. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately to a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to the hearing aid was suggested. The proposed algorithm can classify sound situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. The proposed classification algorithm consists of two sub-parts: a feature extractor and a speech situation classifier. The former extracts seven characteristic features - short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel frequency cepstral coefficients and power values of mel bands - from the recent input signals of two microphones, and the latter classifies the current speech situation. The experimental results showed that the proposed algorithm could classify speech situations with an accuracy of over 94.4%. Based on these results, we believe that the proposed algorithm can be applied to the hearing aid to improve speech intelligibility in noisy environments.
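Three of the seven features named above (short-time energy, zero-crossing rate, and spectral centroid) can be sketched on a single signal frame as follows. This is a minimal assumed formulation, not the authors' implementation; the function name `frame_features` and the 1 kHz test tone are illustrative.

```python
import numpy as np

def frame_features(frame, sample_rate):
    """Per-frame features: short-time energy and zero-crossing rate
    (time domain) and spectral centroid (frequency domain)."""
    frame = np.asarray(frame, dtype=float)
    # Mean power of the frame.
    energy = np.sum(frame ** 2) / frame.size
    # Fraction of adjacent sample pairs whose sign changes.
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    # Magnitude-weighted mean frequency of the spectrum.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return energy, zcr, centroid

# A pure 1 kHz tone: its spectral centroid should sit near 1 kHz,
# and its zero-crossing rate near 2 * 1000 / 16000 = 0.125.
sr = 16000
t = np.arange(512) / sr
energy, zcr, centroid = frame_features(np.sin(2 * np.pi * 1000 * t), sr)
```

In a classifier like the one described, such per-frame values would be aggregated over recent frames and fed to the neural-network stage; MFCCs and mel-band powers follow the same frame-then-spectrum pattern with an added mel filterbank.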