RISS (Research Information Sharing Service)

      • Multimodal Recognition Method based on Ear and Profile Face Feature Fusion

        Songze Lei, Min Qi, Security Engineering Research Support Center, 2016, International Journal of Signal Processing, Image Vol.9 No.1

        Ear recognition performance is degraded by pose variation. Exploiting the adjacent positions of the ear and the profile face, a multimodal recognition method is proposed based on fusing ear and profile-face features, and a model for their fusion and recognition is built. The Log-Gabor features of the ear and the profile face are first extracted separately, standardized, and concatenated into a combined feature. The combined feature is then mapped into a kernel space for further fusion, where kernel Fisher discriminant analysis (KFDA) yields a more discriminative feature for classification. A minimum-distance classifier is finally used for recognition. Experimental results on the University of Notre Dame profile face database show that the fused method improves the recognition rate under pose variation and that multimodal recognition outperforms unimodal recognition using either the ear or the profile face alone. The ear and profile-face feature fusion and recognition method is effective and robust to pose variation.
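The standardize-and-concatenate fusion step described above can be sketched as follows (an illustrative simplification, not the authors' implementation; the function name and toy vectors are hypothetical):

```python
import numpy as np

def fuse_features(ear_feat, face_feat):
    """Zero-mean, unit-variance standardize each modality's feature
    vector, then concatenate into one combined feature vector."""
    def standardize(v):
        v = np.asarray(v, dtype=float)
        std = v.std()
        return (v - v.mean()) / std if std > 0 else v - v.mean()
    return np.concatenate([standardize(ear_feat), standardize(face_feat)])

# A 3-dim "ear" feature fused with a 4-dim "profile face" feature
combined = fuse_features([1.0, 2.0, 3.0], [10.0, 20.0, 30.0, 40.0])
print(combined.shape)  # (7,)
```

Standardizing first keeps one modality's larger numeric range from dominating the combined feature before kernel-space fusion.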

      • Robust Model Construction Using a Selective Feature Vector for Pattern Recognition with Voice

        Jeong-Sik Park, Gil-Jin Jang, Ji-Hwan Kim, Security Engineering Research Support Center (IJSEIA), 2016, International Journal of Software Engineering and Vol.10 No.1

        This paper proposes a new feature vector selection method for voice pattern recognition tasks, especially for speaker or emotion recognition. During the model training phase, robust speaker or emotion models are constructed by using meaningful feature vectors while discarding confusing vectors that may induce recognition error. To select meaningful feature vectors, the proposed method classifies feature vectors into overlapped and non-overlapped sets using log-likelihood ratio. Speaker- and emotion-recognition experiments confirmed that these robust models significantly reduce recognition errors.
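The overlapped/non-overlapped split by log-likelihood ratio can be sketched with univariate Gaussian class models (a toy illustration, not the paper's actual models; the names and threshold are hypothetical):

```python
import numpy as np

def gauss_loglik(x, mu, var):
    """Log-density of N(mu, var) evaluated element-wise at x."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def select_vectors(frames, model_a, model_b, threshold=1.0):
    """Keep frames whose absolute log-likelihood ratio between two
    class models exceeds `threshold` (clearly discriminative,
    non-overlapped frames); discard the ambiguous ones."""
    llr = gauss_loglik(frames, *model_a) - gauss_loglik(frames, *model_b)
    return frames[np.abs(llr) > threshold]

frames = np.array([0.0, 2.5, 5.0])
# Frame 2.5 sits exactly between the two class means and is discarded.
kept = select_vectors(frames, (0.0, 1.0), (5.0, 1.0))
```

Only the kept, non-overlapped frames would then contribute to the speaker or emotion model during training.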

      • KCI-indexed

        A Facial Feature Region Extraction Method for Improving the Face Recognition Rate in General Camera Images

        Seong Hoon Kim, Gi Tae Han, Korea Information Processing Society, 2016, KIPS Transactions on Software and Data Engineering Vol.5 No.5

        Face recognition extracts features from a facial image, learns them through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image; improving the recognition rate requires various processing methods. In the training stage, features must be extracted from facial images, and linear discriminant analysis (LDA) is the method most commonly used for this. LDA represents facial images as points in a high-dimensional space and extracts features for distinguishing a person by analyzing the class information and the distribution of those points. Because a point's position is determined by the pixel values of the facial image, incorrect features can be extracted when the image contains unnecessary or frequently changing regions. In particular, when a general camera is used for face recognition, the apparent face size varies with the distance between the face and the camera, ultimately degrading the recognition rate. To solve these problems, this paper detects the facial region with a general camera, removes unnecessary regions using the facial contour computed with a Gabor filter, and normalizes the facial region to a fixed size. Facial features are then extracted from the normalized image with LDA and learned with an artificial neural network; the resulting face recognition improves the recognition rate by about 13% over the existing method that includes unnecessary regions.
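The LDA feature extraction at the heart of the method can be sketched for the two-class case (a minimal sketch assuming Fisher's classic formulation, not the paper's code; the paper's setting is the multi-class analogue applied to normalized face images):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: the projection direction
    w ∝ Sw^{-1}(m1 - m0) maximizes between-class scatter relative
    to within-class scatter. X0, X1 are (samples x features)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix (plus a small ridge for stability)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting each class onto w (X @ w) separates the two classes along a single axis when they are compact and well separated.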

      • KCI-indexed candidate

        Feature Combination and Selection Using a Genetic Algorithm

        이진선, Korea Contents Association, 2005, Journal of the Korea Contents Association Vol.5 No.5

        By combining different feature sets extracted from input character patterns, we can improve the performance of a character recognition system. To reduce the dimensionality of the combined feature vector, feature selection is performed. This paper proposes a general framework for feature combination and selection in character recognition problems, and presents a specific design and implementation for handwritten numeral recognition. In the design, DDD and AGD feature sets are extracted from handwritten numeral patterns, and a genetic algorithm is used for feature selection. Experiments showed a significant accuracy improvement of about 0.7% on the CENPARMI handwritten numeral database.
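The genetic-algorithm feature selection loop can be sketched with bit-mask chromosomes (a toy illustration with hypothetical parameters; the paper's fitness function, based on recognition accuracy, is replaced here by an arbitrary scoring function passed in by the caller):

```python
import random

def ga_select(num_feats, fitness, pop=20, gens=30, pmut=0.05, seed=0):
    """Toy genetic algorithm for feature-subset selection: chromosomes
    are bit-masks over features; truncation selection, one-point
    crossover, and bit-flip mutation evolve the population."""
    rng = random.Random(seed)
    popl = [[rng.randint(0, 1) for _ in range(num_feats)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popl, key=fitness, reverse=True)
        nxt = scored[:2]                        # elitism: keep the two best
        while len(nxt) < pop:
            p1, p2 = rng.sample(scored[:10], 2)  # parents from the top half
            cut = rng.randrange(1, num_feats)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < pmut) for b in child]  # mutation
            nxt.append(child)
        popl = nxt
    return max(popl, key=fitness)
```

In the paper's setting the fitness of a mask would be the recognition accuracy of a classifier trained on the selected features, possibly penalized by subset size.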

      • SCI / SCIE / Scopus

        Robust human activity recognition from depth video using spatiotemporal multi-fused features

        A. Jalal, Y.H. Kim, Y.J. Kim, S. Kamal, D. Kim, Pergamon Press, 2017, Pattern Recognition Vol.61 No.-

        Recently developed depth imaging technologies have opened new directions for human activity recognition (HAR) without attaching optical markers or other motion sensors to human body parts. In this paper, we propose novel multi-fused features for an online HAR system that recognizes human activities from continuous sequences of depth maps. The proposed system segments human depth silhouettes using temporal human motion information and obtains human skeleton joints using spatiotemporal human body information. It then extracts spatiotemporal multi-fused features that concatenate four skeleton joint features and one body shape feature. The skeleton joint features are the torso-based distance feature (DT), the key-joint-based distance feature (DK), the spatiotemporal magnitude feature (M), and the spatiotemporal directional angle feature (θ). The body shape feature, HOG-DDS, represents the projections of the depth differential silhouettes (DDS) between two consecutive frames onto three orthogonal planes in histogram-of-oriented-gradients (HOG) format. The size of the multi-fused feature is reduced by mapping it to a code vector in a codebook generated by vector quantization. A hidden Markov model (HMM) is then trained on the code vectors, and segmented activities are recognized by a forward spotting scheme using the trained HMM-based activity classifiers. Experimental results on three challenging depth video datasets, IM-DailyDepthActivity, MSRAction3D, and MSRDailyActivity3D, demonstrate that the proposed online HAR method using the multi-fused features outperforms state-of-the-art HAR methods in recognition accuracy.
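The vector-quantization step that turns continuous multi-fused features into discrete HMM observation symbols can be sketched with a small k-means codebook (illustrative only; names and parameters are hypothetical):

```python
import numpy as np

def vq_codebook(features, k=4, iters=20, seed=0):
    """Toy k-means vector quantization: learn a k-codeword codebook
    over feature vectors and return it together with each frame's
    nearest-codeword index (the symbols a discrete HMM consumes)."""
    rng = np.random.default_rng(seed)
    book = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    idx = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each frame to its nearest codeword, then recenter codewords.
        dists = np.linalg.norm(features[:, None, :] - book[None, :, :], axis=2)
        idx = dists.argmin(axis=1)
        for j in range(k):
            if (idx == j).any():
                book[j] = features[idx == j].mean(axis=0)
    return book, idx
```

Replacing each frame's feature vector with its codeword index yields the discrete symbol sequence on which the HMM is trained.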

      • KCI-indexed

        Offline Handwritten Numeral Recognition Using Multiple Features and SVM classifier

        Gab-Soon Kim, Joong-Jo Park, Institute of Korean Electrical and Electronics Engineers, 2015, Journal of IKEEE Vol.19 No.4

        In this paper, we study the use of foreground and background features with an SVM classifier to improve the accuracy of offline handwritten numeral recognition. The foreground features are two directional features: a directional gradient feature obtained with Kirsch operators and a directional stroke feature obtained by local shrinking and expanding operations. The background feature is a concavity feature extracted from the convex hull of the numeral, which complements the directional features. During classification, these three features are combined to obtain good discrimination power. The efficiency of the scheme is tested by recognition experiments on the CENPARMI handwritten numeral database using an SVM classifier with an RBF kernel. The experimental results show the usefulness of the scheme, achieving a recognition rate of 99.10%.
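The Kirsch directional gradient feature mentioned above rests on eight compass masks; generating them by rotating the outer ring of one mask, and taking the per-pixel dominant direction, can be sketched as follows (illustrative, not the authors' exact feature computation):

```python
import numpy as np

# North Kirsch mask; the other seven are rotations of its outer ring.
KIRSCH_N = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]])

def kirsch_masks():
    """All eight 3x3 Kirsch compass masks, generated by rotating the
    outer ring of the north mask one position at a time."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [KIRSCH_N[r] for r in ring]
    masks = []
    for s in range(8):
        m = np.zeros((3, 3), int)
        for (r, c), v in zip(ring, vals[s:] + vals[:s]):
            m[r, c] = v
        masks.append(m)
    return masks

def directional_gradient(img):
    """Per-pixel dominant Kirsch direction index and response magnitude
    over the valid (interior) region of a grayscale image."""
    H, W = img.shape
    resp = np.zeros((8, H - 2, W - 2))
    for d, m in enumerate(kirsch_masks()):
        for i in range(H - 2):
            for j in range(W - 2):
                resp[d, i, j] = (img[i:i + 3, j:j + 3] * m).sum()
    return resp.argmax(axis=0), resp.max(axis=0)
```

Histogramming the dominant directions over zones of the numeral image is one common way to turn these responses into a fixed-length directional feature.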

      • KCI-indexed

        Face Recognition Using PCA and Face Direction Information

        김승재, Korea Institute of Information, Electronics, and Communication Technology, 2017, Journal of the Korea Institute of Information, Electronics, and Communication Technology Vol.10 No.6

        In this paper, we propose an algorithm that uses the left and right rotation information of the input image to obtain a more stable and higher recognition rate in face recognition. The proposed algorithm takes facial images from a web camera as input, reduces the image size, and normalizes brightness and color information; after preprocessing, only the facial region is segmented and detected. Principal component analysis (PCA) is applied to the detected candidate regions to obtain feature vectors and classify faces. To narrow the error range of the recognition rate, a data set of images with left and right 45° rotation information is constructed to account for the directionality of the input face image, and feature vectors are obtained for each with PCA. The feature vectors are then projected into the eigenspace, and the final face is recognized by comparing Euclidean distances between the features. Although PCA-based feature vectors are low-dimensional, they are sufficient to represent a face, and the small amount of computation allows fast recognition. The proposed method recognizes faster and with more stable and accurate recognition rates than other algorithms, and can be used in real-time recognition systems.
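The PCA projection plus Euclidean nearest-neighbor matching described above can be sketched as follows (a minimal sketch; the function names and toy gallery are hypothetical):

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on mean-centered data. Returns the mean vector and
    the top principal axes as rows."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def recognize(probe, gallery, labels, mean, axes):
    """Project gallery and probe into the PCA subspace and return the
    label of the Euclidean-nearest gallery vector."""
    G = (gallery - mean) @ axes.T
    p = (probe - mean) @ axes.T
    return labels[int(np.argmin(np.linalg.norm(G - p, axis=1)))]

# Toy 3-pixel "faces"; real use would flatten normalized face images.
gallery = np.array([[0., 0., 0.], [10., 10., 10.], [0., 10., 0.]])
labels = ["alice", "bob", "carol"]
mean, axes = pca_fit(gallery, n_components=2)
```

The rotated-view variant in the paper would simply enlarge the gallery with each subject's left and right 45° images before fitting.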

      • KCI-indexed

        Robust Face Recognition Based on a 2D PCA Face Distinctive Identity Feature Subspace Model

        Tae-in Seol, Sun-Tae Chung, Sanghoon Kim, Un-Dong Chung, Seongwon Cho, Institute of Electronics Engineers of Korea, 2010, Journal of the Institute of Electronics Engineers of Korea - SP (Signal Processing) Vol.47 No.1

        The 1D PCA used in face appearance-based recognition methods, such as eigenface-based face recognition, can lead to weaker face representative power and higher computational cost because the vectorized 1D face appearance data is high-dimensional. To resolve these problems, 2D PCA-based face recognition methods were developed. However, the face representation model obtained by directly applying 2D PCA to a face image set includes both face common features and face distinctive identity features; the common features not only hinder discrimination between individuals but also increase recognition time. In this paper, we first develop a model of a face distinctive identity feature subspace, separated from the effects of face common features, within the face feature space obtained by 2D PCA. We then propose a novel robust face recognition method based on this subspace model. Because it depends mainly on the distinctive identity features, the proposed method outperforms conventional 1D PCA- and 2D PCA-based face recognition methods in both recognition rate and processing time. This is verified through experiments on the Yale A and IMM face databases, which consist of face images with various poses under various illumination conditions.
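The 2D PCA step that distinguishes this line of work from 1D PCA operates directly on image matrices; it can be sketched as follows (illustrative, assuming the standard 2D PCA image covariance formulation, not the paper's distinctive-identity-subspace model):

```python
import numpy as np

def twod_pca(images, n_components):
    """2D PCA: build the image covariance matrix directly from 2-D
    image matrices (no vectorization) and keep the eigenvectors with
    the largest eigenvalues as projection axes (columns)."""
    A = np.asarray(images, dtype=float)           # (n_images, h, w)
    Abar = A.mean(axis=0)
    G = sum((a - Abar).T @ (a - Abar) for a in A) / len(A)
    vals, vecs = np.linalg.eigh(G)                # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

# Toy 3x3 "images"; each image A maps to the feature matrix A @ axes.
imgs = [np.eye(3), np.ones((3, 3)), np.diag([1., 2., 3.])]
axes = twod_pca(imgs, n_components=2)
features = imgs[0] @ axes
```

Working with the small w×w image covariance matrix avoids the huge covariance matrix that vectorizing images for 1D PCA would require.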

      • MDP Feature Extraction Technique for Offline Handwritten Gurmukhi Character Recognition

        Munish Kumar, M. K. Jindal, R. K. Sharma, Korea Academy-Industrial Cooperation Society, 2013, SmartCR Vol.3 No.6

        Character recognition is intricate work because of the varied writing styles of different individuals. Most published work on handwritten character recognition deals with statistical features; few works deal with structural features in general, and with Gurmukhi script in particular. In the present work, we propose a methodology for offline handwritten Gurmukhi character recognition using a modified division points (MDP) feature extraction technique. We also compare this technique with other recently used feature extraction techniques, namely zoning, diagonal, directional, intersection and open-end-point, and transition features. Selecting a representative set of features is the most significant task for a character recognition system. After feature extraction, the classification stage uses the extracted features to recognize the character. In this work, we used linear support vector machine (linear-SVM), k-nearest neighbor (k-NN), and multilayer perceptron (MLP) classifiers for recognition. For experimental analysis, we used 10,500 samples of the isolated, offline, handwritten, basic 35 akhars of Gurmukhi script. With five-fold cross validation, the proposed system achieved maximum recognition accuracies of 84.57%, 85.85%, and 89.20% with the linear-SVM, MLP, and k-NN classifiers, respectively.
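Among the compared baselines, the zoning feature is the simplest; it can be sketched as follows (a hypothetical grid size and names; illustrative only):

```python
import numpy as np

def zoning_features(img, zones=4):
    """Split a binary character image into a zones x zones grid and
    return the foreground-pixel density of each cell, row by row."""
    H, W = img.shape
    hs, ws = H // zones, W // zones
    return np.array([
        img[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws].mean()
        for i in range(zones) for j in range(zones)
    ])

# Toy 8x8 character with foreground only in the top-left quadrant
img = np.zeros((8, 8))
img[:4, :4] = 1.0
feat = zoning_features(img, zones=2)
```

Per-cell densities give a compact, translation-tolerant summary of where the strokes lie, which is why zoning is a common baseline against structural features such as MDP.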
