RISS (Research Information Sharing Service) — search results

      • Continuous hand gesture recognition based on trajectory shape information

        Yang, Cheoljong,Han, David K.,Ko, Hanseok Elsevier 2017 Pattern recognition letters Vol.99 No.-

        Abstract

        In this paper, we propose a continuous hand gesture recognition method based on trajectory shape information. A key issue in recognizing continuous gestures is that the performance of conventional recognition algorithms may be lowered by factors such as unknown start and end points of a gesture or variations in gesture duration. These issues become particularly difficult for methods that rely on temporal information. To alleviate them, we propose a framework that performs segmentation and recognition simultaneously. Each component of the framework applies shape-based information to ensure robust performance for gestures with large temporal variation. A gesture trajectory is divided by a set of key frames selected by thresholding its tangential angular change. Variable-sized trajectory segments are then generated using the selected key frames. For recognition, these trajectory segments are examined to determine whether a segment belongs to a class of intended gestures or to a non-gesture class, based on a fusion of shape information and temporal features. To assess performance, the proposed algorithm was evaluated on a database of digit hand gestures. The experimental results indicate that the proposed algorithm has a high recognition rate while maintaining its performance in the presence of continuous gestures.

        Highlights

        • Simultaneous recognition and segmentation of continuous hand gesture trajectories based on trajectory shape information.
        • Generation of variable-sized candidate trajectory segments using shape-based key-frame extraction.
        • Fusion of trajectory shape recognition and temporal feature recognition to stream gesture input.
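
The key-frame step described above — splitting a trajectory wherever its tangential direction changes sharply — can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code; the 30° threshold and the corner-marking rule are assumptions.

```python
import numpy as np

def select_key_frames(trajectory, angle_thresh_deg=30.0):
    """Pick key frames of a 2-D trajectory where the tangential
    direction changes by more than a threshold (a sketch of the
    shape-based key-frame idea; threshold is illustrative)."""
    pts = np.asarray(trajectory, dtype=float)
    keys = [0]                                  # always keep the start point
    prev_dir = None                             # direction entering each point
    for i in range(1, len(pts)):
        d = pts[i] - pts[i - 1]
        if np.linalg.norm(d) == 0:
            continue                            # skip stationary frames
        ang = np.degrees(np.arctan2(d[1], d[0]))
        if prev_dir is not None:
            change = abs((ang - prev_dir + 180) % 360 - 180)
            if change > angle_thresh_deg:
                keys.append(i - 1)              # corner: mark previous point
        prev_dir = ang
    keys.append(len(pts) - 1)                   # always keep the end point
    return sorted(set(keys))

# An "L"-shaped trajectory has one sharp corner, so three key frames:
traj = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(select_key_frames(traj))  # → [0, 2, 4]
```

Variable-sized segment candidates can then be formed from consecutive runs of these key frames.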

      • KCI-indexed (candidate)

        Primitive Body Model Encoding and Selective/Asynchronous Input-Parallel State Machine for Body Gesture Recognition

        김주창,박정우,김우현,이원형,정명진 한국로봇학회 2013 로봇학회 논문지 Vol.8 No.1

        Body gesture recognition has been one of the research fields of interest for Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use the Hidden Markov Model (HMM) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. Besides, conventional algorithms must perform gesture segmentation first and then send the extracted gesture to the HMM for recognition. This separated pipeline causes a time delay between two consecutive gestures to be recognized, making it inappropriate for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, and a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. The experimental results showed that the proposed system can exclude meaningless gestures well from continuous body model data while performing multiple simultaneous gesture recognition without losing recognition rate compared to previous HMM-based work.
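
As a rough illustration of the primitive-encoding idea above (quantizing a link's motion into a small code alphabet), the sketch below maps frame-to-frame 2-D motion of one link into eight directional codes. The alphabet size and the stationary threshold are assumptions, not the paper's values.

```python
import numpy as np

def encode_primitives(positions, n_codes=8):
    """Quantize frame-to-frame motion of one body-model link into
    n_codes directional primitive codes (0..n_codes-1); -1 marks a
    near-stationary frame. A loose sketch of the encoding idea."""
    pos = np.asarray(positions, dtype=float)
    codes = []
    for i in range(1, len(pos)):
        d = pos[i] - pos[i - 1]
        if np.linalg.norm(d) < 1e-6:
            codes.append(-1)                      # no meaningful motion
            continue
        ang = np.arctan2(d[1], d[0]) % (2 * np.pi)
        sector = 2 * np.pi / n_codes              # width of one code sector
        codes.append(int(((ang + sector / 2) % (2 * np.pi)) // sector))
    return codes

# Rightward, upward, then leftward motion of a hand link:
print(encode_primitives([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → [0, 2, 4]
```

A state machine can then consume these discrete codes directly, which is what makes the selective/asynchronous input scheme possible.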

      • A Dynamic Gesture Trajectory Recognition Based on Key Frame Extraction and HMM

        Zhang Qiu-yu,Lv Lu,Zhang Mo-yi,Duan Hong-xiang,Lu Jun-chi 보안공학연구지원센터 2015 International Journal of Signal Processing, Image Vol.8 No.6

        To address the high computational complexity, poor real-time performance, and low recognition rates of existing dynamic gesture recognition algorithms, this paper presents a real-time dynamic gesture trajectory recognition method based on key frame extraction and HMM. Key frames are selected based on the difference degree between frames, without keeping track of all the details of one dynamic gesture. The trajectory data stream, sorted by the time-warping algorithm, is used to construct the hidden Markov model of the dynamic gesture. Finally, optimal transition probabilities are employed to implement dynamic gesture recognition. The experimental results imply that this method has high robustness and real-time performance. The average recognition rate for the dynamic gestures (0~9) is up to 87.67%, and the average time efficiency is 0.46 s.
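
The "difference degree" key-frame selection mentioned above can be sketched minimally: keep a frame whenever it differs enough from the last kept frame. The mean-absolute-difference measure and threshold below are stand-ins; the paper's exact difference degree may be defined differently.

```python
import numpy as np

def key_frames_by_difference(frames, thresh):
    """Keep frame i when its mean absolute pixel difference from the
    last kept frame exceeds thresh; frame 0 is always kept."""
    kept = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) -
                              frames[kept[-1]].astype(float)))
        if diff > thresh:
            kept.append(i)
    return kept

# Two identical dark frames, then two identical bright frames:
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 50, dtype=np.uint8)
print(key_frames_by_difference([a, a, b, b], thresh=10))  # → [0, 2]
```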

      • Dynamic Hand Gesture Trajectory Recognition Based on Block Feature and Skin-Color Clustering

        Zhang Qiu-yu,Lv Lu,Lu Jun-chi,Zhang Mo-yi,Duan Hong-xiang 보안공학연구지원센터 2016 International Journal of Multimedia and Ubiquitous Vol.11 No.12

        In recent years, dynamic hand gesture recognition has been a research hotspot in human-computer interaction. Most existing algorithms suffer from high computational complexity, poor real-time performance, and low recognition rates, which cannot satisfy the needs of many practical applications. Moreover, key frames obtained by the inter-frame difference degree algorithm contain less information, which leads to fewer recognizable gesture classes and a lower recognition rate. To solve these problems, we present a dynamic hand gesture trajectory recognition method based on block features for key-frame extraction and skin-color clustering for hand gesture segmentation. Firstly, the method extracts block features of the difference degree between frames in the hand gesture sequence to select key frames accurately. Secondly, skin-color clustering is applied to obtain the hand gesture area after segmenting hand gestures from images. Finally, a hidden Markov model (HMM), into which the angle data of hand gesture trajectories are fed, is used for modeling and identifying dynamic hand gestures. Experimental results show that the key-frame extraction method captures the information of dynamic hand gestures accurately, which improves the recognition rate while guaranteeing the real-time performance of the hand gesture recognition system. The average recognition rate is up to 86.67%, and the average time efficiency is 0.39 s.
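
The skin-color clustering step described above can be caricatured as labeling each pixel by its chroma distance to an assumed skin cluster center. The (Cr, Cb) center and radius here are illustrative placeholders, not the authors' cluster parameters.

```python
import numpy as np

def skin_mask(image_crcb, center=(150.0, 115.0), radius=15.0):
    """Label a pixel as skin when its (Cr, Cb) chroma lies within
    `radius` of an assumed skin cluster center. A minimal stand-in
    for the paper's skin-color clustering segmentation."""
    img = np.asarray(image_crcb, dtype=float)       # shape (H, W, 2)
    dist = np.linalg.norm(img - np.array(center), axis=-1)
    return dist <= radius

# One skin-colored pixel and one clearly non-skin pixel:
px = np.array([[[150, 115], [20, 20]]])
print(skin_mask(px))  # → [[ True False]]
```

In a real pipeline the cluster center would be learned from labeled skin samples rather than fixed by hand.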

      • KCI-indexed

        CNN-based Gesture Recognition using Motion History Image

        ( Youjin Koh ),( Taewon Kim ),( Min Hong ),( Yoo-joo Choi ) 한국인터넷정보학회 2020 인터넷정보학회논문지 Vol.21 No.5

        In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural-network-based gesture recognition methods have used a sequence of frame images as input, which causes a memory burden problem. We instead use a motion history image to define a meaningful gesture. The motion history image is a grayscale image into which the temporal motion information is collapsed by synthesizing silhouette images of a user over the period of one meaningful gesture. We first summarize previous traditional and neural-network-based approaches to gesture recognition. We then explain the data preprocessing procedure for making the motion history image and the neural network architecture, with three convolution layers, for recognizing the meaningful gestures. In the experiments, we trained five types of gestures: charging power, shooting left, shooting right, kicking left, and kicking right. The accuracy of gesture recognition was measured while adjusting the number of filters in each layer of the proposed network. Using a 240 × 320 grayscale image that defines one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
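
A motion history image like the one described above can be built with a simple update rule: a moving pixel is set to a maximum intensity, and old motion fades linearly each frame. This is the common MHI formulation; the tau and decay values below are illustrative, not the paper's.

```python
import numpy as np

def motion_history_image(silhouettes, tau=255, decay=32):
    """Collapse a sequence of binary silhouettes into one grayscale
    motion history image: pixels active in the current silhouette are
    set to tau, and previously active pixels fade by `decay` per frame."""
    mhi = np.zeros_like(silhouettes[0], dtype=float)
    for sil in silhouettes:
        mhi = np.where(sil > 0, float(tau), np.maximum(mhi - decay, 0.0))
    return mhi.astype(np.uint8)

# A pixel active only in the first of three frames fades twice:
s_on = np.array([[1, 0]])
s_off = np.array([[0, 0]])
print(motion_history_image([s_on, s_off, s_off]))  # → [[191   0]]
```

The resulting single image is what the three-convolution-layer network consumes, which is exactly how the memory burden of a full frame sequence is avoided.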

      • KCI-indexed

        Dynamic gesture recognition using a model-based temporal self-similarity and its application to taebo gesture recognition

        ( Kyoung-mi Lee ),( Hey-min Won ) 한국인터넷정보학회 2013 KSII Transactions on Internet and Information Syst Vol.7 No.11

        A lot of attention has recently been paid to analyzing dynamic human gestures that vary over time. Most of this attention concerns spatio-temporal features, as opposed to analyzing each frame of a gesture separately. For accurate dynamic gesture recognition, motion feature extraction algorithms need to find representative features that uniquely identify time-varying gestures. This paper proposes a new feature-extraction algorithm using temporal self-similarity based on a hierarchical human model. Because the conventional temporal self-similarity method computes whole-body movement over the continuous frames, it cannot distinguish different gestures with the same amount of movement. The proposed model-based temporal self-similarity method groups the body parts of a hierarchical model into several sets and calculates movements for each set. While recognition results can depend on how the sets are formed, the best way to find optimal sets is to separate frequently used body parts from less-used ones. We then apply a multiclass support vector machine whose optimization algorithm is based on structural support vector machines. The effectiveness of the proposed feature extraction algorithm is demonstrated in an application to taebo gesture recognition. We show that the model-based temporal self-similarity method can overcome the shortcomings of the conventional method, and that its recognition results are superior.
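
A temporal self-similarity matrix, the core object above, is just the matrix of pairwise distances between per-frame feature vectors. The sketch below computes it for one feature stream; in the paper this would be done once per set of body parts.

```python
import numpy as np

def self_similarity(features):
    """Temporal self-similarity matrix: entry (i, j) is the Euclidean
    distance between the feature vectors of frames i and j."""
    f = np.asarray(features, dtype=float)
    diff = f[:, None, :] - f[None, :, :]        # pairwise differences
    return np.linalg.norm(diff, axis=-1)

# Three frames; frames 0 and 2 are identical, frame 1 is offset by (3, 4):
ssm = self_similarity([[0.0, 0.0], [3.0, 4.0], [0.0, 0.0]])
print(ssm[0, 1], ssm[0, 2])  # → 5.0 0.0
```

The matrix is symmetric with a zero diagonal; gestures with the same total movement but different part-wise movement produce different per-set matrices, which is the point of the model-based variant.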

      • KCI-indexed

        A Hand Gesture Recognition Method using Inertial Sensor for Rapid Operation on Embedded Device

        ( Sangyub Lee ),( Jaekyu Lee ),( Hyeonjoong Cho ) 한국인터넷정보학회 2020 KSII Transactions on Internet and Information Syst Vol.14 No.2

        We propose a hand gesture recognition method that is compatible with a head-up display (HUD) with limited processing resources. For fast link adaptation with the HUD, it is necessary to process gesture recognition rapidly and to send the minimum amount of driver hand gesture data from the wearable device. We therefore recognize each hand gesture with an inertial measurement unit (IMU) sensor based on revised correlation matching. Gesture recognition is executed by calculating the correlation between every axis of the acquired data set; by classifying pre-defined gesture values and actions, the proposed method enables rapid recognition. Furthermore, we evaluate the performance of the algorithm, which can be implemented within wearable bands and requires a minimal processing load. We tested the proposed algorithm with pre-defined gestures of specific motions on a wearable platform device. The experimental results validated the feasibility and effectiveness of the proposed hand gesture recognition system: despite being based on a very simple concept, the proposed algorithm showed good recognition accuracy.
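
Plain (unrevised) correlation matching of the kind the method builds on can be sketched as follows: score each stored template by the sum of per-axis Pearson correlations with the incoming recording, and pick the best. The template names and shapes are assumptions for illustration.

```python
import numpy as np

def best_match(sample, templates):
    """Match an IMU recording to the stored gesture template whose
    per-axis signals correlate most strongly with it, summed over
    axes. `sample` and each template are (n_samples, n_axes) arrays."""
    def score(a, b):
        return sum(np.corrcoef(a[:, k], b[:, k])[0, 1]
                   for k in range(a.shape[1]))
    return max(templates, key=lambda name: score(sample, templates[name]))

# Two hypothetical two-axis templates and a scaled/shifted sample:
t = np.linspace(0.0, 1.0, 8)
templates = {"swipe_up": np.stack([t, t], axis=1),
             "swipe_down": np.stack([-t, -t], axis=1)}
sample = np.stack([2 * t, t + 0.5], axis=1)
print(best_match(sample, templates))  # → swipe_up
```

Because Pearson correlation ignores scale and offset, this tolerates amplitude differences between users, which fits the low-resource setting.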

      • KCI Excellent-indexed

        Wavelet Transform as Preprocessing for EMG-based Hand Gesture Recognition Suited to Real-World Environments

        조용운,오도창 대한전자공학회 2024 전자공학회논문지 Vol.61 No.4

        Hand gestures include not only grasping movements but also gestures for communication, and they play a very important role in daily life. The field of hand gesture recognition using surface electromyography (sEMG) has been studied steadily in order to recognize these hand movements and use them as a human-computer interface (HCI). Since electromyography signals inherently contain many noise components, various preprocessing techniques have been developed to recognize hand gestures, and the wavelet transform (WT) is frequently used for frequency analysis. In this paper, three preprocessing techniques based on the wavelet transform are therefore compared to select the one most suitable for real-world use in hand gesture recognition. Processing time and gesture recognition accuracy were compared for each technique with real-time recognition, the main goal of the field, in mind; the compared techniques are the single-level DWT (discrete WT), TQWT (tunable Q-factor WT), and CWT (continuous WT). The dataset used in the comparison contains 15 hand gestures collected from five different subjects. Features are first extracted with each of the three techniques, and hand gestures are recognized using a lightweight CNN classifier. The processing time of each step of the recognition pipeline is recorded, and finally the average accuracy over the 15 hand gestures is calculated. TQWT and CWT obtained similar average accuracies of about 75%, but the recognition process took 0.08 s with TQWT versus 0.26 s with CWT. TQWT, with similar accuracy and a faster processing time, is therefore the technique best suited to real-world hand gesture recognition.
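
Of the three transforms compared above, the single-level DWT is the simplest to illustrate. The sketch below implements it with the Haar wavelet, splitting a signal into approximation and detail coefficients; the paper's wavelet choice may differ, so treat this as a generic stand-in for the DWT preprocessing step.

```python
import numpy as np

def haar_dwt(signal):
    """Single-level Haar DWT: pairwise (sum, difference) / sqrt(2)
    yields approximation and detail coefficients of half length."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad odd-length input to even
        x = np.append(x, x[-1])
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

a, d = haar_dwt([1.0, 1.0, 2.0, 4.0])
print(a, d)  # approx ≈ [1.414, 4.243], detail ≈ [0.0, -1.414]
```

In an sEMG pipeline, statistics of these coefficient vectors (energy, mean absolute value, and so on) would be the features handed to the CNN classifier.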

      • KCI-indexed

        Human-Computer Interaction: A Feature-Strengthened Gesture Recognition Model Based on Dynamic Time Warping

        권혁태 ( Hyuck Tae Kwon ),이석균 ( Suk Kyoon Lee ) 한국정보처리학회 2015 정보처리학회논문지. 소프트웨어 및 데이터 공학 Vol.4 No.3

        As smart devices become popular, research on gesture recognition using their embedded accelerometers is drawing attention. Dynamic time warping (DTW) has recently been used to perform gesture recognition on accelerometer data sequences; in this paper we propose the Feature-Strengthened Gesture Recognition (FsGr) model, which improves the recognition success rate when DTW is used. The FsGr model defines feature-strengthened parts of the data sequences for sets of similar gestures that are likely to be misrecognized, and performs additional DTW on those parts to improve the recognition rate. In the training phase, the FsGr model identifies sets of similar gestures and analyzes the features of the gestures in each set. During the recognition phase, when the result of the first DTW recognition attempt belongs to a set of similar gestures, it makes an additional recognition attempt based on the feature analysis to improve the recognition success rate. We present the performance of the FsGr model through experiments on recognizing lowercase alphabet gestures.
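
The DTW distance that FsGr builds on is the textbook O(n·m) recurrence, sketched below for 1-D sequences. This is the standard algorithm, not the paper's feature-strengthened variant.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance: D[i, j] is the minimal
    cumulative |a_i - b_j| cost over monotone alignments of prefixes."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Warping absorbs the repeated sample, so the distance is zero:
print(dtw_distance([0, 1, 2], [0, 0, 1, 2]))  # → 0.0
```

FsGr's extra step amounts to rerunning this distance on selected subsequences ("feature-strengthened parts") when the first pass lands in a set of confusable gestures.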

      • SCI / SCIE / SCOPUS

        Enhancement of gesture recognition for contactless interface using a personalized classifier in the operating room

        Cho, Yongwon,Lee, Areum,Park, Jongha,Ko, Bemseok,Kim, Namkug Elsevier 2018 COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE Vol.161 No.-

        Abstract

        Background and objective: Contactless operating room (OR) interfaces are important for computer-aided surgery and have been developed to decrease the risk of contamination during surgical procedures.

        Methods: In this study, we used Leap Motion™ with a personalized automated classifier to enhance the accuracy of gesture recognition for contactless interfaces. The software was trained and tested on a personal basis, that is, gestures were trained per user. We computed and selected 30 features, including finger and hand data, and fed them into multiclass support vector machine (SVM) and Naïve Bayes classifiers to train and predict five types of gestures: hover, grab, click, one peak, and two peaks.

        Results: Overall accuracy over the five gestures was 99.58% ± 0.06 and 98.74% ± 3.64 on a personal basis using the SVM and Naïve Bayes classifiers, respectively. We compared gesture accuracy across the entire dataset with both classifiers to examine the strength of personal-basis training.

        Conclusions: We developed and enhanced non-contact interfaces with gesture recognition to enhance OR control systems.

        Highlights

        • The risk of contamination during surgical procedures could be decreased.
        • We used Leap Motion™ with a personalized automated classifier to enhance the accuracy of gesture recognition.
        • Multiclass SVM and Naïve Bayes classifiers were used to train and predict five types of gestures: hover, grab, click, one peak, and two peaks.
        • We compared gesture accuracy across the entire dataset to examine the strength of personal-basis training.
        • We developed and enhanced non-contact interfaces with gesture recognition to enhance OR control systems.
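
One of the two classifiers above, Gaussian Naïve Bayes, is small enough to sketch end to end. The tiny implementation below stands in for the classifier the authors trained per user; they used 30 Leap Motion features, whereas the two toy features and gesture labels here are purely illustrative.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances
    plus class priors, predicting by maximum log-likelihood."""
    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # log p(c) + sum over features of log N(x | mu_c, var_c)
        ll = (np.log(self.prior)
              - 0.5 * ((X[:, None, :] - self.mu) ** 2 / self.var
                       + np.log(2 * np.pi * self.var)).sum(-1))
        return self.classes[np.argmax(ll, axis=1)]

# Two well-separated toy gesture clusters:
X = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
y = ["hover", "hover", "grab", "grab"]
print(GaussianNB().fit(X, y).predict([[0.05, 0.05], [5.05, 5.05]]))
```

Training such a model per user, as the paper does, lets the class means and variances adapt to each surgeon's hand geometry.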
