RISS (Korea Academic Research Information Service)

Indexed in: KCI, SCI, SCIE, SCOPUS

      Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor


      https://www.riss.kr/link?id=A108108337


Additional Information

Multilingual Abstract


Human activity recognition in real time is a challenging task. Recently, a plethora of studies using deep learning architectures has been proposed. Implementing these architectures requires high computing power and a massive database. However, machine learning models based on handcrafted features need less computing power and are highly accurate when the features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The movement of the human body skeleton over a frame is computed through the joint positions in that frame. The joints of these skeletal frames are projected into two-dimensional space, forming a “movement polygon.” These polygons are further transformed into one-dimensional space by computing amplitudes at different angles from the centroid of each polygon. The feature vector is formed by sampling these amplitudes at different angles. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies are compared with those of similar state-of-the-art methods and show superior performance.
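The pipeline the abstract describes (project skeleton joints to 2D, form a polygon, sample centroid-to-boundary amplitudes at different angles) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the projection plane, the max-radius-per-angular-bin amplitude, and the function name are all assumptions.

```python
import numpy as np

def movement_polygon_features(joints_3d, n_angles=36):
    """Sketch of a per-frame feature vector: project 3D joints to 2D,
    then sample polygon amplitudes around the centroid at n_angles bins."""
    # Project onto the x-y plane (drop depth); the paper's actual
    # projection is an assumption here.
    pts = np.asarray(joints_3d, dtype=float)[:, :2]
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    radii = np.linalg.norm(rel, axis=1)          # distance of each joint from centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    # Bin joints by angle; take the largest radius in each bin as the
    # "amplitude" of the movement polygon in that direction.
    bins = (angles / (2 * np.pi) * n_angles).astype(int) % n_angles
    feat = np.zeros(n_angles)
    for b, r in zip(bins, radii):
        feat[b] = max(feat[b], r)
    return feat
```

Per-frame vectors like this could then be aggregated over a sequence and fed to a support vector machine (e.g., scikit-learn's `SVC`), matching the SVM classifier the abstract reports.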






Journal History

Date         Event               Details                                                                          Listing
2023         Evaluation pending  Subject to overseas-DB journal evaluation (overseas-indexed journal evaluation)
2020-01-01   Evaluation          KCI-listed status maintained (overseas-indexed journal evaluation)               KCI-listed
2005-09-27   Journal registered  Korean title: ETRI Journal; foreign title: ETRI Journal                          KCI-listed
2003-01-01   Evaluation          SCI listed (new evaluation)                                                      KCI-listed

Journal Citation Metrics (base year 2016)

WOS-KCI integrated IF (2-year): 0.78
KCI IF (2-year): 0.28
KCI IF (3-year): 0.57
KCI IF (4-year): 0.47
KCI IF (5-year): 0.42
Centrality index (3-year): 0.4
Immediacy index: 0.06
