https://www.riss.kr/link?id=A107454317
Year: 2018
Indexed in: SCI, SCIE, SCOPUS
Publication type: Academic journal article
Pages: 20-35 (16 pages)

Multilingual Abstract
<P><B>Abstract</B></P> <P>Despite impressive achievements in image processing and artificial intelligence over the past decade, understanding video-based actions remains a challenge. However, the intensive development of 3D computer vision in recent years has opened up new research opportunities in pose-based action detection and recognition. Exploiting the advantages of depth cameras such as the Microsoft Kinect sensor, we developed an effective approach to the in-depth analysis of indoor actions using skeleton information, with skeleton-based feature extraction and topic-model-based learning as the two major contributions. Geometric features, i.e., joint distance, joint angle, and joint-plane distance, are calculated in the spatio-temporal dimension. These features are merged into two types, called pose and transition features, and then passed to codebook construction, which converts the sparse features into visual words by <I>k</I>-means clustering. An efficient hierarchical model based on the Pachinko Allocation Model is developed to describe the full feature-poselet-action correlation. This model has the potential to uncover more hidden poselets, which provide valuable information and help differentiate pose-sharing actions. Experimental results on several well-known datasets, namely MSR Action 3D, MSR Daily Activity 3D, Florence 3D Action, UTKinect-Action 3D, and NTU RGB+D Action Recognition, demonstrate the high recognition accuracy of the proposed method, which outperforms state-of-the-art methods on most benchmarks.</P> <P><B>Highlights</B></P> <P> <UL> <LI> A 3D action recognition approach using a topic-modeling technique. </LI> <LI> Pose and transition features for representing object posture and movement. </LI> <LI> A flexible hierarchical topic model that learns the feature-poselet-action correlation. </LI> <LI> Sensitivity evaluation on five well-known 3D action recognition datasets. </LI> <LI> Improved accuracy over existing methods that use only 3D skeleton data. </LI> </UL> </P>
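The three geometric features named in the abstract (joint distance, joint angle, joint-plane distance) can be illustrated with a minimal sketch. This is only an interpretation of the feature definitions from standard 3D geometry; the function names and exact formulations are my own, not taken from the paper:

```python
import numpy as np

def joint_distance(a, b):
    """Euclidean distance between two 3D joint positions."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def joint_angle(a, b, c):
    """Angle (radians) at joint b, formed by the segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against rounding slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def joint_plane_distance(p, a, b, c):
    """Distance from joint p to the plane spanned by joints a, b, c."""
    a = np.asarray(a, float)
    n = np.cross(np.asarray(b, float) - a, np.asarray(c, float) - a)
    n = n / np.linalg.norm(n)  # unit normal of the plane
    return float(abs(np.dot(np.asarray(p, float) - a, n)))
```

Features of this kind, computed per frame (pose) and across frames (transition), would then be clustered into visual words for the topic model.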