Skeleton-based Activity Recognition with Local Order Preserving Match of Linear Patches

1 Nov 2018 · Yaqiang Yao, Yan Liu, Huanhuan Chen

Human activity recognition has recently drawn considerable attention in computer vision, owing to the development of commodity depth cameras, with which a human activity can be represented as a sequence of 3D skeleton postures. Assuming that the 3D joint locations of the human body during an activity lie on a manifold, the problem of recognizing human activity is formulated as the computation of an activity manifold-manifold distance (AMMD). In this paper, we first design an efficient division method that decomposes a manifold into ordered continuous maximal linear patches (CMLPs), which correspond to meaningful action snippets of the action sequence. Each CMLP is then represented by its position (the mean of its points) and its first principal component, which specify the major posture and the main evolving direction of the action snippet, respectively. The distance between CMLPs is computed by taking both posture and direction into account. Building on these components, we propose an intuitive distance measure that preserves the local order of action snippets to compute the AMMD. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed approach.
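To make the patch representation concrete, below is a minimal sketch (not the paper's exact formulation) of how an action snippet could be summarized by its mean posture and first principal component, and how a distance combining the two terms might look. The weighting parameter `alpha` and the specific form of the posture and direction terms are illustrative assumptions.

```python
import numpy as np

def patch_representation(patch):
    """Summarize a linear patch (snippet of skeleton frames) by its mean
    posture and first principal component (main evolving direction).

    patch: (n_frames, d) array of flattened 3D joint coordinates.
    """
    mean = patch.mean(axis=0)                       # major posture
    centered = patch - mean
    # First right singular vector of the centered data = first principal component
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    return mean, direction

def patch_distance(p1, p2, alpha=0.5):
    """Combine a posture term (Euclidean distance between means) with a
    direction term (dissimilarity of principal components).
    The weighting `alpha` is an illustrative choice, not from the paper.
    """
    m1, d1 = p1
    m2, d2 = p2
    posture_term = np.linalg.norm(m1 - m2)
    direction_term = 1.0 - abs(np.dot(d1, d2))      # invariant to sign flips of the PC
    return alpha * posture_term + (1.0 - alpha) * direction_term
```

A manifold-level distance between two activities would then aggregate such patch distances over the ordered sequences of CMLPs in a way that preserves their local order.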
