UTAL-GNN: Unsupervised Temporal Action Localization using Graph Neural Networks
Fine-grained action localization in untrimmed sports videos is challenging because motion transitions are rapid and subtle, unfolding over short durations. Existing supervised and weakly supervised solutions often rely on extensive annotated datasets and high-capacity models, making them computationally intensive and less adaptable to real-world scenarios. In this work, we introduce a lightweight, unsupervised, skeleton-based action localization pipeline that leverages spatio-temporal graph neural representations. Our approach pre-trains an Attention-based Spatio-Temporal Graph Convolutional Network (ASTGCN) on a pose-sequence denoising task with blockwise partitions, enabling it to learn intrinsic motion dynamics without any manual labeling. At inference, we define a novel Action Dynamics Metric (ADM), computed directly from low-dimensional ASTGCN embeddings, and detect motion boundaries as inflection points in the curvature profile of this metric. Our method achieves a mean Average Precision (mAP) of 82.66% and an average localization latency of 29.09 ms on the DSV Diving dataset, matching state-of-the-art supervised performance while remaining computationally efficient. Furthermore, it generalizes robustly to unseen, in-the-wild diving footage without retraining, demonstrating its practical applicability for lightweight, real-time action analysis systems in embedded or dynamic environments.
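The boundary-detection idea in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration only: it stands in for the ADM with the norm of the second-order finite difference of the embedding trajectory (the paper's exact ADM definition is not reproduced here), then marks candidate boundaries where the smoothed profile's discrete second derivative changes sign, i.e. at inflection points. All names (`action_dynamics_metric`, `detect_boundaries`) and parameter choices are assumptions, not the authors' implementation.

```python
import numpy as np

def action_dynamics_metric(emb):
    # Hypothetical stand-in for the ADM: per-frame "acceleration" of the
    # embedding trajectory, the norm of the second-order finite difference.
    second_diff = emb[2:] - 2.0 * emb[1:-1] + emb[:-2]
    adm = np.linalg.norm(second_diff, axis=1)
    return np.pad(adm, (1, 1), mode="edge")  # restore length T

def detect_boundaries(adm, smooth=5, rel_eps=1e-6):
    # Candidate motion boundaries: inflection points of the smoothed
    # ADM profile, i.e. sign changes of its discrete second derivative.
    kernel = np.ones(smooth) / smooth
    prof = np.convolve(adm, kernel, mode="same")
    curv = np.diff(prof, n=2)
    # suppress floating-point noise in flat regions before taking signs
    eps = rel_eps * max(np.abs(curv).max(), 1e-12)
    signs = np.sign(np.where(np.abs(curv) < eps, 0.0, curv))
    nz = np.flatnonzero(signs)                       # skip flat stretches
    flips = nz[1:][signs[nz[1:]] != signs[nz[:-1]]]  # sign actually changes
    return flips + 1  # +1: curv[j] is centered at profile index j+1

# Toy check: a 2-D trajectory with one abrupt direction change near frame 50.
t = np.linspace(0.0, 1.0, 100)
emb = np.stack([t, np.where(t < 0.5, t, 1.0 - t)], axis=1)
boundaries = detect_boundaries(action_dynamics_metric(emb))
```

On this toy trajectory the detected boundaries cluster around the true direction change near frame 50; in the paper the input would instead be ASTGCN embeddings of pose sequences.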
Bikash Kumar Badatya, Vipul Baghel, Ravi Hegde
Computing Technology; Computer Technology; Sports
Bikash Kumar Badatya, Vipul Baghel, Ravi Hegde. UTAL-GNN: Unsupervised Temporal Action Localization using Graph Neural Networks [EB/OL]. (2025-08-27) [2025-09-06]. https://arxiv.org/abs/2508.19647.