Learning Activity View-invariance Under Extreme Viewpoint Changes via Curriculum Knowledge Distillation
Traditional methods for view-invariant learning from video rely on controlled multi-view settings with minimal scene clutter. However, they struggle with in-the-wild videos that exhibit extreme viewpoint differences and share little visual content. We introduce a method for learning rich video representations in the presence of such severe viewpoint-induced occlusions. We first define a geometry-based metric that ranks views at a fine-grained temporal scale by their likely occlusion level. Then, using those rankings, we formulate a knowledge distillation objective that preserves action-centric semantics, together with a novel curriculum learning procedure that pairs incrementally more challenging views over time, thereby allowing smooth adaptation to extreme viewpoint differences. We evaluate our approach on two tasks, outperforming SOTA models on both temporal keystep grounding and fine-grained keystep recognition benchmarks, particularly on views that exhibit severe occlusion.
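The abstract describes ranking views by an occlusion metric and then pairing progressively harder views during training. The following is a minimal sketch of that curriculum-pairing idea; the `occlusion` score field, the linear widening schedule, and both function names are hypothetical placeholders, not the paper's actual formulation.

```python
import random

def occlusion_ranking(views):
    """Sort camera views from least to most occluded.

    Each view is assumed to be a dict with a precomputed 'occlusion'
    score in [0, 1]; the paper derives such a score from a
    geometry-based metric, which this placeholder field stands in for.
    """
    return sorted(views, key=lambda v: v["occlusion"])

def curriculum_pair(views, progress):
    """Pair the clearest (teacher) view with a progressively harder
    (student) view as training progress goes from 0.0 to 1.0."""
    ranked = occlusion_ranking(views)
    teacher = ranked[0]
    # Widen the pool of candidate student views as training advances,
    # so early pairs are easy and late pairs may be severely occluded.
    cutoff = max(2, int(len(ranked) * progress) + 1)
    student = random.choice(ranked[1:cutoff])
    return teacher, student

views = [{"id": i, "occlusion": o} for i, o in enumerate([0.1, 0.5, 0.9, 0.3])]
early = curriculum_pair(views, progress=0.1)  # only mildly occluded partners
late = curriculum_pair(views, progress=1.0)   # may include the hardest views
```

A distillation loss would then push the student-view representation toward the teacher-view representation for each sampled pair, with the schedule above controlling how extreme the viewpoint gap within a pair can be.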
Arjun Somayazulu, Efi Mavroudi, Changan Chen, Lorenzo Torresani, Kristen Grauman
Computing Technology; Computer Technology
Arjun Somayazulu, Efi Mavroudi, Changan Chen, Lorenzo Torresani, Kristen Grauman. Learning Activity View-invariance Under Extreme Viewpoint Changes via Curriculum Knowledge Distillation [EB/OL]. (2025-04-07) [2025-05-28]. https://arxiv.org/abs/2504.05451.