National Preprint Platform

Switch-a-View: View Selection Learned from Unlabeled In-the-wild Videos


Source: arXiv
Abstract

We introduce SWITCH-A-VIEW, a model that learns to automatically select the viewpoint to display at each timepoint when creating a how-to video. The key insight of our approach is how to train such a model from unlabeled -- but human-edited -- video samples. We pose a pretext task that pseudo-labels segments in the training videos for their primary viewpoint (egocentric or exocentric), and then discovers the patterns between the visual and spoken content in a how-to video on the one hand and its view-switch moments on the other hand. Armed with this predictor, our model can be applied to new multi-view video settings for orchestrating which viewpoint should be displayed when, even when such settings come with limited labels. We demonstrate our idea on a variety of real-world videos from HowTo100M and Ego-Exo4D, and rigorously validate its advantages. Project: https://vision.cs.utexas.edu/projects/switch_a_view/.
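The abstract's pretext task pseudo-labels each training segment with its primary viewpoint (egocentric or exocentric); the view-switch moments the model learns from are then the boundaries where that label changes. A minimal illustrative sketch of that labeling-to-switches step (hypothetical code, not the authors' implementation; the `pseudo_labels` input stands in for whatever the paper's pseudo-labeler produces):

```python
def switch_moments(pseudo_labels):
    """Return the segment indices where the displayed viewpoint changes.

    pseudo_labels: per-segment view labels, e.g. ["ego", "exo", ...],
    as produced by some pseudo-labeling step (assumed here).
    """
    return [i for i in range(1, len(pseudo_labels))
            if pseudo_labels[i] != pseudo_labels[i - 1]]


labels = ["ego", "ego", "exo", "exo", "ego"]
print(switch_moments(labels))  # → [2, 4]
```

A predictor trained on such switch points, conditioned on the visual and spoken content around each boundary, is what the paper applies to new multi-view settings.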

Ziad Al-Halah, Kristen Grauman, Sagnik Majumder, Tushar Nagarajan

Subject: Computing Technology, Computer Technology

Ziad Al-Halah, Kristen Grauman, Sagnik Majumder, Tushar Nagarajan. Switch-a-View: View Selection Learned from Unlabeled In-the-wild Videos [EB/OL]. (2024-12-24) [2025-07-02]. https://arxiv.org/abs/2412.18386.
