
LongVILA: Scaling Long-Context Visual Language Models for Long Videos

Source: arXiv

Abstract

Long-context capability is critical for multi-modal foundation models, especially for long video understanding. We introduce LongVILA, a full-stack solution for long-context visual-language models that co-designs the algorithm and system. For model training, we upgrade existing VLMs to support long video understanding by incorporating two additional stages, i.e., long context extension and long video supervised fine-tuning. However, training on long videos is computationally and memory intensive. We introduce the long-context Multi-Modal Sequence Parallelism (MM-SP) system, which efficiently parallelizes long video training and inference, enabling 2M context length training on 256 GPUs without any gradient checkpointing. LongVILA efficiently extends the number of video frames of VILA from 8 to 2048, achieving 99.8% accuracy on a 6,000-frame (more than 1 million tokens) video needle-in-a-haystack task. LongVILA-7B demonstrates strong accuracy on 9 popular video benchmarks, e.g., 65.1% on VideoMME with subtitles. In addition, MM-SP is 2.1x-5.7x faster than ring-style sequence parallelism and 1.1x-1.4x faster than Megatron with hybrid context and tensor parallelism. Moreover, it seamlessly integrates with Hugging Face Transformers.
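The core idea behind sequence parallelism, as described in the abstract, is to split one very long multimodal token sequence across many GPUs so that each device only holds and attends over a fraction of the tokens. The following is a minimal Python sketch of that sharding step only; it is not the authors' MM-SP implementation, and the function name `shard_sequence` and its arguments are illustrative assumptions.

```python
# Minimal sketch (NOT the MM-SP system): evenly shard a long multimodal
# token sequence across sequence-parallel ranks, so each GPU holds about
# seq_len / world_size tokens. All names here are illustrative assumptions.
import torch


def shard_sequence(hidden_states: torch.Tensor, world_size: int, rank: int) -> torch.Tensor:
    """Return the contiguous slice of the sequence owned by `rank`.

    hidden_states: [batch, seq_len, hidden] tensor of interleaved video-frame
    and text tokens; for long videos, seq_len may exceed 1M tokens.
    """
    batch, seq_len, hidden = hidden_states.shape
    # Pad the sequence dimension so it divides evenly across ranks.
    pad = (-seq_len) % world_size
    if pad:
        hidden_states = torch.nn.functional.pad(hidden_states, (0, 0, 0, pad))
    chunk = hidden_states.shape[1] // world_size
    return hidden_states[:, rank * chunk : (rank + 1) * chunk, :]


if __name__ == "__main__":
    # Toy example: a ~1M-token sequence split over 256 ranks.
    x = torch.zeros(1, 1_000_006, 8)
    local = shard_sequence(x, world_size=256, rank=0)
    print(local.shape)  # each rank holds roughly 1_000_006 / 256 tokens
```

In a real sequence-parallel system, the per-rank shards would then exchange key/value blocks (e.g., in a ring pattern) during attention; the sketch above only illustrates the partitioning arithmetic that makes 1M+ token training fit in memory.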

Yuke Zhu, Yunhao Fang, Hongxu Yin, Xiuyu Li, Jan Kautz, Zhijian Liu, Fuzhao Xue, Yao Lu, Ethan He, Linxi Fan, Ligeng Zhu, Haotian Tang, Dacheng Li, Song Han, Shang Yang, Pavlo Molchanov, Yukang Chen, Qinghao Hu

Subject: Computing Technology; Computer Technology

Yuke Zhu, Yunhao Fang, Hongxu Yin, Xiuyu Li, Jan Kautz, Zhijian Liu, Fuzhao Xue, Yao Lu, Ethan He, Linxi Fan, Ligeng Zhu, Haotian Tang, Dacheng Li, Song Han, Shang Yang, Pavlo Molchanov, Yukang Chen, Qinghao Hu. LongVILA: Scaling Long-Context Visual Language Models for Long Videos [EB/OL]. (2024-08-19) [2025-08-02]. https://arxiv.org/abs/2408.10188.
