
Video Soundtrack Generation by Aligning Emotions and Temporal Boundaries


Source: arXiv
Abstract

We introduce EMSYNC, a video-based symbolic music generation model that aligns music with a video's emotional content and temporal boundaries. It follows a two-stage framework, in which a pretrained video emotion classifier extracts emotional features and a conditional music generator produces MIDI sequences guided by both emotional and temporal cues. We introduce boundary offsets, a novel temporal conditioning mechanism that enables the model to anticipate scene cuts and align musical chords with them. Unlike existing models, our approach retains event-based encoding, ensuring fine-grained timing control and expressive musical nuance. We also propose a mapping scheme to bridge the video emotion classifier, which produces discrete emotion categories, and the emotion-conditioned MIDI generator, which operates on continuous-valued valence-arousal inputs. In subjective listening tests, EMSYNC outperforms state-of-the-art models across all subjective metrics, for both music theory-aware participants and general listeners.
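The abstract names two conditioning signals: a mapping from discrete emotion categories to continuous valence-arousal coordinates, and boundary offsets that let the generator anticipate scene cuts. The sketch below illustrates the intuition behind both; it is not the authors' implementation, and the names (`EMOTION_TO_VA`, `boundary_offset`) and the valence-arousal coordinates are hypothetical placeholders, not the paper's published mapping.

```python
# A minimal sketch (not the paper's code) of the two conditioning signals
# described in the abstract.

from bisect import bisect_right

# Hypothetical mapping from discrete emotion labels to (valence, arousal)
# in [-1, 1]. The paper proposes such a scheme; these coordinates are
# illustrative placeholders only.
EMOTION_TO_VA = {
    "happy": (0.8, 0.6),
    "sad": (-0.7, -0.4),
    "angry": (-0.6, 0.8),
    "calm": (0.5, -0.6),
}

def boundary_offset(t: float, cut_times: list[float]) -> float | None:
    """Time remaining until the next scene cut strictly after time t (seconds).

    Returns None when no cut lies ahead. Supplying this offset to the music
    generator at every step is the intuition behind the paper's boundary
    offsets: the model can see a cut coming and place a chord change on it.
    """
    i = bisect_right(cut_times, t)
    return cut_times[i] - t if i < len(cut_times) else None

if __name__ == "__main__":
    cuts = [4.0, 9.5, 15.2]  # example scene-cut timestamps in seconds
    for t in (0.0, 4.0, 8.0, 14.0, 16.0):
        print(t, boundary_offset(t, cuts))
    # Continuous-valued conditioning input for the MIDI generator:
    print(EMOTION_TO_VA["happy"])
```

Under these assumptions, the generator would receive the (valence, arousal) pair and the current boundary offset alongside its event-based MIDI tokens at each generation step.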

Serkan Sulun, Matthew E. P. Davies, Paula Viana

Subjects: Computing technology; Computer technology

Serkan Sulun, Matthew E. P. Davies, Paula Viana. Video Soundtrack Generation by Aligning Emotions and Temporal Boundaries [EB/OL]. (2025-08-07) [2025-08-18]. https://arxiv.org/abs/2502.10154.
