
A Real-Time Gesture-Based Control Framework

Source: arXiv
Abstract

We introduce a real-time, human-in-the-loop gesture control framework that can dynamically adapt audio and music based on human movement by analyzing live video input. By creating a responsive connection between visual and auditory stimuli, this system enables dancers and performers to not only respond to music but also influence it through their movements. Designed for live performances, interactive installations, and personal use, it offers an immersive experience where users can shape the music in real time. The framework integrates computer vision and machine learning techniques to track and interpret motion, allowing users to manipulate audio elements such as tempo, pitch, effects, and playback sequence. With ongoing training, it achieves user-independent functionality, requiring as few as 50 to 80 samples to label simple gestures. This framework combines gesture training, cue mapping, and audio manipulation to create a dynamic, interactive experience. Gestures are interpreted as input signals, mapped to sound control commands, and used to naturally adjust music elements, showcasing the seamless interplay between human interaction and machine response.
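The abstract describes a three-stage pipeline of gesture training, cue mapping, and audio manipulation driven by live video. The paper's implementation is not reproduced here; the following is only a minimal sketch of how such a loop could be wired, assuming MediaPipe hand landmarks as the vision front end, a k-NN classifier trained on a small labeled set (on the order of the 50 to 80 samples mentioned above), OSC messages to an external audio engine, and hypothetical data files gesture_features.npy and gesture_labels.npy.

```python
# Minimal sketch (not the authors' implementation): webcam -> gesture -> audio cue.
# Assumes MediaPipe hand landmarks, a k-NN classifier trained on ~50-80 labeled
# samples per gesture, and OSC messages sent to an external audio engine.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.neighbors import KNeighborsClassifier
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical cue mapping: gesture class id -> audio control command.
GESTURE_TO_CUE = {0: "/tempo/up", 1: "/tempo/down", 2: "/pitch/up", 3: "/fx/toggle"}

def landmarks_to_vector(hand_landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a 63-dim feature vector."""
    return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark]).ravel()

# Gesture training stage: features/labels collected in a short labeling session
# (file names are placeholders for illustration only).
X_train = np.load("gesture_features.npy")   # shape: (n_samples, 63)
y_train = np.load("gesture_labels.npy")     # shape: (n_samples,)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

osc = SimpleUDPClient("127.0.0.1", 9000)    # audio engine listening for OSC cues
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        features = landmarks_to_vector(result.multi_hand_landmarks[0])
        gesture = int(clf.predict(features.reshape(1, -1))[0])
        # Audio manipulation stage: forward the mapped cue to the audio engine.
        osc.send_message(GESTURE_TO_CUE.get(gesture, "/noop"), 1.0)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The classifier, OSC addresses, and port here are placeholders; the actual framework may rely on different tracking, learning, and audio back ends than those assumed in this sketch.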

Mahya Khazaei, Ali Bahrani, George Tzanetakis

Subjects: Computing and Computer Technology; Electronic Technology Applications

Mahya Khazaei, Ali Bahrani, George Tzanetakis. A Real-Time Gesture-Based Control Framework [EB/OL]. (2025-04-27) [2025-06-07]. https://arxiv.org/abs/2504.19460.