Fine-Tuning MIDI-to-Audio Alignment using a Neural Network on Piano Roll and CQT Representations
In this paper, we present a neural network approach for synchronizing audio recordings of human piano performances with their corresponding loosely aligned MIDI files. The task is addressed using a Convolutional Recurrent Neural Network (CRNN) architecture, which effectively captures spectral and temporal features by processing an unaligned piano roll and a spectrogram as inputs to estimate the aligned piano roll. To train the network, we create a dataset of piano pieces with augmented MIDI files that simulate common human timing errors. The proposed model achieves up to 20% higher alignment accuracy than the industry-standard Dynamic Time Warping (DTW) method across various tolerance windows. Furthermore, integrating DTW with the CRNN yields additional improvements, offering enhanced robustness and consistency. These findings demonstrate the potential of neural networks in advancing state-of-the-art MIDI-to-audio alignment.
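The abstract's baseline, Dynamic Time Warping (DTW), aligns two feature sequences (e.g., CQT frames of the audio and piano-roll frames of the MIDI) by finding a minimum-cost monotonic warping path. The sketch below is a generic DTW implementation on small NumPy feature matrices to illustrate that baseline; it is not the paper's CRNN, and the frame features and distance measure are illustrative assumptions.

```python
import numpy as np

def dtw_path(X, Y):
    """Dynamic Time Warping between feature sequences X (n, d) and Y (m, d).

    Returns the accumulated alignment cost and the optimal warping path as a
    list of (i, j) frame-index pairs. Generic baseline sketch, not the CRNN.
    """
    n, m = len(X), len(Y)
    # Pairwise Euclidean distances between all frame pairs
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # Accumulated cost matrix with a padded border of infinities
    C = np.full((n + 1, m + 1), np.inf)
    C[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            C[i, j] = D[i - 1, j - 1] + min(C[i - 1, j],      # insertion
                                            C[i, j - 1],      # deletion
                                            C[i - 1, j - 1])  # match
    # Backtrack from (n, m) to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([C[i - 1, j - 1], C[i - 1, j], C[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return C[n, m], path[::-1]

# Toy usage: Y is a time-stretched copy of X, so a zero-cost path exists.
X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[0.0], [0.0], [1.0], [2.0], [2.0]])
cost, path = dtw_path(X, Y)
```

In the alignment setting described above, the warping path maps each MIDI frame index to an audio frame index; the paper's contribution is to refine such a loose alignment with a CRNN rather than rely on DTW alone.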
Sebastian Murgul, Moritz Reiser, Michael Heizmann, Christoph Seibert
Computing technology, computer science
Sebastian Murgul, Moritz Reiser, Michael Heizmann, Christoph Seibert. Fine-Tuning MIDI-to-Audio Alignment using a Neural Network on Piano Roll and CQT Representations [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2506.22237