National Preprint Platform

Large Language Models' Internal Perception of Symbolic Music


Source: arXiv
Abstract

Large language models (LLMs) excel at modeling relationships between strings in natural language and have shown promise in extending to other symbolic domains like coding or mathematics. However, the extent to which they implicitly model symbolic music remains underexplored. This paper investigates how LLMs represent musical concepts by generating symbolic music data from textual prompts describing combinations of genres and styles, and evaluating their utility through recognition and generation tasks. We produce a dataset of LLM-generated MIDI files without relying on explicit musical training. We then train neural networks entirely on this LLM-generated MIDI dataset and perform genre and style classification as well as melody completion, benchmarking their performance against established models. Our results demonstrate that LLMs can infer rudimentary musical structures and temporal relationships from text, highlighting both their potential to implicitly encode musical patterns and their limitations due to a lack of explicit musical context, shedding light on their generative capabilities for symbolic music.
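The pipeline described above prompts an LLM with genre/style text and serializes its output as MIDI files for downstream training. As a minimal illustration of the serialization step only, the sketch below writes a list of (pitch, duration) note events to a valid single-track Standard MIDI File using just the Python standard library; the `notes_to_midi` helper and the example note list are hypothetical, not taken from the paper.

```python
import struct

def vlq(value):
    # Encode an integer as a MIDI variable-length quantity (7 bits per byte,
    # high bit set on every byte except the last).
    out = bytearray([value & 0x7F])
    value >>= 7
    while value:
        out.insert(0, 0x80 | (value & 0x7F))
        value >>= 7
    return bytes(out)

def notes_to_midi(notes, ticks_per_beat=480):
    # Serialize (midi_pitch, duration_in_beats) pairs into a format-0 MIDI file.
    track = bytearray()
    for pitch, beats in notes:
        track += vlq(0) + bytes([0x90, pitch, 64])                            # note on
        track += vlq(int(beats * ticks_per_beat)) + bytes([0x80, pitch, 0])   # note off
    track += vlq(0) + bytes([0xFF, 0x2F, 0x00])                               # end-of-track meta event
    header = b'MThd' + struct.pack('>IHHH', 6, 0, 1, ticks_per_beat)
    return header + b'MTrk' + struct.pack('>I', len(track)) + bytes(track)

# Hypothetical LLM output parsed to note events: a C-major arpeggio, one beat each.
data = notes_to_midi([(60, 1), (64, 1), (67, 1)])
```

Writing `data` to a `.mid` file yields a playable clip; in the paper's setting, the note events would instead come from parsing the LLM's textual response to a genre/style prompt.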

Andrew Shin, Kunitake Kaneko

Subject areas: Computing Technology; Computer Technology

Andrew Shin, Kunitake Kaneko. Large Language Models' Internal Perception of Symbolic Music [EB/OL]. (2025-07-17) [2025-08-10]. https://arxiv.org/abs/2507.12808.
