
Large Language Models for EEG: A Comprehensive Survey and Taxonomy

Source: arXiv

Abstract

The growing convergence between Large Language Models (LLMs) and electroencephalography (EEG) research is enabling new directions in neural decoding, brain-computer interfaces (BCIs), and affective computing. This survey offers a systematic review and structured taxonomy of recent advancements that utilize LLMs for EEG-based analysis and applications. We organize the literature into four domains: (1) LLM-inspired foundation models for EEG representation learning, (2) EEG-to-language decoding, (3) cross-modal generation including image and 3D object synthesis, and (4) clinical applications and dataset management tools. The survey highlights how transformer-based architectures adapted through fine-tuning, few-shot, and zero-shot learning have enabled EEG-based models to perform complex tasks such as natural language generation, semantic interpretation, and diagnostic assistance. By offering a structured overview of modeling strategies, system designs, and application areas, this work serves as a foundational resource for future work to bridge natural language processing and neural signal analysis through language models.

Naseem Babu, Jimson Mathew, A. P. Vinod

Computing Technology; Computer Science and Technology

Naseem Babu, Jimson Mathew, A. P. Vinod. Large Language Models for EEG: A Comprehensive Survey and Taxonomy [EB/OL]. (2025-06-02) [2025-06-19]. https://arxiv.org/abs/2506.06353.