
Multi-Armed Bandits Meet Large Language Models

Source: arXiv
Abstract

Bandit algorithms and Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, each addressing distinct yet complementary challenges in decision-making and natural language processing. This survey explores the synergistic potential between these two fields, highlighting how bandit algorithms can enhance the performance of LLMs and how LLMs, in turn, can provide novel insights for improving bandit-based decision-making. We first examine the role of bandit algorithms in optimizing LLM fine-tuning, prompt engineering, and adaptive response generation, focusing on their ability to balance exploration and exploitation in large-scale learning tasks. Subsequently, we explore how LLMs can augment bandit algorithms through advanced contextual understanding, dynamic adaptation, and improved policy selection using natural language reasoning. By providing a comprehensive review of existing research and identifying key challenges and opportunities, this survey aims to bridge the gap between bandit algorithms and LLMs, paving the way for innovative applications and interdisciplinary research in AI.
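As a rough illustration of the exploration-exploitation balance the abstract describes, the sketch below treats candidate prompts as bandit arms and selects among them with UCB1. Everything here is hypothetical and illustrative, not taken from the paper: the prompts, the true_quality probabilities, and the pull() feedback simulator are invented placeholders.

```python
# Minimal UCB1 sketch: treating candidate prompts as bandit arms.
# Prompts and reward probabilities are hypothetical, for illustration only.
import math
import random

prompts = [
    "Answer concisely:",
    "Think step by step, then answer:",
    "Answer as an expert would:",
]
# Hypothetical probability that each prompt yields a "good" response.
true_quality = [0.55, 0.70, 0.60]

counts = [0] * len(prompts)    # times each prompt was tried
values = [0.0] * len(prompts)  # running mean reward per prompt

def pull(arm: int) -> float:
    """Simulate user feedback: 1 if the response was judged good, else 0."""
    return 1.0 if random.random() < true_quality[arm] else 0.0

for t in range(1, 1001):
    # Try every arm once before applying the UCB rule.
    if 0 in counts:
        arm = counts.index(0)
    else:
        # UCB1: mean reward plus an exploration bonus that shrinks
        # as an arm accumulates observations.
        arm = max(range(len(prompts)),
                  key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = max(range(len(prompts)), key=lambda a: values[a])
print(f"Pulls per prompt: {counts}")
print(f"Estimated best prompt: {prompts[best]!r} (mean reward {values[best]:.2f})")
```

UCB1's selection rule adds a shrinking exploration bonus to each arm's empirical mean, so rarely tried prompts keep being sampled until the accumulated evidence clearly favors one of them.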

Djallel Bouneffouf, Raphael Feraud

Computing Technology; Computer Technology

Djallel Bouneffouf, Raphael Feraud. Multi-Armed Bandits Meet Large Language Models [EB/OL]. (2025-05-19) [2025-07-22]. https://arxiv.org/abs/2505.13355.
