Enhancing Adaptive Behavioral Interventions with LLM Inference from Participant-Described States
The use of reinforcement learning (RL) methods to support health behavior change via personalized and just-in-time adaptive interventions is of significant interest to health and behavioral science researchers focused on problems such as smoking cessation support and physical activity promotion. However, RL methods are often applied to these domains using a small collection of context variables to mitigate the significant data scarcity issues that arise from practical limitations on the design of adaptive intervention trials. In this paper, we explore an approach to significantly expanding the state space of an adaptive intervention without impacting data efficiency. The proposed approach enables intervention participants to provide natural language descriptions of aspects of their current state. It then leverages inference with pre-trained large language models (LLMs) to better align the policy of a base RL method with these state descriptions. To evaluate our method, we develop a novel physical activity intervention simulation environment that generates text-based state descriptions conditioned on latent state variables using an auxiliary LLM. We show that this approach has the potential to significantly improve the performance of online policy learning methods.
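The core idea of combining a base RL policy with LLM inference over a participant's text description can be sketched in a few lines. The sketch below is illustrative only and is not the paper's implementation: the hypothetical `llm_alignment_scores` function stands in for a pre-trained LLM (a real system would prompt the model; here a simple keyword heuristic substitutes), and the additive combination with the Q-values is one plausible alignment scheme among many.

```python
import numpy as np

def llm_alignment_scores(state_text, actions):
    # Hypothetical stand-in for LLM inference: score each candidate action's
    # suitability given the participant's free-text state description.
    # A keyword heuristic substitutes for a real pre-trained LLM here.
    keywords = {
        "send_message": ["motivated", "free", "walk"],
        "do_nothing": ["busy", "stressed", "tired"],
    }
    text = state_text.lower()
    return np.array([sum(k in text for k in keywords[a]) for a in actions],
                    dtype=float)

def combined_policy(q_values, state_text, actions, weight=1.0):
    # Align the base RL policy with the described state by adding the
    # LLM-derived scores to the Q-values before the greedy action choice.
    scores = llm_alignment_scores(state_text, actions)
    return actions[int(np.argmax(q_values + weight * scores))]

actions = ["send_message", "do_nothing"]
q = np.array([0.1, 0.2])  # the base RL policy slightly prefers inaction
text = "I'm feeling tired and stressed after work today."
print(combined_policy(q, text, actions))  # prints "do_nothing"
```

The `weight` parameter controls how strongly the text-derived signal can override the base policy; the paper's actual combination rule and prompting setup may differ.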
Karine Karine, Benjamin M. Marlin
Medical research methods
Karine Karine, Benjamin M. Marlin. Enhancing Adaptive Behavioral Interventions with LLM Inference from Participant-Described States [EB/OL]. (2025-07-05) [2025-07-16]. https://arxiv.org/abs/2507.03871.