Motor Cortex Encodes A Temporal Difference Reinforcement Learning Process
Abstract

Temporal difference reinforcement learning (TDRL) accurately models associative learning observed in animals, in which they learn to associate outcome-predicting environmental states, termed conditioned stimuli (CS), with the value of outcomes, such as rewards, termed unconditioned stimuli (US). A component of TDRL is the value function, which captures the expected cumulative future reward from a given state. The value function can be modified by changes in the animal's knowledge, such as by the predictability of its environment. Here we show that primary motor cortical (M1) neurodynamics reflect a TD learning process, encoding a state value function and reward prediction error in line with TDRL. M1 responds to the delivery of reward, and shifts its value-related response earlier in a trial, becoming predictive of an expected reward, when reward is predictable due to a CS. This is observed in tasks performed manually or observed passively, as well as in tasks without an explicit CS predicting reward but with a predictable temporal structure, that is, a predictable environment. M1 also encodes the expected reward value associated with a set of CS in a multiple-reward-level CS-US task. Here we extend the microstimulus TDRL model, reported to accurately capture RL-related dopaminergic activity, to account for M1 reward-related neural activity in a multitude of tasks.

Significance statement

There is a great deal of agreement between aspects of temporal difference reinforcement learning (TDRL) models and neural activity in dopaminergic brain centers. Dopamine is known to be necessary for sensorimotor-learning-induced synaptic plasticity in the motor cortex (M1), and thus one might expect to see the hallmarks of TDRL in M1, which we show here in the form of a state value function and reward prediction error.
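The TDRL ingredients named in the abstract can be illustrated with a minimal tabular TD(0) sketch (this is an illustrative simplification, not the microstimulus model used in the paper; all state counts and parameter values are assumptions chosen for the example). A trial is a sequence of time-step states with a CS at trial start and a reward (US) at the end; the reward prediction error drives updates to the state value function, and after learning the value function predicts the upcoming reward, so the error at reward delivery shrinks:

```python
import numpy as np

# Illustrative sketch of a TD learning process: states are time steps
# within a CS->US trial; reward arrives at the final step. Parameter
# values below are arbitrary choices for demonstration.
N_STATES = 10      # time steps per trial (CS at t=0, US at t=9)
ALPHA = 0.1        # learning rate
GAMMA = 0.98       # temporal discount factor
N_TRIALS = 500

V = np.zeros(N_STATES + 1)  # state value function; extra terminal state

def run_trial(V):
    """Run one trial, updating V in place; return per-step RPEs."""
    rpes = []
    for t in range(N_STATES):
        reward = 1.0 if t == N_STATES - 1 else 0.0  # US at trial end
        # TD reward prediction error: delta = r + gamma*V(s') - V(s)
        delta = reward + GAMMA * V[t + 1] - V[t]
        V[t] += ALPHA * delta
        rpes.append(delta)
    return rpes

early = run_trial(V)            # first trial: V is still all zeros
for _ in range(N_TRIALS):
    rpes = run_trial(V)         # keep the RPEs from the final trial

# Before learning, the RPE is concentrated at reward delivery; after
# learning, value has propagated back toward the CS and the RPE at the
# US is near zero because the reward is fully predicted.
print("early RPE at US:", round(early[-1], 3))
print("late  RPE at US:", round(rpes[-1], 3))
print("value at CS after learning:", round(V[0], 3))
```

The shrinking RPE at the US, together with the nonzero value at the CS-onset state, is the tabular analogue of the response shift described above: reward-related activity moves from reward delivery toward the reward-predicting stimulus as the environment becomes predictable.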
We see these hallmarks even when a conditioned stimulus is not available but the environment is predictable, during manual tasks with agency as well as observational tasks without agency. This information has implications for autonomously updating brain-machine interfaces, as we and others have proposed and published.
McNiel David B, Francis Joseph T, Marsh Brandi T, Tarigoppula Venkata S Aditya, Hessburg John P, Choi John S
Department of Physiology and Pharmacology, The Robert F Furchgott Center for Neural, University of Houston; Department of Biomedical Engineering, University of Houston
Current state of the biological sciences; development of the biological sciences; biological science research methods; biological science research techniques; physiology
McNiel David B, Francis Joseph T, Marsh Brandi T, Tarigoppula Venkata S Aditya, Hessburg John P, Choi John S. Motor Cortex Encodes A Temporal Difference Reinforcement Learning Process [EB/OL]. (2025-03-28) [2025-05-08]. https://www.biorxiv.org/content/10.1101/257337.