
Overcoming catastrophic forgetting in neural networks

Source: arXiv
Abstract

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on the MNIST handwritten digit dataset and by learning several Atari 2600 games sequentially.
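The approach summarized above is elastic weight consolidation (EWC). After training on an old task, the importance of each weight is estimated (the paper uses a diagonal Fisher information estimate), and a quadratic penalty then slows learning on the important weights while the network trains on a new task. The following is a minimal NumPy sketch under those assumptions; the names ewc_penalty and lam are illustrative, not from the paper.

import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam=1.0):
    # Quadratic penalty (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2.
    # theta:       current weights while training the new task
    # theta_star:  weights learned for the old task
    # fisher_diag: per-weight importance estimate (diagonal Fisher)
    # lam:         strength of old-task protection (assumed hyperparameter name)
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

def total_loss(new_task_loss, theta, theta_star, fisher_diag, lam=1.0):
    # Objective minimized on the new task: its own loss plus the penalty
    # that keeps weights important to the old task near theta_star.
    return new_task_loss + ewc_penalty(theta, theta_star, fisher_diag, lam)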

Joel Veness, Kieran Milan, Dharshan Kumaran, Demis Hassabis, James Kirkpatrick, Andrei A. Rusu, Guillaume Desjardins, Neil Rabinowitz, Razvan Pascanu, Agnieszka Grabska-Barwinska, Raia Hadsell, John Quan, Claudia Clopath, Tiago Ramalho

DOI: 10.1073/pnas.1611835114

Subjects: Computing Technology; Computer Technology

Joel Veness, Kieran Milan, Dharshan Kumaran, Demis Hassabis, James Kirkpatrick, Andrei A. Rusu, Guillaume Desjardins, Neil Rabinowitz, Razvan Pascanu, Agnieszka Grabska-Barwinska, Raia Hadsell, John Quan, Claudia Clopath, Tiago Ramalho. Overcoming catastrophic forgetting in neural networks [EB/OL]. (2016-12-02) [2025-08-08]. https://arxiv.org/abs/1612.00796.
