
Pre-Training and Fine-Tuning Generative Flow Networks

Source: arXiv

Abstract

Generative Flow Networks (GFlowNets) are amortized samplers that learn stochastic policies to sequentially generate compositional objects from a given unnormalized reward distribution. They can generate diverse sets of high-reward objects, which is an important consideration in scientific discovery tasks. However, because they are typically trained for a given extrinsic reward function, how to leverage the power of pre-training and train GFlowNets in an unsupervised fashion for efficient adaptation to downstream tasks remains an important open challenge. Inspired by recent successes of unsupervised pre-training in various domains, we introduce a novel approach for reward-free pre-training of GFlowNets. By framing the training as a self-supervised problem, we propose an outcome-conditioned GFlowNet (OC-GFN) that learns to explore the candidate space. Specifically, OC-GFN learns to reach any targeted outcome, akin to goal-conditioned policies in reinforcement learning. We show that the pre-trained OC-GFN model allows for direct extraction of a policy capable of sampling from any new reward function in downstream tasks. However, adapting OC-GFN to a downstream task-specific reward involves an intractable marginalization over possible outcomes. We propose a novel way to approximate this marginalization by learning an amortized predictor, enabling efficient fine-tuning. Extensive experimental results validate the efficacy of our approach, demonstrating the effectiveness of pre-training OC-GFN and its ability to swiftly adapt to downstream tasks and discover modes more efficiently. This work may serve as a foundation for further exploration of pre-training strategies in the context of GFlowNets.
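For concreteness, the two key steps in the abstract can be read schematically as follows (the notation here is ours, not taken verbatim from the paper): reward-free pre-training learns outcome-conditioned flows F(s -> s' | y), so that the conditional policy reaches any target outcome y, much like a goal-conditioned policy in reinforcement learning. Adapting to a downstream reward R over outcomes then requires a reward-weighted marginal,

\[
  F_R(s \to s') \;=\; \sum_{y \in \mathcal{Y}} R(y)\, F(s \to s' \mid y) \;\approx\; \hat{F}_R(s \to s'; \theta),
\]

where the sum over the typically enormous outcome space \mathcal{Y} is intractable, and the amortized predictor described in the abstract can be read as a learned approximation \hat{F}_R trained on sampled outcomes.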

Yoshua Bengio, Kanika Madan, Ling Pan, Moksh Jain

Subjects: Computing Technology; Computer Technology

Yoshua Bengio, Kanika Madan, Ling Pan, Moksh Jain. Pre-Training and Fine-Tuning Generative Flow Networks [EB/OL]. (2023-10-05) [2025-07-25]. https://arxiv.org/abs/2310.03419.
