Continual Generalized Category Discovery: Learning and Forgetting from a Bayesian Perspective
Continual Generalized Category Discovery (C-GCD) faces a critical challenge: incrementally learning new classes from unlabeled data streams while preserving knowledge of old classes. Existing methods struggle with catastrophic forgetting, especially when unlabeled data mixes known and novel categories. We address this by analyzing C-GCD's forgetting dynamics through a Bayesian lens, revealing that covariance misalignment between old and new classes drives performance degradation. Building on this insight, we propose Variational Bayes C-GCD (VB-CGCD), a novel framework that integrates variational inference with covariance-aware nearest-class-mean classification. VB-CGCD adaptively aligns class distributions while suppressing pseudo-label noise via stochastic variational updates. Experiments show that VB-CGCD surpasses prior art by +15.21% in final-session overall accuracy on standard benchmarks. We also introduce a challenging new benchmark with only 10% labeled data and extended online phases, on which VB-CGCD achieves 67.86% final accuracy, significantly higher than the state of the art (38.55%), demonstrating its robust applicability across diverse scenarios. Code is available at: https://github.com/daihao42/VB-CGCD
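The abstract names covariance-aware nearest-class-mean classification as a core component. A minimal sketch of that idea (not the paper's implementation) is a Mahalanobis-distance classifier: each class is summarized by a mean and a covariance estimated from its features, and samples are assigned to the class with the smallest covariance-weighted distance. The shrinkage term below is an illustrative assumption to keep covariances invertible, not a detail from the paper.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Estimate a per-class mean and inverse covariance from labeled features."""
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]
        mu = x.mean(axis=0)
        # Shrinkage toward the identity keeps the covariance invertible
        # when a class has few samples (illustrative choice, not from the paper).
        cov = np.cov(x, rowvar=False) + 1e-3 * np.eye(x.shape[1])
        stats[c] = (mu, np.linalg.inv(cov))
    return stats

def predict(stats, x):
    """Assign each sample to the class with the smallest Mahalanobis distance."""
    classes = sorted(stats)
    dists = np.stack([
        # (x - mu)^T Sigma^{-1} (x - mu), computed row-wise over samples
        np.einsum('ij,jk,ik->i', x - stats[c][0], stats[c][1], x - stats[c][0])
        for c in classes
    ], axis=1)
    return np.array(classes)[dists.argmin(axis=1)]
```

Unlike a plain nearest-class-mean rule, this weighting accounts for each class's feature spread, which is exactly the quantity the abstract argues becomes misaligned across sessions.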
Hao Dai, Jagmohan Chauhan
Computing Technology, Computer Technology
Hao Dai, Jagmohan Chauhan. Continual Generalized Category Discovery: Learning and Forgetting from a Bayesian Perspective [EB/OL]. (2025-07-23) [2025-08-18]. https://arxiv.org/abs/2507.17382.