
Many-Task Federated Fine-Tuning via Unified Task Vectors


Source: arXiv
Abstract

Federated Learning (FL) traditionally assumes homogeneous client tasks; however, in real-world scenarios, clients often specialize in diverse tasks, introducing task heterogeneity. To address this challenge, Many-Task FL (MaT-FL) has emerged, enabling clients to collaborate effectively despite task diversity. Existing MaT-FL approaches rely on client grouping or personalized layers, requiring the server to manage individual models and failing to account for clients handling multiple tasks. We propose MaTU, a MaT-FL approach that enables joint learning of task vectors across clients, eliminating the need for clustering or client-specific weight storage at the server. Our method introduces a novel aggregation mechanism that determines task similarity based on the direction of clients' task vectors and constructs a unified task vector encapsulating all tasks. To address task-specific requirements, we augment the unified task vector with lightweight modulators that facilitate knowledge transfer among related tasks while disentangling dissimilar ones. Evaluated across 30 datasets, MaTU achieves superior performance over state-of-the-art MaT-FL approaches, with results comparable to per-task fine-tuning, while delivering significant communication savings.
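The abstract outlines three ingredients: task vectors (the difference between fine-tuned and base weights), a server-side aggregation that compares the direction of clients' task vectors, and lightweight modulators that specialize the unified vector per task. The following is a minimal, hypothetical sketch of these steps, not the authors' released code: the function names, the plain-mean aggregation, and the element-wise modulator are illustrative assumptions; the paper's actual weighting and modulation schemes may differ.

```python
# Hypothetical sketch of MaTU-style server aggregation (illustrative only).
import torch
import torch.nn.functional as F

def task_vector(finetuned: dict, base: dict) -> torch.Tensor:
    """Flatten the difference between fine-tuned and base weights into one task vector."""
    return torch.cat([(finetuned[k] - base[k]).flatten() for k in base])

def direction_similarity(tv_a: torch.Tensor, tv_b: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of two task vectors, i.e. how aligned their directions are."""
    return F.cosine_similarity(tv_a.unsqueeze(0), tv_b.unsqueeze(0)).squeeze()

def aggregate_unified(task_vectors: list) -> torch.Tensor:
    """Build a single unified task vector over all clients' tasks.
    Here a plain mean; the paper's similarity-based weighting may differ."""
    return torch.stack(task_vectors).mean(dim=0)

def apply_modulator(unified_tv: torch.Tensor, modulator: torch.Tensor) -> torch.Tensor:
    """Specialize the unified task vector for one task with a lightweight,
    element-wise modulator (an assumed form of modulation)."""
    return unified_tv * modulator

# Toy usage with random weights standing in for real model states.
base = {"w": torch.randn(4, 4)}
clients = [{"w": base["w"] + 0.1 * torch.randn(4, 4)} for _ in range(3)]
tvs = [task_vector(c, base) for c in clients]
print(direction_similarity(tvs[0], tvs[1]))   # similarity of two clients' task directions
unified = aggregate_unified(tvs)
specialized = apply_modulator(unified, torch.ones_like(unified))
```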

Vasileios Tsouvalas, Tanir Ozcelebi, Nirvana Meratnia

Subject: Computing technology; computer technology

Vasileios Tsouvalas, Tanir Ozcelebi, Nirvana Meratnia. Many-Task Federated Fine-Tuning via Unified Task Vectors [EB/OL]. (2025-07-08) [2025-07-25]. https://arxiv.org/abs/2502.06376.
