An Information-theoretic Multi-task Representation Learning Framework for Natural Language Understanding
This paper proposes InfoMTL, a principled information-theoretic multi-task representation learning framework that extracts noise-invariant, sufficient representations for all tasks. It ensures that the shared representations are sufficient for every task and mitigates the negative effect of redundant features, thereby enhancing the language understanding of pre-trained language models (PLMs) under the multi-task paradigm. First, a shared information maximization principle is proposed to learn more sufficient shared representations for all target tasks, avoiding the insufficiency that arises from representation compression in the multi-task paradigm. Second, a task-specific information minimization principle is designed to mitigate the negative effect of potentially redundant input features for each task, compressing task-irrelevant information while preserving the information necessary for multi-task prediction. Experiments on six classification benchmarks show that our method outperforms 12 comparative multi-task methods under the same multi-task settings, especially in data-constrained and noisy scenarios. Extensive experiments demonstrate that the learned representations are more sufficient, data-efficient, and robust.
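As a rough illustrative sketch only (the abstract does not state the exact objective), the two principles can be written in information-theoretic form. Assume $X$ is the input, $Z$ the shared representation, $Z_t$ the task-specific representation and $Y_t$ the label for task $t$, with $\beta_t$ a hypothetical trade-off coefficient:

% Hypothetical sketch, not the paper's exact objective.
% Shared information maximization: keep the shared representation sufficient for all tasks.
\max_{p(z \mid x)} \; I(Z; X)
% Task-specific information minimization: an information-bottleneck-style trade-off that
% compresses task-irrelevant information in Z_t while preserving what predicts Y_t.
\min_{p(z_t \mid z)} \; I(Z_t; Z) - \beta_t \, I(Z_t; Y_t), \qquad t = 1, \dots, T

In practice such mutual-information terms are typically optimized through variational bounds (e.g., contrastive lower bounds for the maximization and KL-based upper bounds for the compression term); the concrete estimators used by InfoMTL are given in the full paper, not in this abstract.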
Wei Zhou, Dou Hu, Lingwei Wei, Songlin Hu
Computing Technology; Computer Technology
Wei Zhou, Dou Hu, Lingwei Wei, Songlin Hu. An Information-theoretic Multi-task Representation Learning Framework for Natural Language Understanding [EB/OL]. (2025-03-06) [2025-05-23]. https://arxiv.org/abs/2503.04667.