Multi-Lingual Implicit Discourse Relation Recognition with Multi-Label Hierarchical Learning
This paper introduces the first multi-lingual and multi-label classification model for implicit discourse relation recognition (IDRR). Our model, HArch, is evaluated on the recently released DiscoGeM 2.0 corpus and leverages hierarchical dependencies between discourse senses to predict probability distributions across all three sense levels in the PDTB 3.0 framework. We compare several pre-trained encoder backbones and find that RoBERTa-HArch achieves the best performance in English, while XLM-RoBERTa-HArch performs best in the multi-lingual setting. In addition, we compare our fine-tuned models against GPT-4o and Llama-4-Maverick using few-shot prompting across all language configurations. The fine-tuned models consistently outperform these LLMs, highlighting the advantages of task-specific fine-tuning over prompting in IDRR. Finally, we report state-of-the-art results on the DiscoGeM 1.0 corpus, further validating the effectiveness of our hierarchical approach.
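The abstract describes predicting probability distributions across all three PDTB 3.0 sense levels while respecting the hierarchy between them. The paper's actual HArch architecture is not detailed here, so the following is only a minimal illustrative sketch of one common way to enforce such hierarchical consistency: combine a level-1 distribution with per-parent conditional distributions to obtain a level-2 distribution. The sense labels shown are a small illustrative subset of the PDTB 3.0 inventory, not the full label set.

```python
# Toy subset of a PDTB-style sense hierarchy (illustrative only).
HIERARCHY = {
    "Temporal": ["Synchronous", "Asynchronous"],
    "Contingency": ["Cause", "Purpose"],
}

def level2_distribution(p_level1, p_level2_given_parent):
    """Combine a level-1 distribution with per-parent conditional
    level-2 distributions into a joint level-2 distribution.

    p_level1: dict mapping level-1 sense -> probability (sums to 1).
    p_level2_given_parent: dict mapping level-1 sense -> dict of
        child sense -> conditional probability (each sums to 1).
    """
    joint = {}
    for parent, children in HIERARCHY.items():
        for child in children:
            # P(child) = P(parent) * P(child | parent)
            joint[child] = p_level1[parent] * p_level2_given_parent[parent][child]
    return joint

p1 = {"Temporal": 0.6, "Contingency": 0.4}
cond = {
    "Temporal": {"Synchronous": 0.5, "Asynchronous": 0.5},
    "Contingency": {"Cause": 0.75, "Purpose": 0.25},
}
p2 = level2_distribution(p1, cond)
```

Because each level-2 probability is tied to its parent's mass, the resulting distribution is guaranteed to be consistent with the level-1 prediction, which is the intuition behind exploiting hierarchical dependencies between sense levels.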
Nelson Filipe Costa, Leila Kosseim
Computational Linguistics; Computer Science
Nelson Filipe Costa, Leila Kosseim. Multi-Lingual Implicit Discourse Relation Recognition with Multi-Label Hierarchical Learning [EB/OL]. (2025-08-28) [2025-09-06]. https://arxiv.org/abs/2508.20712.