
Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives


Source: arXiv

Abstract

The Chain-of-Thought (CoT) paradigm has become a pivotal method for solving complex problems with large language models (LLMs). However, its application to domain-specific tasks remains challenging, as LLMs often fail to decompose tasks accurately or execute subtasks effectively. This paper introduces the Re-TASK framework, a novel theoretical model that revisits LLM tasks from capability, skill, and knowledge perspectives, drawing on the principles of Bloom's Taxonomy and Knowledge Space Theory. While CoT provides a workflow-centric perspective on tasks, Re-TASK introduces a Chain-of-Learning (CoL) paradigm that highlights task dependencies on specific capability items, further broken down into their constituent knowledge and skill components. To address CoT failures, we propose a Re-TASK prompting strategy, which strengthens task-relevant capabilities through targeted knowledge injection and skill adaptation. Experiments across diverse domains demonstrate the effectiveness of Re-TASK. In particular, we achieve improvements of 45.00% on Yi-1.5-9B and 24.50% on Llama3-Chinese-8B for legal tasks. These results highlight the potential of Re-TASK to significantly enhance LLM performance and its applicability in specialized domains. We release our code and data at https://github.com/Uylee/Re-TASK.
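To make the Re-TASK prompting strategy described in the abstract more concrete, the following minimal Python sketch shows how a task prompt could be augmented with knowledge injection and skill-adaptation demonstrations drawn from capability items. The data structure, function names, and prompt layout here are illustrative assumptions, not the authors' released implementation; see https://github.com/Uylee/Re-TASK for the official code and data.

from dataclasses import dataclass, field
from typing import List


@dataclass
class CapabilityItem:
    """A capability item: a piece of knowledge plus demonstrations of the skill that applies it.

    Hypothetical structure for illustration only.
    """
    knowledge: str                                          # domain knowledge the task depends on
    skill_demos: List[str] = field(default_factory=list)    # worked examples exercising the skill


def build_retask_prompt(task_instruction: str, items: List[CapabilityItem]) -> str:
    """Compose a Re-TASK-style prompt: knowledge injection, then skill adaptation, then the task."""
    sections = []

    # 1) Knowledge injection: surface the knowledge relevant to the task.
    knowledge_block = "\n".join(f"- {it.knowledge}" for it in items)
    sections.append(f"Relevant knowledge:\n{knowledge_block}")

    # 2) Skill adaptation: demonstrations of applying that knowledge.
    demos = [d for it in items for d in it.skill_demos]
    if demos:
        sections.append("Worked examples:\n" + "\n\n".join(demos))

    # 3) The original task instruction comes last.
    sections.append(f"Task:\n{task_instruction}")
    return "\n\n".join(sections)


if __name__ == "__main__":
    # Illustrative legal-domain example (placeholder content, not from the paper's dataset).
    items = [
        CapabilityItem(
            knowledge="Definition of the offence of theft under the applicable criminal code.",
            skill_demos=["Example: given facts F, the applicable charge is theft because ..."],
        ),
    ]
    print(build_retask_prompt("Determine the applicable charge for the following case: ...", items))

Running the sketch prints a prompt in which the injected knowledge and worked examples precede the task instruction, mirroring the Chain-of-Learning idea that a task depends on capability items composed of knowledge and skill components.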

Zhihu Wang, Shiwan Zhao, Yu Wang, Heyuan Huang, Sitao Xie, Yubo Zhang, Jiaxin Shi, Zhixing Wang, Hongyan Li, Junchi Yan

Subjects: Computing Technology, Computer Technology; Law

Zhihu Wang, Shiwan Zhao, Yu Wang, Heyuan Huang, Sitao Xie, Yubo Zhang, Jiaxin Shi, Zhixing Wang, Hongyan Li, Junchi Yan. Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives [EB/OL]. (2025-06-19) [2025-07-16]. https://arxiv.org/abs/2408.06904.
