
Refusal-Feature-guided Teacher for Safe Finetuning via Data Filtering and Alignment Distillation

Source: arXiv
Abstract

Recently, major AI service providers such as Google and OpenAI have introduced Finetuning-as-a-Service, which enables users to customize Large Language Models (LLMs) for specific downstream tasks using their own data. However, this service is vulnerable to degradation of LLM safety alignment when user data contain harmful prompts. While some prior works address this issue, fundamentally filtering harmful data out of user data remains unexplored. Motivated by our observation that a directional representation reflecting refusal behavior (called the refusal feature), obtained from safety-aligned LLMs, can inherently distinguish between harmful and harmless prompts, we propose the Refusal-Feature-guided Teacher (ReFT). The ReFT model is trained to identify harmful prompts based on the similarity between input prompt features and its refusal feature. During finetuning, the ReFT model serves as a teacher that filters harmful prompts from user data and distills alignment knowledge into the base model. Extensive experiments demonstrate that our ReFT-based finetuning strategy effectively minimizes harmful outputs and improves finetuning accuracy on user-specific tasks, offering a practical solution for the secure and reliable deployment of LLMs in Finetuning-as-a-Service.
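The abstract does not include code, but the filtering mechanism it describes can be illustrated with a minimal sketch. Everything below is an assumption-laden reconstruction, not the authors' implementation: MODEL_NAME, LAYER, and the decision threshold are placeholder choices, and the refusal feature is estimated here by a simple difference-of-means over harmful vs. harmless calibration prompts, which may differ from the paper's training procedure.

```python
# Illustrative sketch only (not the paper's released code).
# Assumptions: the refusal feature is approximated as the difference of mean
# last-token hidden states between harmful and harmless calibration prompts,
# and prompts are flagged by cosine similarity to that direction.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # any safety-aligned chat LLM
LAYER = 14  # hypothetical intermediate layer; the paper may choose differently

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def prompt_feature(prompt: str) -> torch.Tensor:
    """Hidden state of the last prompt token at the chosen layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

def estimate_refusal_feature(harmful: list[str], harmless: list[str]) -> torch.Tensor:
    """Difference-of-means estimate of the refusal direction (unit norm)."""
    mu_harm = torch.stack([prompt_feature(p) for p in harmful]).mean(dim=0)
    mu_safe = torch.stack([prompt_feature(p) for p in harmless]).mean(dim=0)
    direction = mu_harm - mu_safe
    return direction / direction.norm()

def is_harmful(prompt: str, refusal_dir: torch.Tensor, threshold: float = 0.0) -> bool:
    """Flag prompts whose features align with the refusal direction."""
    sim = F.cosine_similarity(prompt_feature(prompt), refusal_dir, dim=0)
    return sim.item() > threshold
```

The second ingredient, alignment distillation, could plausibly be a standard teacher-student KL term added to the task loss during finetuning, with the ReFT model as teacher; the paper's exact objective may differ from this guess:

```python
def alignment_distillation_loss(student_logits: torch.Tensor,
                                teacher_logits: torch.Tensor,
                                temperature: float = 1.0) -> torch.Tensor:
    """Standard KL distillation term (assumed form, not the paper's exact loss)."""
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```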

Seokil Ham, Yubin Choi, Seungju Cho, Yujin Yang, Younghun Kim, Changick Kim

Subjects: Computing Technology; Computer Technology

Seokil Ham, Yubin Choi, Seungju Cho, Yujin Yang, Younghun Kim, Changick Kim. Refusal-Feature-guided Teacher for Safe Finetuning via Data Filtering and Alignment Distillation [EB/OL]. (2025-06-08) [2025-07-02]. https://arxiv.org/abs/2506.07356.
