TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data
Recent advances in foundation models, such as LLMs, have revolutionized conversational AI. Chatbots are increasingly being developed by customizing LLMs on specific conversational datasets. However, mitigating toxicity during this customization, especially when dealing with untrusted training data, remains a significant challenge. To address this, we introduce TuneShield, a defense framework designed to mitigate toxicity during chatbot fine-tuning while preserving conversational quality. TuneShield leverages LLM-based toxicity classification, utilizing the instruction-following capabilities and safety alignment of LLMs to effectively identify toxic samples, outperforming industry API services. TuneShield generates synthetic conversation samples, termed 'healing data', based on the identified toxic samples, using them to mitigate toxicity while reinforcing desirable behavior during fine-tuning. It performs an alignment process to further nudge the chatbot towards producing desired responses. Our findings show that TuneShield effectively mitigates toxicity injection attacks while preserving conversational quality, even when the toxicity classifiers are imperfect or biased. TuneShield proves to be resilient against adaptive adversarial and jailbreak attacks. Additionally, TuneShield demonstrates effectiveness in mitigating adaptive toxicity injection attacks during dialog-based learning (DBL).
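To make the pipeline described in the abstract more concrete, the sketch below outlines the high-level flow in Python: an LLM-based classifier flags toxic samples in the untrusted dataset, synthetic "healing" responses are generated for the flagged samples, and the cleaned set is assembled for fine-tuning. This is a minimal illustration, not the paper's implementation; `is_toxic`, `generate_healing_response`, and the `Turn` structure are hypothetical placeholders, and the prompts, models, and final alignment step are omitted.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Turn:
    context: str    # conversation history / user message
    response: str   # candidate chatbot response from the untrusted dataset


def is_toxic(turn: Turn) -> bool:
    """Hypothetical LLM-based toxicity judgment.

    The paper prompts an instruction-following, safety-aligned LLM to decide
    whether the response is toxic; here the decision is only stubbed out.
    """
    raise NotImplementedError("call your LLM-based toxicity classifier here")


def generate_healing_response(turn: Turn) -> str:
    """Hypothetical generator producing a safe, on-topic replacement
    response ('healing data') for a flagged toxic sample."""
    raise NotImplementedError("call your LLM generator here")


def build_finetuning_set(dataset: List[Turn]) -> List[Turn]:
    """Replace flagged toxic responses with healing data; keep clean samples."""
    cleaned: List[Turn] = []
    for turn in dataset:
        if is_toxic(turn):
            cleaned.append(Turn(turn.context, generate_healing_response(turn)))
        else:
            cleaned.append(turn)
    return cleaned
```

The resulting set would then be used for fine-tuning, followed by the alignment step the abstract mentions to further nudge the chatbot toward the desired responses; both are outside the scope of this sketch.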
Aravind Cheruvu, Shravya Kanchi, Sifat Muhammad Abdullah, Nicholas Kong, Daphne Yao, Murtuza Jadliwala, Bimal Viswanath
Computing Technology, Computer Science
Aravind Cheruvu, Shravya Kanchi, Sifat Muhammad Abdullah, Nicholas Kong, Daphne Yao, Murtuza Jadliwala, Bimal Viswanath. TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data [EB/OL]. (2025-07-08) [2025-07-21]. https://arxiv.org/abs/2507.05660