BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them
BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them
Understanding biases and stereotypes encoded in the weights of Large Language Models (LLMs) is crucial for developing effective mitigation strategies. Biased behaviour is often subtle and non-trivial to isolate, even when deliberately elicited, making systematic analysis and debiasing particularly challenging. To address this, we introduce BiasGym, a simple, cost-effective, and generalizable framework for reliably injecting, analyzing, and mitigating conceptual associations within LLMs. BiasGym consists of two components: BiasInject, which injects specific biases into the model via token-based fine-tuning while keeping the model frozen, and BiasScope, which leverages these injected signals to identify and steer the components responsible for biased behavior. Our method enables consistent bias elicitation for mechanistic analysis, supports targeted debiasing without degrading performance on downstream tasks, and generalizes to biases unseen during token-based fine-tuning. We demonstrate the effectiveness of BiasGym in reducing real-world stereotypes (e.g., people from Italy being `reckless drivers') and in probing fictional associations (e.g., people from a fictional country having `blue skin'), showing its utility for both safety interventions and interpretability research.
Sekh Mainul Islam、Nadav Borenstein、Siddhesh Milind Pawar、Haeun Yu、Arnav Arora、Isabelle Augenstein
信息传播、知识传播科学、科学研究
Sekh Mainul Islam,Nadav Borenstein,Siddhesh Milind Pawar,Haeun Yu,Arnav Arora,Isabelle Augenstein.BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them[EB/OL].(2025-08-14)[2025-08-24].https://arxiv.org/abs/2508.08855.点此复制
评论