On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions
Understanding and mitigating biases is critical for the adoption of large language models (LLMs) in high-stakes decision-making. We introduce Admissions and Hiring, decision tasks with hypothetical applicant profiles in which a person's race can be inferred from their name, as simplified test beds for racial bias. We show that Gemma 2B Instruct and LLaMA 3.2 3B Instruct exhibit strong biases: Gemma grants admission to 26% more White than Black applicants, and LLaMA hires 60% more Asian than White applicants. We demonstrate that these biases are resistant to prompt engineering: multiple prompting strategies all fail to promote fairness. In contrast, using distributed alignment search, we can identify "race subspaces" within model activations and intervene on them to debias model decisions. Averaging the representation across all races within the subspaces reduces Gemma's bias by 37-57%. Finally, we examine the generalizability of Gemma's race subspaces and find limited evidence of generalization: changing the prompt format can alter the race representation. Our work suggests that mechanistic approaches may provide a promising avenue for improving the fairness of LLMs, but a universal race representation remains elusive.
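The averaging intervention described in the abstract admits a compact illustration. Below is a minimal, hypothetical PyTorch sketch: given one activation per race at the intervention site and an orthonormal basis for a learned race subspace, each activation's component inside the subspace is replaced with the mean component across races, leaving the orthogonal complement untouched. The function name, tensor shapes, and the random basis here are illustrative assumptions; the paper learns the actual subspace with distributed alignment search, which is not shown.

```python
# Hypothetical sketch of subspace averaging; the real subspace comes from
# distributed alignment search (DAS), not a random basis as used below.
import torch

def average_race_subspace(hidden: torch.Tensor, race_basis: torch.Tensor) -> torch.Tensor:
    """Replace each example's component inside the race subspace with the
    mean component across the batch (one example per race).

    hidden:     (n_races, d_model) activations at the intervention site.
    race_basis: (k, d_model) orthonormal rows spanning the learned subspace.
    """
    # Coordinates of each activation inside the race subspace: (n_races, k).
    coords = hidden @ race_basis.T
    # Mean coordinates across races, broadcast back to every example.
    mean_coords = coords.mean(dim=0, keepdim=True)
    # Swap in the averaged component; the orthogonal complement is untouched.
    return hidden - coords @ race_basis + mean_coords @ race_basis

# Toy usage: a 2-dimensional race subspace in a 16-dimensional residual stream.
basis, _ = torch.linalg.qr(torch.randn(16, 2))   # orthonormal columns, (16, 2)
hidden = torch.randn(4, 16)                       # one activation per race
debiased = average_race_subspace(hidden, basis.T)
```

After this replacement, prompts that differ only in the race signaled by the applicant's name share the same representation within the subspace, which is what drives the reported reduction in Gemma's decision bias.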
Dang Nguyen, Chenhao Tan
Dang Nguyen, Chenhao Tan. On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions [EB/OL]. (2025-04-07) [2025-04-26]. https://arxiv.org/abs/2504.06303.