
HInter: Exposing Hidden Intersectional Bias in Large Language Models


Source: arXiv
English Abstract

Large Language Models (LLMs) may portray discrimination towards certain individuals, especially those characterized by multiple attributes (aka intersectional bias). Discovering intersectional bias in LLMs is challenging, as it involves complex inputs on multiple attributes (e.g., race and gender). To address this challenge, we propose HInter, a test technique that synergistically combines mutation analysis, dependency parsing and metamorphic oracles to automatically detect intersectional bias in LLMs. HInter generates test inputs by systematically mutating sentences using multiple mutations, validates inputs via a dependency invariant and detects biases by checking the LLM response on the original and mutated sentences. We evaluate HInter using six LLM architectures and 18 LLM models (GPT-3.5, Llama2, BERT, etc.) and find that 14.61% of the inputs generated by HInter expose intersectional bias. Results also show that our dependency invariant reduces false positives (incorrect test inputs) by an order of magnitude. Finally, we observe that 16.62% of intersectional bias errors are hidden, meaning that their corresponding atomic cases do not trigger biases. Overall, this work emphasizes the importance of testing LLMs for intersectional bias.
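To make the workflow described in the abstract concrete, below is a minimal sketch of the three-step pipeline (mutation, dependency-invariant validation, metamorphic oracle). The names (MUTATIONS, mutate, dependency_invariant_holds, query_llm) and the choice of spaCy as the dependency parser are illustrative assumptions, not the authors' actual implementation or API.

```python
# Hedged sketch of the metamorphic-testing idea from the abstract.
# Assumes spaCy with the "en_core_web_sm" model installed; query_llm is a
# placeholder for the model under test and must be supplied by the user.
import spacy

nlp = spacy.load("en_core_web_sm")  # dependency parser

# Atomic attribute substitutions (race, gender); composing one from each
# yields an intersectional mutant.
MUTATIONS = {
    "race":   ("European", "African"),
    "gender": ("man", "woman"),
}

def mutate(sentence: str, pairs) -> str:
    """Apply a sequence of (old, new) word substitutions to a sentence."""
    for old, new in pairs:
        sentence = sentence.replace(old, new)
    return sentence

def dep_signature(sentence: str):
    """Dependency-parse signature: arc labels plus relative head offsets."""
    doc = nlp(sentence)
    return [(tok.dep_, tok.head.i - tok.i) for tok in doc]

def dependency_invariant_holds(original: str, mutant: str) -> bool:
    """A valid mutation must not change the sentence's dependency structure;
    otherwise the mutant is discarded as an invalid test input."""
    return dep_signature(original) == dep_signature(mutant)

def query_llm(sentence: str) -> str:
    """Placeholder: return the LLM's label/response for the sentence."""
    raise NotImplementedError

def exposes_intersectional_bias(sentence: str) -> bool:
    """Metamorphic oracle: the LLM response should be unchanged between the
    original sentence and a structurally valid intersectional mutant."""
    mutant = mutate(sentence, [MUTATIONS["race"], MUTATIONS["gender"]])
    if not dependency_invariant_holds(sentence, mutant):
        return False  # invalid input, skip rather than report a false positive
    return query_llm(sentence) != query_llm(mutant)  # True => bias exposed
```

Under this sketch, a "hidden" intersectional error would be one where applying the race or gender substitution alone leaves the LLM response unchanged, but applying both together flips it.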

Badr Souani, Ezekiel Soremekun, Mike Papadakis, Setsuko Yokoyama, Sudipta Chattopadhyay, Yves Le Traon

Subject: Computing technology, computer technology

Badr Souani, Ezekiel Soremekun, Mike Papadakis, Setsuko Yokoyama, Sudipta Chattopadhyay, Yves Le Traon. HInter: Exposing Hidden Intersectional Bias in Large Language Models [EB/OL]. (2025-03-14) [2025-05-28]. https://arxiv.org/abs/2503.11962.
