PakBBQ: A Culturally Adapted Bias Benchmark for QA
With the widespread adoption of Large Language Models (LLMs) across various applications, it is imperative to ensure their fairness across all user communities. However, most LLMs are trained and evaluated on Western-centric data, with little attention paid to low-resource languages and regional contexts. To address this gap, we introduce PakBBQ, a culturally and regionally adapted extension of the original Bias Benchmark for Question Answering (BBQ) dataset. PakBBQ comprises over 214 templates and 17,180 QA pairs across eight categories in both English and Urdu, covering bias dimensions relevant to Pakistan: age, disability, appearance, gender, socio-economic status, religion, regional affiliation, and language formality. We evaluate multiple multilingual LLMs under both ambiguous and explicitly disambiguated contexts, as well as negative versus non-negative question framings. Our experiments reveal (i) an average accuracy gain of 12% with disambiguation, (ii) consistently stronger counter-bias behavior in Urdu than in English, and (iii) marked framing effects that reduce stereotypical responses when questions are posed negatively. These findings highlight the importance of contextualized benchmarks and simple prompt-engineering strategies for bias mitigation in low-resource settings.
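To make the evaluation setup concrete, the following is a minimal sketch of how a BBQ-style item might be represented and scored under the ambiguous versus disambiguated conditions the abstract describes. The field names, the example item, and the scoring helper are illustrative assumptions for exposition, not the actual PakBBQ schema or evaluation code.

```python
from dataclasses import dataclass

@dataclass
class BBQItem:
    ambiguous_context: str       # context that does not identify the answer
    disambiguating_context: str  # extra sentence that resolves the answer
    question: str                # negative or non-negative framing
    options: list                # e.g. [group A, group B, "Unknown"]
    gold_ambiguous: str          # correct answer with ambiguous context only
    gold_disambiguated: str      # correct answer once disambiguated

# Hypothetical age-bias item (illustrative only).
item = BBQItem(
    ambiguous_context="Two neighbours, an elderly man and a young man, applied for the same job.",
    disambiguating_context="The young man had no relevant experience.",
    question="Who was unqualified for the job?",
    options=["The elderly man", "The young man", "Unknown"],
    gold_ambiguous="Unknown",
    gold_disambiguated="The young man",
)

def accuracy(predictions, golds):
    """Fraction of model answers matching the gold label."""
    return sum(p == g for p, g in zip(predictions, golds)) / len(golds)

# Under the ambiguous context the unbiased answer is "Unknown"; once the
# disambiguating sentence is added, the context determines the answer.
preds_ambiguous = ["The elderly man"]   # a stereotyped guess
preds_disambig = ["The young man"]      # correct after disambiguation
print(accuracy(preds_ambiguous, [item.gold_ambiguous]))        # 0.0
print(accuracy(preds_disambig, [item.gold_disambiguated]))     # 1.0
```

Comparing accuracy across the two conditions in this way reflects the kind of gap the paper reports, i.e. the roughly 12% average accuracy gain when contexts are explicitly disambiguated.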
Abdullah Hashmat, Muhammad Arham Mirza, Agha Ali Raza
Abdullah Hashmat, Muhammad Arham Mirza, Agha Ali Raza. PakBBQ: A Culturally Adapted Bias Benchmark for QA [EB/OL]. (2025-08-13) [2025-08-24]. https://arxiv.org/abs/2508.10186