
Understanding and Mitigating Toxicity in Image-Text Pretraining Datasets: A Case Study on LLaVA

Source: arXiv
Abstract

Pretraining datasets are foundational to the development of multimodal models, yet they often carry inherent biases and toxic content from the web-scale corpora they are sourced from. In this paper, we investigate the prevalence of toxicity in the LLaVA image-text pretraining dataset, examining how harmful content manifests across modalities. We present a comprehensive analysis of common toxicity categories and propose targeted mitigation strategies, resulting in a refined toxicity-mitigated dataset that removes 7,531 toxic image-text pairs from the LLaVA pretraining dataset. We offer guidelines for implementing robust toxicity detection pipelines. Our findings underscore the need to actively identify and filter toxic content, such as hate speech, explicit imagery, and targeted harassment, to build more responsible and equitable multimodal systems. The toxicity-mitigated dataset is open source and available for further research.
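
To illustrate the kind of per-modality filtering the abstract describes, here is a minimal Python sketch. It is not the authors' implementation: the `Pair` type, the `filter_pairs` helper, the scoring callables, and the 0.8 thresholds are all illustrative assumptions standing in for off-the-shelf text-toxicity and image-NSFW classifiers.

```python
# A minimal sketch of a toxicity-filtering pipeline for image-text pairs.
# The scoring functions are hypothetical placeholders for off-the-shelf
# classifiers; the thresholds are illustrative, not values from the paper.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Pair:
    image_path: str
    caption: str

def filter_pairs(
    pairs: Iterable[Pair],
    text_score: Callable[[str], float],   # toxicity probability in [0, 1]
    image_score: Callable[[str], float],  # NSFW probability in [0, 1]
    text_thresh: float = 0.8,
    image_thresh: float = 0.8,
) -> tuple[list[Pair], list[Pair]]:
    """Split pairs into (kept, removed) by per-modality toxicity scores.

    A pair is removed if EITHER modality exceeds its threshold, since
    harmful content can manifest in the image, the caption, or both.
    """
    kept, removed = [], []
    for p in pairs:
        if (text_score(p.caption) >= text_thresh
                or image_score(p.image_path) >= image_thresh):
            removed.append(p)
        else:
            kept.append(p)
    return kept, removed
```

In practice, `text_score` and `image_score` would wrap pretrained classifiers, and the removed pairs would be logged by category to support the kind of toxicity-category analysis the paper presents.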

Surya Guthikonda, Karthik Reddy Kanjula, Nahid Alam, Shayekh Bin Islam

Subjects: Computing Technology; Computer Technology

Surya Guthikonda, Karthik Reddy Kanjula, Nahid Alam, Shayekh Bin Islam. Understanding and Mitigating Toxicity in Image-Text Pretraining Datasets: A Case Study on LLaVA [EB/OL]. (2025-05-09) [2025-08-02]. https://arxiv.org/abs/2505.06356