Advancing Harmful Content Detection in Organizational Research: Integrating Large Language Models with Elo Rating System

Source: arXiv
Abstract

Large language models (LLMs) offer promising opportunities for organizational research. However, their built-in moderation systems can create problems when researchers try to analyze harmful content, often refusing to follow certain instructions or producing overly cautious responses that undermine the validity of the results. This is particularly problematic when analyzing organizational conflicts such as microaggressions or hate speech. This paper introduces an Elo rating-based method that significantly improves LLM performance for harmful content analysis. In two datasets, one focused on microaggression detection and the other on hate speech, we find that our method outperforms traditional LLM prompting techniques and conventional machine learning models on key measures such as accuracy, precision, and F1 scores. Advantages include better reliability when analyzing harmful content, fewer false positives, and greater scalability for large-scale datasets. This approach supports organizational applications, including detecting workplace harassment, assessing toxic communication, and fostering safer and more inclusive work environments.
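The abstract does not spell out how the Elo rating enters the pipeline, but for context, the standard Elo update that such a method builds on is shown in the minimal Python sketch below. It assumes a pairwise setup in which an LLM judges which of two texts is more harmful; the function names, K-factor, and judging scenario are illustrative assumptions, not the authors' implementation.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score of item A against item B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))


def update_ratings(rating_a: float, rating_b: float, outcome_a: float, k: float = 32.0):
    """Update both ratings after one pairwise comparison.

    outcome_a is 1.0 if A is judged more harmful, 0.0 if B is, 0.5 for a tie.
    """
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (outcome_a - exp_a)
    new_b = rating_b + k * ((1.0 - outcome_a) - (1.0 - exp_a))
    return new_a, new_b


# Hypothetical usage: two texts start at the same rating; an LLM judge
# picks text 1 as more harmful, so its rating rises and text 2's falls.
r1, r2 = 1500.0, 1500.0
r1, r2 = update_ratings(r1, r2, outcome_a=1.0)
print(r1, r2)  # 1516.0 1484.0
```

Repeating such comparisons over many text pairs yields a harmfulness ranking whose scores can then be thresholded for classification; how the paper schedules comparisons and sets the threshold is described in the full text, not here.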

Mustafa Akben, Aaron Satko

Subjects: Computing Technology, Computer Technology

Mustafa Akben, Aaron Satko. Advancing Harmful Content Detection in Organizational Research: Integrating Large Language Models with Elo Rating System [EB/OL]. (2025-06-19) [2025-07-19]. https://arxiv.org/abs/2506.16575.
