
From BERT to Qwen: Hate Detection across architectures


Source: Arxiv
English Abstract

Online platforms struggle to curb hate speech without over-censoring legitimate discourse. Early bidirectional transformer encoders made significant strides, but the arrival of ultra-large autoregressive LLMs promises deeper context-awareness. Whether this extra scale actually improves practical hate-speech detection on real-world text remains unverified. Our study puts this question to the test by benchmarking both model families (classic encoders and next-generation LLMs) on curated corpora of online interactions for hate-speech detection (Hate or No Hate).
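As a rough sketch of the two evaluation routes the abstract contrasts, the Python snippet below classifies the same texts with (a) a BERT-style encoder carrying a sequence-classification head and (b) a prompted instruction-tuned Qwen model whose generated answer is mapped back onto the Hate / No Hate label space. The checkpoints, prompt wording, and Hugging Face pipeline usage are illustrative assumptions, not the paper's exact experimental setup.

# Minimal sketch (assumed placeholder checkpoints, Hugging Face pipelines)
# comparing the two model families on a binary Hate / No Hate task.
from transformers import pipeline

TEXTS = [
    "I hope you have a great day!",
    "People like you should not be allowed here.",
]

# (a) Encoder route: BERT-style model with a sequence-classification head.
# "bert-base-uncased" is a placeholder; a checkpoint fine-tuned for
# hate-speech detection would be substituted in practice.
encoder_clf = pipeline("text-classification", model="bert-base-uncased")
for text in TEXTS:
    print("encoder:", text, "->", encoder_clf(text)[0])

# (b) Autoregressive LLM route: zero-shot prompting of an instruction-tuned
# Qwen model; the generated word is read back as the Hate / No Hate label.
llm = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder size
PROMPT = (
    "Classify the following message as 'Hate' or 'No Hate'. "
    "Answer with a single word.\nMessage: {text}\nAnswer:"
)
for text in TEXTS:
    prompt = PROMPT.format(text=text)
    out = llm(prompt, max_new_tokens=5, do_sample=False)
    answer = out[0]["generated_text"][len(prompt):].strip()
    print("llm:", text, "->", answer)

Swapping in fine-tuned hate-speech checkpoints for either route only requires changing the model identifiers; the comparison at issue is whether the autoregressive route's extra scale yields better practical detection than the encoder route on real-world text.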

Ariadna Mon, Saúl Fenollosa, Jon Lecumberri

Computing Technology; Computer Technology

Ariadna Mon, Saúl Fenollosa, Jon Lecumberri. From BERT to Qwen: Hate Detection across architectures [EB/OL]. (2025-07-14) [2025-07-23]. https://arxiv.org/abs/2507.10468.
