SafeLawBench: Towards Safe Alignment of Large Language Models

Source: arXiv
Abstract

With the growing prevalence of large language models (LLMs), the safety of LLMs has raised significant concerns. However, there is still a lack of definitive standards for evaluating their safety due to the subjective nature of current safety benchmarks. To address this gap, we conducted the first exploration of LLM safety evaluation from a legal perspective by proposing the SafeLawBench benchmark. SafeLawBench categorizes safety risks into three levels based on legal standards, providing a systematic and comprehensive framework for evaluation. It comprises 24,860 multiple-choice questions and 1,106 open-domain question-answering (QA) tasks. Our evaluation covered 2 closed-source LLMs and 18 open-source LLMs using zero-shot and few-shot prompting, highlighting the safety features of each model. We also evaluated the LLMs' safety-related reasoning stability and refusal behavior. Additionally, we found that a majority voting mechanism can enhance model performance. Notably, even leading SOTA models like Claude-3.5-Sonnet and GPT-4o have not exceeded 80.5% accuracy on SafeLawBench's multiple-choice tasks, while the average accuracy across the 20 LLMs remains at 68.8%. We urge the community to prioritize research on the safety of LLMs.
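
The majority voting mechanism mentioned above refers to sampling several answers per question and keeping the most frequent one. A minimal Python sketch of that idea follows; it is illustrative only, not the SafeLawBench authors' code, and sample_answer is a hypothetical stand-in for whatever call returns one model answer (a letter A-D) for a given question.

    from collections import Counter
    from typing import Callable, List

    def majority_vote(question: str,
                      sample_answer: Callable[[str], str],
                      n_samples: int = 5) -> str:
        """Query the model n_samples times and return the most frequent answer choice."""
        votes: List[str] = [sample_answer(question) for _ in range(n_samples)]
        # most_common(1) yields [(answer, count)]; ties resolve by first occurrence.
        return Counter(votes).most_common(1)[0][0]

    # Usage with a stub "model" that always answers "B":
    print(majority_vote("Which option states the lawful basis?", lambda q: "B"))  # -> "B"
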

Chuxue Cao, Han Zhu, Jiaming Ji, Qichao Sun, Zhenghao Zhu, Yinyu Wu, Juntao Dai, Yaodong Yang, Sirui Han, Yike Guo

Law

Chuxue Cao, Han Zhu, Jiaming Ji, Qichao Sun, Zhenghao Zhu, Yinyu Wu, Juntao Dai, Yaodong Yang, Sirui Han, Yike Guo. SafeLawBench: Towards Safe Alignment of Large Language Models[EB/OL]. (2025-06-06)[2025-06-30]. https://arxiv.org/abs/2506.06636.
