国家预印本平台 (National Preprint Platform)

Auditing Black-Box LLM APIs with a Rank-Based Uniformity Test


Source: arXiv
Abstract

As API access becomes a primary interface to large language models (LLMs), users often interact with black-box systems that offer little transparency into the deployed model. To reduce costs or maliciously alter model behaviors, API providers may discreetly serve quantized or fine-tuned variants, which can degrade performance and compromise safety. Detecting such substitutions is difficult, as users lack access to model weights and, in most cases, even output logits. To tackle this problem, we propose a rank-based uniformity test that can verify the behavioral equality of a black-box LLM to a locally deployed authentic model. Our method is accurate, query-efficient, and avoids detectable query patterns, making it robust to adversarial providers that reroute or mix responses upon the detection of testing attempts. We evaluate the approach across diverse threat scenarios, including quantization, harmful fine-tuning, jailbreak prompts, and full model substitution, showing that it consistently achieves superior statistical power over prior methods under constrained query budgets.
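The abstract does not spell out the exact test statistic, but the core idea of a rank-based uniformity test can be illustrated with a small sketch. This is a hypothetical construction, not the paper's implementation: scalar scores stand in for whatever summary is computed from each response, and the null hypothesis is that the API's score for a query is exchangeable with scores of samples drawn from the local authentic model, so its rank among them is uniform.

```python
# Hypothetical sketch of a rank-based uniformity test (NOT the paper's exact
# statistic). For each query, the API response's score is ranked among k
# scores sampled from the local reference model; under the null hypothesis
# (API serves the authentic model), that rank is uniform on {0, ..., k}.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)

def uniformity_pvalue(api_scores, local_score_sets):
    """KS test of randomized ranks against Uniform(0, 1)."""
    k = local_score_sets.shape[1]
    # rank = number of local samples scoring strictly below the API response
    ranks = np.array([np.sum(local < api)
                      for api, local in zip(api_scores, local_score_sets)])
    # jitter the discrete ranks so they are Uniform(0, 1) under the null
    u = (ranks + rng.uniform(size=len(ranks))) / (k + 1)
    return kstest(u, "uniform").pvalue

# Toy simulation: n queries, k local samples per query.
n, k = 200, 9
local = rng.normal(size=(n, k))
same_model = rng.normal(size=n)        # null: API matches the reference
substituted = rng.normal(loc=0.7, size=n)  # alternative: shifted behavior

p_null = uniformity_pvalue(same_model, local)
p_alt = uniformity_pvalue(substituted, local)
```

Because each query looks like an ordinary single request, this style of test leaves no distinctive query pattern for an adversarial provider to detect, which is the robustness property the abstract emphasizes.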

Xiaoyuan Zhu, Yaowen Ye, Tianyi Qiu, Hanlin Zhu, Sijun Tan, Ajraf Mannan, Jonathan Michala, Raluca Ada Popa, Willie Neiswanger

Subjects: Computing Technology; Computer Science and Technology

Xiaoyuan Zhu, Yaowen Ye, Tianyi Qiu, Hanlin Zhu, Sijun Tan, Ajraf Mannan, Jonathan Michala, Raluca Ada Popa, Willie Neiswanger. Auditing Black-Box LLM APIs with a Rank-Based Uniformity Test [EB/OL]. (2025-06-07) [2025-06-27]. https://arxiv.org/abs/2506.06975.
