
Street-Level AI: Are Large Language Models Ready for Real-World Judgments?


Source: arXiv

Abstract

A surge of recent work explores the ethical and societal implications of large-scale AI models that make "moral" judgments. Much of this literature focuses either on alignment with human judgments through various thought experiments or on the group fairness implications of AI judgments. However, the most immediate and likely use of AI is to assist or fully replace so-called street-level bureaucrats, the individuals who decide how to allocate scarce social resources or approve benefits. There is a rich history underlying how principles of local justice shape the prioritization mechanisms society adopts in such domains. In this paper, we examine how well LLM judgments align with human judgments, as well as with the socially and politically determined vulnerability scoring systems currently used in homelessness resource allocation. Crucially, we use real data on those needing services (maintaining strict confidentiality by using only local large models) to perform our analyses. We find that LLM prioritizations are extremely inconsistent in several ways: internally across different runs, between different LLMs, and between LLMs and the vulnerability scoring systems. At the same time, LLMs demonstrate qualitative consistency with lay human judgments in pairwise testing. These findings call into question the readiness of current-generation AI systems for naive integration into high-stakes societal decision-making.

Gaurab Pokharel, Shafkat Farabi, Patrick J. Fowler, Sanmay Das

Subjects: Science, Scientific Research; Computing Technology, Computer Technology

Gaurab Pokharel, Shafkat Farabi, Patrick J. Fowler, Sanmay Das. Street-Level AI: Are Large Language Models Ready for Real-World Judgments? [EB/OL]. (2025-08-11) [2025-08-24]. https://arxiv.org/abs/2508.08193.
