GPT, But Backwards: Exactly Inverting Language Model Outputs

Source: arXiv

Abstract

While existing auditing techniques attempt to identify potential unwanted behaviours in large language models (LLMs), we address the complementary forensic problem of reconstructing the exact input that led to an existing LLM output, enabling post-incident analysis and potentially the detection of fake output reports. We formalize exact input reconstruction as a discrete optimisation problem with a unique global minimum and introduce SODA, an efficient gradient-based algorithm that operates on a continuous relaxation of the input search space with periodic restarts and parameter decay. Through comprehensive experiments on LLMs ranging in size from 33M to 3B parameters, we demonstrate that SODA significantly outperforms existing approaches. We succeed in fully recovering 79.5% of shorter out-of-distribution inputs from next-token logits, without a single false positive, but struggle to extract private information from the outputs of longer (15+ token) input sequences. This suggests that standard deployment practices may currently provide adequate protection against malicious use of our method. Our code is available at https://doi.org/10.5281/zenodo.15539879.
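The abstract names the main ingredients of SODA (a continuous relaxation of the discrete input search space, gradient-based optimisation, periodic restarts, and parameter decay) but not the details. The sketch below is a hypothetical PyTorch illustration of that general recipe, not the authors' implementation: the model ("gpt2"), the mean-squared-error loss on next-token logits, the Adam optimiser, and the exponential learning-rate decay (a stand-in for the paper's parameter decay), along with every hyperparameter, are assumptions made for the example. The paper's actual code is at the Zenodo link above.

# Hypothetical sketch of output inversion via continuous relaxation.
# Not the paper's SODA implementation; names and hyperparameters are assumed.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small causal LM serves for the illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
embed = model.get_input_embeddings().weight.detach()  # (vocab_size, d_model)

def invert(target_logits, seq_len, steps=500, restarts=3, lr=0.5):
    """Search for a length-seq_len input whose next-token logits match target_logits."""
    best, best_loss = None, float("inf")
    for _ in range(restarts):  # full restarts stand in for SODA's periodic restarts
        # Relaxed input: unconstrained scores over the vocabulary at each position.
        z = torch.randn(seq_len, embed.shape[0], requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        decay = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.995)
        for _ in range(steps):
            opt.zero_grad()
            probs = F.softmax(z, dim=-1)        # soft token distribution per position
            soft_embeds = probs @ embed         # expected input embeddings
            out = model(inputs_embeds=soft_embeds.unsqueeze(0))
            loss = F.mse_loss(out.logits[0, -1], target_logits)
            loss.backward()
            opt.step()
            decay.step()                        # step-size decay over iterations
        # Discretise: keep the highest-scoring token at each position,
        # then verify the hard sequence against the target logits.
        cand = z.argmax(dim=-1)
        with torch.no_grad():
            cand_loss = F.mse_loss(model(cand.unsqueeze(0)).logits[0, -1],
                                   target_logits).item()
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss  # exact recovery corresponds to best_loss near 0

# Usage: try to recover a short prompt from its next-token logits.
prompt = tok("The quick brown fox", return_tensors="pt").input_ids
with torch.no_grad():
    target = model(prompt).logits[0, -1]
recovered, loss = invert(target, seq_len=prompt.shape[1])
print(tok.decode(recovered), loss)

The design choice the abstract points to is the relaxation itself: replacing hard token choices with per-position distributions over the vocabulary makes the matching objective differentiable, and a candidate counts as exactly recovered only if re-running the discretised sequence through the model reproduces the target logits.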

Adrians Skapars, Edoardo Manino, Youcheng Sun, Lucas C. Cordeiro

Computing Technology, Computer Technology

Adrians Skapars, Edoardo Manino, Youcheng Sun, Lucas C. Cordeiro. GPT, But Backwards: Exactly Inverting Language Model Outputs [EB/OL]. (2025-07-02) [2025-07-16]. https://arxiv.org/abs/2507.01693.