Neural networks leverage nominally quantum and post-quantum representations
We show that deep neural networks, including transformers and RNNs, pretrained as usual on next-token prediction, intrinsically discover and represent beliefs over 'quantum' and 'post-quantum' low-dimensional generative models of their training data -- as if performing iterative Bayesian updates over the latent state of this world model during inference as they observe more context. Notably, neural nets easily find these representations, whereas no finite classical circuit could do the job. The corresponding geometric relationships among neural activations induced by different input sequences are found to be largely independent of neural-network architecture. Each point in this geometry corresponds to a history-induced probability density over all possible futures, and the relative displacement of these points reflects the difference in mechanism and magnitude for how these distinct pasts affect the future.
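The iterative Bayesian updating described above can be sketched in miniature. The following is a hedged illustration, not the paper's method: a belief over the hidden state of a small (hypothetical) hidden Markov model is refined symbol by symbol, and the resulting belief trajectory is the kind of geometry the abstract refers to. The transition matrices `T` are invented for illustration.

```python
import numpy as np

# Hypothetical 2-state hidden Markov model. T[x][i, j] is the joint
# probability of moving from hidden state i to j while emitting symbol x.
# (Rows of T[0] + T[1] sum to 1, so this is a valid HMM.)
T = {
    0: np.array([[0.5, 0.2],
                 [0.1, 0.3]]),
    1: np.array([[0.2, 0.1],
                 [0.3, 0.3]]),
}

def update_belief(belief, symbol):
    """One Bayesian update: b' is proportional to b @ T[symbol]."""
    b = belief @ T[symbol]
    return b / b.sum()

# Each observed symbol sharpens the belief over the latent state.
belief = np.array([0.5, 0.5])      # uniform prior over hidden states
for x in [0, 1, 1, 0]:
    belief = update_belief(belief, x)
```

Each context (past symbol sequence) maps to one belief point; the paper's claim is that network activations linearly encode such points, including for generative models whose "beliefs" are quantum or post-quantum rather than classical probability vectors.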
Paul M. Riechers, Thomas J. Elliott, Adam S. Shai
Computing Technology; Computer Technology
Paul M. Riechers, Thomas J. Elliott, Adam S. Shai. Neural networks leverage nominally quantum and post-quantum representations [EB/OL]. (2025-07-13) [2025-07-21]. https://arxiv.org/abs/2507.07432.