RandAR: Decoder-only Autoregressive Visual Generation in Random Orders
We introduce RandAR, a decoder-only visual autoregressive (AR) model capable of generating images in arbitrary token orders. Unlike previous decoder-only AR models that rely on a predefined generation order, RandAR removes this inductive bias, unlocking new capabilities in decoder-only generation. The key design that enables random-order generation is inserting a "position instruction token" before each image token to be predicted, indicating the spatial location of the next image token. Although trained on randomly permuted token sequences, a more challenging task than fixed-order generation, RandAR achieves performance comparable to its conventional raster-order counterpart. More importantly, decoder-only transformers trained on random orders acquire new capabilities. To address the efficiency bottleneck of AR models, RandAR adopts parallel decoding with KV-Cache at inference time, achieving a 2.5x speedup without sacrificing generation quality. Additionally, RandAR supports inpainting, outpainting, and resolution extrapolation in a zero-shot manner. We hope RandAR inspires new directions for decoder-only visual generation models and broadens their applications across diverse scenarios. Our project page is at https://rand-ar.github.io/.
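The abstract describes interleaving a position instruction token before each image token and training on randomly permuted sequences. Below is a minimal sketch, not the authors' code, of how such a training sequence could be assembled; the function name `build_random_order_sequence`, the vocabulary size, and the id offset for position tokens are illustrative assumptions.

```python
# Sketch: build an interleaved [pos, token, pos, token, ...] sequence in a
# fresh random order, as hypothesized from the abstract's description.
import torch

def build_random_order_sequence(image_tokens: torch.Tensor) -> torch.Tensor:
    """image_tokens: (num_patches,) discrete token ids in raster order.
    Returns a 1-D sequence where each image token is preceded by a
    position instruction token announcing its spatial location."""
    num_patches = image_tokens.shape[0]
    order = torch.randperm(num_patches)      # random generation order per sample
    vocab_size = 16384                       # assumed image-token vocabulary size
    pos_tokens = order + vocab_size          # assumed: one extra id per location
    interleaved = torch.stack([pos_tokens, image_tokens[order]], dim=1)
    return interleaved.reshape(-1)           # length 2 * num_patches

# Usage: for a 16x16 token grid (256 tokens), the decoder-only transformer
# would be trained with next-token prediction on this 512-entry sequence.
seq = build_random_order_sequence(torch.randint(0, 16384, (256,)))
```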
Fujun Luan, William T. Freeman, Ziqi Pang, Kai Zhang, Hao Tan, Tianyuan Zhang, Yunze Man, Yu-Xiong Wang
Computing Technology; Computer Technology
Fujun Luan, William T. Freeman, Ziqi Pang, Kai Zhang, Hao Tan, Tianyuan Zhang, Yunze Man, Yu-Xiong Wang. RandAR: Decoder-only Autoregressive Visual Generation in Random Orders [EB/OL]. (2025-07-08) [2025-08-02]. https://arxiv.org/abs/2412.01827.