TRUST -- Transformer-Driven U-Net for Sparse Target Recovery
In the context of inverse problems $\mathbf{y} = \mathbf{A}\mathbf{x}$, sparse recovery offers a powerful paradigm shift, enabling the stable solution of ill-posed or underdetermined systems by exploiting structure, particularly sparsity. Sparse regularization via $\ell_0$- or $\ell_1$-norm minimization encourages solutions $\mathbf{x}$ that are both consistent with the observations $\mathbf{y}$ and parsimonious in representation, often yielding physically meaningful interpretations. In this work, we address the classical inverse problem under the challenging condition where the sensing operator $\mathbf{A}$ is unknown and only a limited set of observation-target pairs $\{\mathbf{x}, \mathbf{y}\}$ is available. We propose a novel neural architecture, TRUST, that integrates the attention mechanism of Transformers with the decoder pathway of a U-Net to simultaneously learn the sensing operator and reconstruct the sparse signal. The TRUST model incorporates a Transformer-based encoding branch to capture long-range dependencies and estimate the sparse support, which then guides a U-Net-style decoder to refine the reconstruction through multiscale feature integration. Skip connections between the Transformer stages and the decoder not only enhance image quality but also give the decoder access to image features at different levels of abstraction. This hybrid architecture enables more accurate and robust recovery by combining global context with local detail. Experimental results demonstrate that TRUST significantly outperforms traditional sparse recovery methods and standalone U-Net models, achieving superior SSIM and PSNR while effectively suppressing the hallucination artifacts that commonly plague deep-learning-based inverse solvers.
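As a point of reference for the classical formulation the abstract contrasts against, the sketch below solves $\min_{\mathbf{x}} \tfrac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda\|\mathbf{x}\|_1$ with ISTA (iterative shrinkage-thresholding), assuming the sensing operator $\mathbf{A}$ is known. This is not the TRUST method (which learns $\mathbf{A}$ from data); it is a minimal illustrative baseline, and all names and parameter values here are hypothetical.

```python
# Minimal ISTA sketch for l1-regularized sparse recovery: assumes A is KNOWN,
# unlike the TRUST setting where the sensing operator must be learned.
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative Shrinkage-Thresholding Algorithm for the lasso objective."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy underdetermined system: 40 measurements of a 100-dim, 5-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = ista(A, y)
```

When $\mathbf{A}$ is unavailable, iterations like the one above cannot be run directly, which motivates learning both the operator and the reconstruction jointly from $\{\mathbf{x}, \mathbf{y}\}$ pairs as TRUST does.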
Di An, Dylan Poppert, Jiayue Li, Mark Foster, Trac D. Tran
Computing Technology, Computer Technology
Di An, Dylan Poppert, Jiayue Li, Mark Foster, Trac D. Tran. TRUST -- Transformer-Driven U-Net for Sparse Target Recovery [EB/OL]. (2025-06-01) [2025-07-16]. https://arxiv.org/abs/2506.01112.