National Preprint Platform (国家预印本平台)

Connecting Neural Models Latent Geometries with Relative Geodesic Representations

Source: arXiv

Abstract

Neural models learn representations of high-dimensional data on low-dimensional manifolds. Multiple factors, including stochasticity in the training process, model architecture, and additional inductive biases, may induce different representations, even when learning the same task on the same data. However, it has recently been shown that when a latent structure is shared between distinct latent spaces, relative distances between representations can be preserved, up to distortions. Building on this idea, we demonstrate that, by exploiting the differential-geometric structure of the latent spaces of neural models, it is possible to precisely capture the transformations between representational spaces trained on similar data distributions. Specifically, we assume that distinct neural models parametrize approximately the same underlying manifold, and introduce a representation based on the pullback metric that captures the intrinsic structure of the latent space while scaling efficiently to large models. We experimentally validate our method on model stitching and retrieval tasks, covering autoencoders and discriminative vision foundation models, across diverse architectures, datasets, and pretraining schemes.
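The abstract's central object is the pullback metric: a decoder f mapping latent codes to data space induces a metric G(z) = J(z)ᵀJ(z) on the latent space, where J is the Jacobian of f at z. The sketch below illustrates this construction with a finite-difference Jacobian and a toy decoder; the function names and the decoder are hypothetical and not from the paper, which uses this metric within its relative geodesic representation.

```python
import numpy as np

def pullback_metric(decoder, z, eps=1e-5):
    """Approximate the pullback metric G(z) = J(z)^T J(z) induced by a
    decoder f: R^d -> R^D, using a forward finite-difference Jacobian.
    Illustrative sketch only; a real implementation would use autodiff."""
    z = np.asarray(z, dtype=float)
    d = z.size
    f0 = decoder(z)
    # Columns of J are directional derivatives along latent coordinate axes.
    J = np.stack(
        [(decoder(z + eps * np.eye(d)[i]) - f0) / eps for i in range(d)],
        axis=1,
    )
    return J.T @ J  # d x d symmetric positive semi-definite matrix

# Toy nonlinear decoder R^2 -> R^3 (hypothetical, for illustration).
dec = lambda z: np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])
G = pullback_metric(dec, np.array([1.0, 0.0]))
```

Lengths measured under G are lengths of the decoded curves in data space, which is what makes geodesic distances comparable across models that parametrize approximately the same manifold.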

Hanlin Yu, Berfin Inal, Georgios Arvanitidis, Soren Hauberg, Francesco Locatello, Marco Fumero

Subject: Computing Technology; Computer Technology

Hanlin Yu, Berfin Inal, Georgios Arvanitidis, Soren Hauberg, Francesco Locatello, Marco Fumero. Connecting Neural Models Latent Geometries with Relative Geodesic Representations [EB/OL]. (2025-06-02) [2025-06-27]. https://arxiv.org/abs/2506.01599.
