Moment kernels: a simple and scalable approach for equivariance to rotations and reflections in deep convolutional networks
The principle of translation equivariance (if an input image is translated, the output should be translated by the same amount) led to the development of the convolutional neural networks that revolutionized machine vision. Other symmetries, such as rotations and reflections, play a similarly critical role, especially in biomedical image analysis, but methods that exploit them have not seen wide adoption. We hypothesize that this is due in part to the mathematical complexity of existing approaches, which often rely on representation theory, a specialized topic in differential geometry and group theory. In this work, we show that the same equivariance can be achieved with a simple form of convolution kernel that we call a ``moment kernel,'' and prove that all equivariant kernels must take this form. Moment kernels are radially symmetric functions of a spatial position $x$, multiplied by powers of the components of $x$ or by the identity matrix. We implement equivariant neural networks using standard convolution modules, and provide architectures for several biomedical image analysis tasks that depend on equivariance principles: classification (outputs are invariant under orthogonal transforms), 3D image registration (outputs transform like a vector), and cell segmentation (quadratic forms defining ellipses transform like a matrix).
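To make the kernel form concrete, the following is a minimal sketch, in PyTorch, of a scalar-to-vector moment kernel of the type described in the abstract: a learnable radially symmetric profile $f(|x|)$ multiplied by the components of $x$, applied with a standard convolution module. The class name, the Gaussian radial-bin parameterization, and all identifiers are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MomentKernelConv2d(nn.Module):
    """Sketch of a scalar -> vector moment-kernel convolution.

    The kernel has the form K(x) = f(|x|) * x, where f is a learnable
    radially symmetric profile. Under a rotation or reflection of the
    input, the two output channels transform (up to grid discretization)
    like the components of a vector.
    """

    def __init__(self, size=7, n_radii=4):
        super().__init__()
        # Coordinate grid centered at the kernel origin.
        c = (size - 1) / 2.0
        ys, xs = torch.meshgrid(
            torch.arange(size) - c, torch.arange(size) - c, indexing="ij"
        )
        r = torch.sqrt(xs**2 + ys**2)
        # Soft assignment of each pixel to radial bins gives a radially
        # symmetric basis for the profile f (an assumed parameterization).
        centers = torch.linspace(0, c, n_radii)
        basis = torch.exp(-((r[None] - centers[:, None, None]) ** 2))
        self.register_buffer("basis_x", basis * xs)  # f_i(|x|) * x_1
        self.register_buffer("basis_y", basis * ys)  # f_i(|x|) * x_2
        self.weight = nn.Parameter(torch.randn(n_radii) * 0.1)

    def forward(self, img):  # img: (B, 1, H, W) scalar field
        # Assemble K(x) = f(|x|) * x with the learned radial profile f.
        kx = (self.weight[:, None, None] * self.basis_x).sum(0)
        ky = (self.weight[:, None, None] * self.basis_y).sum(0)
        kernel = torch.stack([kx, ky])[:, None]  # (2, 1, size, size)
        return F.conv2d(img, kernel, padding=kernel.shape[-1] // 2)

Rotating the input image and applying the same rotation matrix to the two output channels should agree with applying the module to the original image, which is the equivariance property the kernel form is designed to guarantee.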
Zachary Schlamowitz, Andrew Bennecke, Daniel J. Tward
Subjects: computational techniques in medical research methods; computer technology; biological science research methods; biological science research techniques
Zachary Schlamowitz, Andrew Bennecke, Daniel J. Tward. Moment kernels: a simple and scalable approach for equivariance to rotations and reflections in deep convolutional networks [EB/OL]. (2025-05-27) [2025-07-02]. https://arxiv.org/abs/2505.21736.