
PhiNet v2: A Mask-Free Brain-Inspired Vision Foundation Model from Video

Source: arXiv
Abstract

Recent advances in self-supervised learning (SSL) have revolutionized computer vision through innovative architectures and learning objectives, yet they have not fully leveraged insights from biological visual processing systems. Recently, a brain-inspired SSL model named PhiNet was proposed; it is based on a ResNet backbone and operates on static image inputs with strong augmentation. In this paper, we introduce PhiNet v2, a novel Transformer-based architecture that processes temporal visual input (that is, sequences of images) without relying on strong augmentation. Our model leverages variational inference to learn robust visual representations from continuous input streams, similar to human visual processing. Through extensive experimentation, we demonstrate that PhiNet v2 achieves competitive performance compared to state-of-the-art vision foundation models, while maintaining the ability to learn from sequential input without strong data augmentation. This work represents a significant step toward more biologically plausible computer vision systems that process visual information in a manner more closely aligned with human cognitive processes.
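The abstract describes learning latent representations from sequences of frames via variational inference, rather than from heavily augmented static images. The following toy NumPy sketch illustrates that general idea only; it is not PhiNet v2's architecture. The linear "encoder", latent dimension, and temporal-prediction objective are all illustrative assumptions: each frame is encoded as a Gaussian latent, sampled with the reparameterization trick, and trained so that the latent of frame t predicts the latent mean of frame t+1, with a KL regularizer toward a standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame, W_mu, W_logvar):
    # Toy linear "encoder" mapping a flattened frame to a latent Gaussian.
    # (The actual model uses a Transformer; this is purely illustrative.)
    return frame @ W_mu, frame @ W_logvar

def reparameterize(mu, logvar, rng):
    # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Toy "video": a sequence of 4 flattened 8x8 frames (hypothetical data).
frames = rng.standard_normal((4, 64))
d_latent = 16
W_mu = rng.standard_normal((64, d_latent)) * 0.1
W_logvar = rng.standard_normal((64, d_latent)) * 0.1

# Objective over consecutive frame pairs: the sampled latent of frame t
# should match the latent mean of frame t+1 (temporal prediction),
# plus a variational KL regularizer on each posterior.
loss = 0.0
for t in range(len(frames) - 1):
    mu_t, logvar_t = encode(frames[t], W_mu, W_logvar)
    z_t = reparameterize(mu_t, logvar_t, rng)
    mu_next, _ = encode(frames[t + 1], W_mu, W_logvar)
    loss += np.mean((z_t - mu_next) ** 2)          # temporal prediction term
    loss += kl_to_standard_normal(mu_t, logvar_t)  # variational regularizer
print(f"toy objective: {loss:.3f}")
```

In a real model the two terms would be minimized by gradient descent over the encoder parameters; here the point is only the structure of a variational objective over sequential input, with no strong augmentation involved.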

Makoto Yamada, Kian Ming A. Chai, Ayoub Rhim, Satoki Ishikawa, Mohammad Sabokrou, Yao-Hung Hubert Tsai

Subjects: Theoretical Biosciences; Biological Science Methods; Computing Technology; Computer Technology

Makoto Yamada, Kian Ming A. Chai, Ayoub Rhim, Satoki Ishikawa, Mohammad Sabokrou, Yao-Hung Hubert Tsai. PhiNet v2: A Mask-Free Brain-Inspired Vision Foundation Model from Video [EB/OL]. (2025-05-16) [2025-06-12]. https://arxiv.org/abs/2505.11129.
