Is Visual in-Context Learning for Compositional Medical Tasks within Reach?

Source: arXiv

Abstract

In this paper, we explore the potential of visual in-context learning to enable a single model to handle multiple tasks and adapt to new tasks at test time without re-training. Unlike previous approaches, our focus is on training in-context learners to adapt to sequences of tasks rather than individual tasks. Our goal is to solve complex tasks that involve multiple intermediate steps using a single model, allowing users to flexibly define entire vision pipelines at test time. To achieve this, we first examine the properties and limitations of visual in-context learning architectures, with a particular focus on the role of codebooks. We then introduce a novel method for training in-context learners using a synthetic compositional task generation engine. This engine bootstraps task sequences from arbitrary segmentation datasets, enabling the training of visual in-context learners for compositional tasks. Additionally, we investigate different masking-based training objectives to gather insights into how to better train models for solving complex, compositional tasks. Our exploration not only provides important insights, especially for multi-modal medical task sequences, but also highlights challenges that remain to be addressed.
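The abstract describes an engine that bootstraps compositional task sequences from segmentation datasets. A minimal sketch of that idea (hypothetical illustration only, not the authors' code): define elementary image-to-image tasks derived from a labeled mask, then chain them so a single input/target pair encodes a multi-step pipeline an in-context learner could be asked to imitate.

```python
import numpy as np

def binarize(mask, cls):
    """Elementary task: extract the binary mask of one class."""
    return (mask == cls).astype(np.uint8)

def dilate(mask, iterations=1):
    """Elementary task: morphological dilation with a plus-shaped element,
    implemented as a max over the mask and its four axis-aligned shifts."""
    out = mask.copy()
    for _ in range(iterations):
        p = np.pad(out, 1)
        out = np.max([p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
                      p[1:-1, :-2], p[1:-1, 2:]], axis=0)
    return out

def outline(mask):
    """Elementary task: one-pixel boundary = dilation minus the mask."""
    return dilate(mask) - mask

def compose(*tasks):
    """Chain elementary tasks into one compositional target function."""
    def pipeline(x):
        for task in tasks:
            x = task(x)
        return x
    return pipeline

# Bootstrap a training pair from a labeled segmentation mask: the input is
# the raw mask, the target realizes the sequence
# "binarize class 2 -> dilate -> outline".
seg = np.zeros((8, 8), dtype=np.uint8)
seg[3:5, 3:5] = 2  # a 2x2 region of class 2
target = compose(lambda m: binarize(m, 2), dilate, outline)(seg)
```

Sampling different task chains over the same dataset yields an unbounded supply of compositional supervision; the names and task choices above are assumptions for illustration, and the paper's actual engine and task vocabulary may differ.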

Simon Reiß, Zdravko Marinov, Alexander Jaus, Constantin Seibold, M. Saquib Sarfraz, Erik Rodner, Rainer Stiefelhagen

Subjects: medical research methods; computational techniques; computer technology

Simon Reiß, Zdravko Marinov, Alexander Jaus, Constantin Seibold, M. Saquib Sarfraz, Erik Rodner, Rainer Stiefelhagen. Is Visual in-Context Learning for Compositional Medical Tasks within Reach? [EB/OL]. (2025-07-02) [2025-07-21]. https://arxiv.org/abs/2507.00868.
