
HOLa: Zero-Shot HOI Detection with Low-Rank Decomposed VLM Feature Adaptation

Source: arXiv

Abstract

Zero-shot human-object interaction (HOI) detection remains a challenging task, particularly in generalizing to unseen actions. Existing methods address this challenge by tapping Vision-Language Models (VLMs) to access knowledge beyond the training data. However, they either struggle to distinguish actions involving the same object or demonstrate limited generalization to unseen classes. In this paper, we introduce HOLa (Zero-Shot HOI Detection with Low-Rank Decomposed VLM Feature Adaptation), a novel approach that both enhances generalization to unseen classes and improves action distinction. In training, HOLa decomposes VLM text features for given HOI classes via low-rank factorization, producing class-shared basis features and adaptable weights. These features and weights form a compact HOI representation that preserves shared information across classes, enhancing generalization to unseen classes. Subsequently, we refine action distinction by adapting weights for each HOI class and introducing human-object tokens to enrich visual interaction representations. To further distinguish unseen actions, we guide the weight adaptation with LLM-derived action regularization. Experimental results show that our method sets a new state-of-the-art across zero-shot HOI settings on HICO-DET, achieving an unseen-class mAP of 27.91 in the unseen-verb setting. Our code is available at https://github.com/ChelsieLei/HOLa.
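The abstract's core idea of decomposing VLM text features into class-shared basis features and per-class adaptable weights can be sketched with a truncated SVD. This is an illustrative sketch only, not the authors' implementation; the feature dimension, class count, and rank below are hypothetical, and the random matrix stands in for real CLIP-style text features.

```python
# Hedged sketch (not the paper's code): low-rank factorization of VLM text
# features T into class-shared basis features B and per-class weights W,
# so that T ~= W @ B. All sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim, rank = 600, 512, 64  # hypothetical HOI classes, dim, rank

# T: one text feature per HOI class (in the paper, from a VLM text encoder)
T = rng.standard_normal((num_classes, feat_dim))

# Truncated SVD gives the best rank-r approximation T ~= W @ B.
U, s, Vt = np.linalg.svd(T, full_matrices=False)
W = U[:, :rank] * s[:rank]  # per-class weights (adaptable during training)
B = Vt[:rank]               # basis features shared across all classes

# Relative reconstruction error of the compact representation.
err = np.linalg.norm(T - W @ B) / np.linalg.norm(T)
print(f"shapes: W={W.shape}, B={B.shape}, relative error={err:.3f}")
```

In the paper's training scheme, only the weights (and added human-object tokens) are adapted per class, while the shared basis preserves information common across classes; the sketch above shows just the factorization step.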

Qinqian Lei, Bo Wang, Robby T. Tan

Subject: Computing Technology; Computer Technology

Qinqian Lei, Bo Wang, Robby T. Tan. HOLa: Zero-Shot HOI Detection with Low-Rank Decomposed VLM Feature Adaptation [EB/OL]. (2025-08-04) [2025-08-10]. https://arxiv.org/abs/2507.15542