
Towards Vision-Language-Garment Models for Web Knowledge Garment Understanding and Generation

Source: arXiv

Abstract

Multimodal foundation models have demonstrated strong generalization, yet their ability to transfer knowledge to specialized domains such as garment generation remains underexplored. We introduce VLG, a vision-language-garment model that synthesizes garments from textual descriptions and visual imagery. Our experiments assess VLG's zero-shot generalization, investigating its ability to transfer web-scale reasoning to unseen garment styles and prompts. Preliminary results indicate promising transfer capabilities, highlighting the potential for multimodal foundation models to adapt effectively to specialized domains like fashion design.

Tong Wu, Jan Ackermann, Kiyohiro Nakayama, Guandao Yang, Gordon Wetzstein

Subjects: Computing Technology, Computer Technology

Tong Wu, Jan Ackermann, Kiyohiro Nakayama, Guandao Yang, Gordon Wetzstein. Towards Vision-Language-Garment Models for Web Knowledge Garment Understanding and Generation [EB/OL]. (2025-06-30) [2025-07-02]. https://arxiv.org/abs/2506.05210.