Gen-AFFECT: Generation of Avatar Fine-grained Facial Expressions with Consistent identiTy
Customized 2D avatars are widely used in gaming, virtual communication, education, and content creation. However, existing approaches often fail to capture fine-grained facial expressions and struggle to preserve identity across different expressions. We propose GEN-AFFECT, a novel framework for personalized avatar generation that produces expressive, identity-consistent avatars with a diverse set of facial expressions. Our framework conditions a multimodal diffusion transformer on an extracted identity-expression representation, enabling both identity preservation and coverage of a wide range of facial expressions. GEN-AFFECT additionally employs consistent attention at inference time to share information across the set of generated expressions, allowing the generation process to maintain identity consistency over the full array of fine-grained expressions. GEN-AFFECT outperforms previous state-of-the-art methods in the accuracy of the generated expressions, the preservation of the source identity, and the consistency of the target identity across the array of fine-grained facial expressions.
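The "consistent attention" idea mentioned in the abstract can be illustrated with a minimal sketch: during inference, each generated image's queries attend not only to its own tokens but also to the keys and values of the other images in the batch, so identity cues are shared across the expression set. The function below is a hypothetical NumPy illustration of this general shared-attention pattern, not the paper's actual implementation; all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistent_attention(Q, K, V):
    """Sketch of batch-shared self-attention (hypothetical).

    Q, K, V: arrays of shape (batch, tokens, dim), one batch entry
    per generated expression of the same identity.
    Each image's queries attend to the concatenated keys/values of
    ALL images in the batch, sharing identity information across
    the generated expression set.
    """
    b, t, d = K.shape
    # Flatten the batch axis into the token axis so every image can
    # attend to tokens from every other generated expression.
    K_shared = K.reshape(1, b * t, d)   # (1, b*t, d)
    V_shared = V.reshape(1, b * t, d)   # (1, b*t, d)
    scores = Q @ K_shared.transpose(0, 2, 1) / np.sqrt(d)  # (b, t, b*t)
    return softmax(scores, axis=-1) @ V_shared             # (b, t, d)

# Toy usage: 4 expressions of one identity, 8 tokens, 16 dims.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8, 16))
out = consistent_attention(Q, Q, Q)
print(out.shape)  # (4, 8, 16)
```

In standard self-attention the score matrix would be (batch, tokens, tokens); widening it to (batch, tokens, batch*tokens) is what lets identity features propagate between the images generated in parallel.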
Hao Yu, Rupayan Mallick, Margrit Betke, Sarah Adel Bargal
Computing Technology, Computer Technology
Hao Yu, Rupayan Mallick, Margrit Betke, Sarah Adel Bargal. Gen-AFFECT: Generation of Avatar Fine-grained Facial Expressions with Consistent identiTy [EB/OL]. (2025-08-13) [2025-08-24]. https://arxiv.org/abs/2508.09461.