
AirSketch: Generative Motion to Sketch

Source: arXiv
Abstract

Illustration is a fundamental mode of human expression and communication. Certain types of motion that accompany speech can provide this illustrative mode of communication. While Augmented and Virtual Reality technologies (AR/VR) have introduced tools for producing drawings with hand motions (air drawing), they typically require costly hardware and additional digital markers, thereby limiting their accessibility and portability. Furthermore, air drawing demands considerable skill to achieve aesthetic results. To address these challenges, we introduce the concept of AirSketch, aimed at generating faithful and visually coherent sketches directly from hand motions, eliminating the need for complicated headsets or markers. We devise a simple augmentation-based self-supervised training procedure, enabling a controllable image diffusion model to learn to translate from highly noisy hand tracking images to clean, aesthetically pleasing sketches, while preserving the essential visual cues from the original tracking data. We present two air drawing datasets to study this problem. Our findings demonstrate that beyond producing photo-realistic images from precise spatial inputs, controllable image diffusion can effectively produce a refined, clear sketch from a noisy input. Our work serves as an initial step towards marker-less air drawing and reveals distinct applications of controllable diffusion models to AirSketch and AR/VR in general.
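The abstract leaves the augmentation procedure unspecified; below is a minimal, hypothetical Python/NumPy sketch of the core idea, synthesizing noisy hand-tracking trajectories from clean sketch strokes so that (noisy, clean) pairs can supervise a controllable image diffusion model. The augment_strokes helper and its jitter/drift perturbations are illustrative assumptions, not the authors' exact method.

import numpy as np

def augment_strokes(strokes, jitter_std=2.0, drift_std=0.5, seed=None):
    # Simulate hand-tracking noise on a list of (N, 2) stroke arrays:
    # per-point Gaussian jitter plus a slowly accumulating drift,
    # imitating hand tremor and sensor drift during air drawing.
    rng = np.random.default_rng(seed)
    noisy = []
    for pts in strokes:
        jitter = rng.normal(0.0, jitter_std, size=pts.shape)
        drift = np.cumsum(rng.normal(0.0, drift_std, size=pts.shape), axis=0)
        noisy.append(pts + jitter + drift)
    return noisy

# One clean square stroke -> a distorted counterpart (illustrative only).
square = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [0, 0]], dtype=float)
noisy_square = augment_strokes([square], seed=0)[0]

Rasterizing the perturbed and original strokes would yield a noisy tracking image and its clean target sketch, matching the self-supervised pairing the abstract describes for training the controllable diffusion model.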

Hui Xian Grace Lim, Xuanming Cui, Yogesh S Rawat, Ser-Nam Lim

Subjects: Computing Technology, Computer Technology; Automation Technology, Automation Equipment

Hui Xian Grace Lim, Xuanming Cui, Yogesh S Rawat, Ser-Nam Lim. AirSketch: Generative Motion to Sketch [EB/OL]. (2025-06-28) [2025-07-16]. https://arxiv.org/abs/2407.08906.
