
Unsupervised Image-to-Image Translation with Generative Adversarial Networks

Source: arXiv

Abstract

It is useful to automatically transform an image from its original form into some synthetic form (style, partial contents, etc.) while keeping the original structure or semantics. We define this requirement as the "image-to-image translation" problem and propose a general approach to it, based on deep convolutional and conditional generative adversarial networks (GANs), which since 2014 have achieved phenomenal success in learning to map images from noise input. In this work, we develop a two-step (unsupervised) learning method that translates images between different domains using unlabeled images, without specifying any correspondence between them, so as to avoid the cost of acquiring labeled data. Compared with prior work, we demonstrate the generality of our model: a variety of translations can be carried out by a single type of model. Such capability is desirable in applications such as bidirectional translation.
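
The abstract describes the approach only at a high level: a conditional, convolutional GAN trained on unlabeled images from two domains with no paired correspondence. As a rough illustration of such an unpaired adversarial setup (not the authors' exact two-step architecture, which the abstract does not specify), the PyTorch sketch below trains a generator to map domain-X images so that a discriminator cannot distinguish them from real domain-Y images; the network shapes, hyperparameters, and the single-loss training loop are illustrative assumptions only.

# Generic sketch of unpaired GAN-based image translation (illustrative, not the paper's exact method).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an image from domain X to a translated image in domain Y."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like a real sample from domain Y."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_G, opt_D, real_x, real_y, bce):
    """One adversarial update using unpaired batches from domains X and Y."""
    # Update D: real domain-Y images vs. translated (fake) images G(X).
    fake_y = G(real_x).detach()
    d_loss = bce(D(real_y), torch.ones(real_y.size(0), 1)) + \
             bce(D(fake_y), torch.zeros(real_x.size(0), 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Update G: fool D so that G(X) is scored as a real domain-Y image.
    fake_y = G(real_x)
    g_loss = bce(D(fake_y), torch.ones(real_x.size(0), 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    bce = nn.BCEWithLogitsLoss()
    # Dummy unpaired batches standing in for unlabeled images from the two domains.
    x, y = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
    print(train_step(G, D, opt_G, opt_D, x, y, bce))

The paper's stated advantage is that a single type of model handles a variety of translations between domains, whereas this sketch shows only a one-directional mapping for concreteness.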

Chao Wu, Paarth Neekhara, Hao Dong, Yike Guo

Subject: computing technology; computer technology

Chao Wu, Paarth Neekhara, Hao Dong, Yike Guo. Unsupervised Image-to-Image Translation with Generative Adversarial Networks [EB/OL]. (2017-01-10) [2025-07-20]. https://arxiv.org/abs/1701.02676.
