Abstract
Unsupervised style transfer that supports diverse input styles using only one trained generator is a challenging and interesting task in computer vision. This paper proposes the Multi-IlluStrator Style Generative Adversarial Network (MISS GAN), a multi-style framework for unsupervised image-to-illustration translation that generates styled yet content-preserving images. The illustration dataset is challenging, as it comprises illustrations by seven different illustrators and hence contains diverse styles. Existing methods either require training several generators (one per illustrator) to handle the different illustrators' styles, which limits their practical usage, or require training an image-specific network, which ignores the style information provided by the illustrator's other images. MISS GAN is both input-image specific and exploits the information of other images, using only one trained model.
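The core idea described in the abstract, one trained generator shared across all illustrator styles instead of one generator per illustrator, is commonly realized by conditioning the generator on a per-image style code. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch; the module names, layer sizes, and AdaIN-based conditioning are assumptions for exposition and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a single generator whose
# normalization layers are modulated by a style code, so one trained model can
# render multiple illustrators' styles. All architecture choices are illustrative.
import torch
import torch.nn as nn


class AdaIN(nn.Module):
    """Adaptive instance norm: scales/shifts content features with a style code."""
    def __init__(self, style_dim, num_features):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.affine = nn.Linear(style_dim, num_features * 2)

    def forward(self, x, s):
        gamma, beta = self.affine(s).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(x) + beta


class StyleConditionedGenerator(nn.Module):
    """One generator for all illustrator styles, conditioned on a style code."""
    def __init__(self, style_dim=64, ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, ch, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
        )
        self.adain = AdaIN(style_dim, ch * 2)
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(ch * 2, ch, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 7, 1, 3), nn.Tanh(),
        )

    def forward(self, content_img, style_code):
        h = self.encode(content_img)   # content features from the input image
        h = self.adain(h, style_code)  # inject the illustrator's style
        return self.decode(h)          # styled yet content-preserving output


if __name__ == "__main__":
    g = StyleConditionedGenerator()
    x = torch.randn(1, 3, 256, 256)  # natural input image
    s = torch.randn(1, 64)           # style code, e.g. produced by a style encoder
    print(g(x, s).shape)             # torch.Size([1, 3, 256, 256])
```

In this kind of design, the style code would typically come from a style encoder applied to one or more of an illustrator's images, which is how a single model can be both input-image specific and informed by the illustrator's other images.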
| Original language | English |
|---|---|
| Pages (from-to) | 140-147 |
| Number of pages | 8 |
| Journal | Pattern Recognition Letters |
| Volume | 151 |
| DOIs | |
| State | Published - Nov 2021 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2021 Elsevier B.V.
Keywords
- Generative adversarial networks
- Illustration
- Image to image translation
- Multi style transfer
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence