Sketch2Manga: Shaded Manga Screening from Sketch with Diffusion Models (2403.08266v1)
Abstract: While manga is a popular entertainment form, creating manga is tedious, especially adding screentones to the drawn sketch, a step known as manga screening. Unfortunately, no existing method is tailored to automatic manga screening, likely due to the difficulty of generating high-quality, shaded, high-frequency screentones. Classic manga screening approaches generally require user input such as screentone exemplars or a reference manga image. Recent deep learning models enable automatic generation by learning from large-scale datasets; however, state-of-the-art models still fail to produce high-quality shaded screentones, owing to the lack of both a tailored model and high-quality manga training data. In this paper, we propose a novel sketch-to-manga framework that first generates a color illustration from the sketch and then generates a screentoned manga under intensity guidance. Our method significantly outperforms existing methods in generating high-quality manga with shaded, high-frequency screentones.
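To make the "intensity guidance" idea concrete, the sketch below is a toy illustration (not the paper's method, which uses diffusion models): it extracts a luminance map from a color illustration and applies a naive ordered-dither screentone, so darker regions receive denser high-frequency dots. All function names here are hypothetical, chosen only for this example.

```python
import numpy as np

def intensity_guidance(rgb):
    """Luminance map from a color illustration (Rec. 601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def naive_screentone(intensity, period=4):
    """Toy ordered-dither 'screentone': a periodic threshold pattern
    binarizes the intensity map, standing in for the diffusion-based
    screening stage described in the paper."""
    h, w = intensity.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Spatially varying threshold -> high-frequency dot pattern
    thresh = (np.sin(2 * np.pi * xx / period)
              * np.sin(2 * np.pi * yy / period) + 1) / 2
    return (intensity > thresh * 255).astype(np.uint8) * 255

# A flat mid-gray "illustration": the output mixes black and white dots
rgb = np.full((8, 8, 3), 128, dtype=np.float64)
tone = naive_screentone(intensity_guidance(rgb))
```

In the actual framework, the binarization step is replaced by a conditioned generative model so that the screentones are stylistically plausible rather than a fixed dither pattern.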
- “Richness-preserving manga screening,” ACM Transactions on Graphics (TOG), vol. 27, no. 5, pp. 1–8, 2008.
- “Content-sensitive screening in black and white,” in International Conference on Computer Graphics Theory and Applications, 2011, vol. 2, pp. 166–172.
- “MangaWall: Generating manga pages for real-time applications,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 679–683.
- “Manga filling style conversion with screentone variational autoencoder,” ACM Transactions on Graphics (TOG), vol. 39, no. 6, pp. 1–15, 2020.
- “Generating manga from illustrations via mimicking manga creation workflow,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5642–5651.
- “Shading-guided manga screening from reference,” IEEE Transactions on Visualization and Computer Graphics, 2023.
- “Reference-based screentone transfer via pattern correspondence and regularization,” in Computer Graphics Forum, 2023.
- “High-resolution image synthesis with latent diffusion models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 10674–10685.
- “Adding conditional control to text-to-image diffusion models,” 2023.
- “Two-stage sketch colorization,” ACM Transactions on Graphics (TOG), vol. 37, no. 6, pp. 1–14, 2018.
- “Language-based colorization of scene sketches,” ACM Transactions on Graphics (TOG), vol. 38, no. 6, pp. 1–16, 2019.
- “User-guided line art flat filling with split filling mechanism,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9889–9898.
- “Synthesis of screentone patterns of manga characters,” in IEEE International Symposium on Multimedia (ISM), 2019, pp. 212–2123.
- “Designing a better asymmetric VQGAN for StableDiffusion,” 2023.
- “Manga109 dataset and creation of metadata,” in Proceedings of the 1st International Workshop on CoMics ANalysis, Processing and Understanding, 2016.
- “Exploiting aliasing for manga restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 13405–13414.
- “Danbooru2021: A large-scale crowdsourced and tagged anime illustration dataset,” https://gwern.net/danbooru2021, 2022.
- “Generative adversarial networks,” Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020.
- “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125–1134.