Sakuga-42M Dataset: Scaling Up Cartoon Research (2405.07425v1)

Published 13 May 2024 in cs.CV

Abstract: Hand-drawn cartoon animation employs sketches and flat-color segments to create the illusion of motion. While recent advancements like CLIP, SVD, and Sora show impressive results in understanding and generating natural video by scaling large models with extensive datasets, they are not as effective for cartoons. Through our empirical experiments, we argue that this ineffectiveness stems from a notable bias in hand-drawn cartoons that diverges from the distribution of natural videos. Can we harness the success of the scaling paradigm to benefit cartoon research? Unfortunately, until now, there has not been a sizable cartoon dataset available for exploration. In this research, we propose the Sakuga-42M Dataset, the first large-scale cartoon animation dataset. Sakuga-42M comprises 42 million keyframes covering various artistic styles, regions, and years, with comprehensive semantic annotations including video-text description pairs, anime tags, content taxonomies, etc. We pioneer the benefits of such a large-scale cartoon dataset on comprehension and generation tasks by finetuning contemporary foundation models like Video CLIP, Video Mamba, and SVD, achieving outstanding performance on cartoon-related tasks. Our motivation is to introduce large-scaling to cartoon research and foster generalization and robustness in future cartoon applications. Dataset, Code, and Pretrained Models will be publicly available.

References (55)
  1. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
  2. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023.
  3. Sora. https://openai.com/sora. Accessed: 2024-5-12.
  4. Deep animation video interpolation in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6587–6595, 2021.
  5. Deep geometrized cartoon line inbetweening. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7291–7300, 2023.
  6. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023.
  7. Videomamba: State space model for efficient video understanding. arXiv preprint arXiv:2403.06977, 2024.
  8. Pika. https://pika.art/. Accessed: 2024-5-3.
  9. Gen-2. https://research.runwayml.com/gen2. Accessed: 2024-5-3.
  10. Learning inclusion matching for animation paint bucket colorization. CVPR, 2024.
  11. Joint stroke tracing and correspondence for 2d animation. ACM Trans. Graph., 43(3), apr 2024.
  12. The animation transformer: Visual correspondence via segment matching. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11323–11332, 2021.
  13. Sprite-from-sprite: Cartoon animation decomposition with self-supervised sprite estimation. ACM Trans. Graph., 41(6), nov 2022.
  14. Re: Draw–context aware translation as a controllable method for artistic production. arXiv preprint arXiv:2401.03499, 2024.
  15. Toonsynth: example-based synthesis of hand-colored cartoon animations. ACM Transactions on Graphics (TOG), 37(4):1–11, 2018.
  16. Globally optimal toon tracking. ACM Transactions on Graphics (TOG), 35(4):1–10, 2016.
  17. Stereoscopizing cel animations. ACM Transactions on Graphics (TOG), 32(6):1–10, 2013.
  18. Dilight: Digital light table–inbetweening for 2d animations using guidelines. Computers & Graphics, 65:31–44, 2017.
  19. Exploring inbetween charts with trajectory-guided sliders for cutout animation. Multimedia Tools and Applications, pages 1–14, 2023.
  20. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023.
  21. Animate anyone: Consistent and controllable image-to-video synthesis for character animation. arXiv preprint arXiv:2311.17117, 2023.
  22. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. arXiv preprint arXiv:2402.19479, 2024.
  23. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1728–1738, 2021.
  24. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022.
  25. Pyscenedetect. https://github.com/Breakthrough/PySceneDetect. Accessed: 2024-5-12.
  26. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744, 2023.
  27. Share captioner. https://huggingface.co/Lin-Chen/ShareCaptioner. Accessed: 2024-5-12.
  28. Danbooru2021. https://gwern.net/danbooru2021. Accessed: 2024-5-12.
  29. Waifu dataset. https://github.com/thewaifuproject/waifu-dataset. Accessed: 2024-5-12.
  30. wd14-swin-v2. https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2. Accessed: 2024-5-12.
  31. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023.
  32. chatgpt. https://chatgpt.com/. Accessed: 2024-5-12.
  33. Dall-e3. https://openai.com/dall-e-3. Accessed: 2024-5-12.
  34. cafe-aesthetic-model. https://huggingface.co/cafeai/cafe_aesthetic. Accessed: 2024-5-12.
  35. manga-image-translator. https://github.com/zyddnys/manga-image-translator. Accessed: 2024-5-12.
  36. Learning audio-video modalities from image captions. In European Conference on Computer Vision, pages 407–426. Springer, 2022.
  37. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3558–3568, 2021.
  38. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014.
  39. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73, 2017.
  40. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24, 2011.
  41. Align your latents: High-resolution video synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22563–22575, 2023.
  42. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674–10685, 2022.
  43. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
  44. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 4296–4304, 2024.
  45. Gpt-4v. https://openai.com/research/gpt-4v-system-card. Accessed: 2024-5-12.
  46. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023.
  47. Efficient in-context learning in vision-language models for egocentric videos. arXiv preprint arXiv:2311.17041, 2023.
  48. Manga line extraction. https://github.com/ljsabc/MangaLineExtraction_PyTorch. Accessed: 2024-5-12.
  49. Anime2sketch. https://github.com/Mukosame/Anime2Sketch. Accessed: 2024-5-12.
  50. Automatic temporally coherent video colorization. In 2019 16th conference on computer and robot vision (CRV), pages 189–194. IEEE, 2019.
  51. Optical flow based line drawing frame interpolation using distance transform to support inbetweenings. In 2019 IEEE International Conference on Image Processing (ICIP), pages 4200–4204. IEEE, 2019.
  52. Deep sketch-guided cartoon video inbetweening. IEEE Transactions on Visualization and Computer Graphics, 28(8):2938–2952, 2021.
  53. Ldmvfi: Video frame interpolation with latent diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 1472–1480, 2024.
  54. I2vgen-xl: High-quality image-to-video synthesis via cascaded diffusion models. arXiv preprint arXiv:2311.04145, 2023.
  55. Wenhao Wang and Yi Yang. Vidprom: A million-scale real prompt-gallery dataset for text-to-video diffusion models. arXiv preprint arXiv:2403.06098, 2024.

Summary

  • The paper introduces a 42M keyframe dataset that addresses the challenge of limited cartoon data for training advanced animation models.
  • The methodology employs an automated pipeline for collection, keyframe detection, and detailed captioning to ensure data quality and adaptability.
  • The paper demonstrates marked improvements in text-to-video retrieval and animation generation metrics when fine-tuning vision-language models on Sakuga-42M.

Unveiling Sakuga-42M: Bridging the Gap in Cartoon Research with Large-Scale Data

Opening the Doors to Cartoon Animation Research

Hand-drawn cartoons have been enchanting us for over a century, from early animations like "Humorous Phases of Funny Faces" to the latest anime styles. Yet, creating these beautiful animations involves a lot of manual and repetitive work. Imagine the tasks—storyboarding, sketching, inbetweening, and coloring—each demanding immense effort and time. Even with advancements in computer vision, automating these processes across various styles has proven difficult.

Recent large-scale models like CLIP, Stable Video Diffusion (SVD), and Sora have demonstrated impressive capabilities in understanding and generating natural video. However, they fall short in the cartoon domain: hand-drawn animation diverges sharply from the distribution of natural videos, and until now there has been no sizable, high-quality cartoon dataset with which to close that gap. Enter Sakuga-42M: a game-changing dataset of 42 million keyframes designed specifically for cartoon animation research.

The Sakuga-42M Dataset: A Deep Dive

A Rich, Diverse Resource

Sakuga-42M is a treasure trove for researchers, encompassing a wide range of artistic styles, regions, and time periods. The dataset includes videos with comprehensive annotations, such as video-text pairs, anime tags, and content taxonomies. This richness paves the way for more generalized and robust cartoon research.

Dataset preparation relies on an automated pipeline covering video collection, clip splitting, keyframe detection, and captioning. This modular design keeps the pipeline adaptable and long-lived, allowing new tools to be swapped in as they become available.
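The paper's pipeline code is not reproduced here, but the clip-splitting stage can be approximated with PySceneDetect, which the authors list among their references. Below is a minimal sketch assuming PySceneDetect 0.6+; the detection threshold and output handling are illustrative choices, not the paper's settings.

```python
# Sketch of the clip-splitting stage using PySceneDetect (listed in the references).
# Threshold and output handling are illustrative assumptions, not the authors' settings.
from scenedetect import ContentDetector, detect, split_video_ffmpeg

def split_into_clips(video_path: str, threshold: float = 27.0):
    """Detect shot cuts in a raw cartoon video and split it into per-shot clips."""
    scene_list = detect(video_path, ContentDetector(threshold=threshold))
    # Each entry is a (start, end) pair of FrameTimecodes for one contiguous shot.
    split_video_ffmpeg(video_path, scene_list, show_progress=True)
    return scene_list

if __name__ == "__main__":
    for start, end in split_into_clips("raw_episode.mp4"):
        print(f"clip: {start.get_timecode()} -> {end.get_timecode()}")
```

Downstream stages (keyframe detection, tagging, captioning) would then consume the resulting per-shot clips in the same modular fashion.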

Numerical Scale and Quality

The numbers are staggering: 42 million keyframes drawn from 1.4 million video clips. These keyframes span multiple animation stages (rough drawings, tie-downs, and finished footage) as well as a range of resolutions and quality levels. For example, the majority of clips score highly on aesthetic quality and motion dynamics.
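The paper links an off-the-shelf aesthetic classifier (cafe_aesthetic) among its resources. A hedged sketch of how such a classifier could score and filter keyframes follows; the "aesthetic" label name and the 0.5 cutoff are assumptions for illustration, not the paper's filtering rule.

```python
# Hypothetical keyframe filtering with the cafe_aesthetic classifier linked in the references.
# The "aesthetic" label name and the 0.5 cutoff are illustrative assumptions.
from transformers import pipeline

aesthetic = pipeline("image-classification", model="cafeai/cafe_aesthetic")

def aesthetic_score(frame_path: str) -> float:
    """Return the classifier's probability that a keyframe is aesthetically pleasing."""
    results = aesthetic(frame_path)
    return next((r["score"] for r in results if r["label"] == "aesthetic"), 0.0)

frames = ["frame_0001.png", "frame_0002.png"]  # hypothetical keyframe paths
kept = [f for f in frames if aesthetic_score(f) > 0.5]
```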

Captions and Annotations

One standout feature is the detailed captioning process. The pipeline combines specialized anime tagging models with LLMs to produce rich, coherent text descriptions. This extra layer of semantic depth makes vision-language models considerably more effective on cartoon data.
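The exact captioning recipe is not spelled out in this summary, but a rough sketch of the idea, pairing a general image captioner such as BLIP-2 (cited in the references) with anime-style tags from a separate tagger, might look like the following. The tag-fusion step and prompt format are assumptions, not the authors' implementation.

```python
# Illustrative per-keyframe captioning with BLIP-2 (cited in the references), fused with
# anime-style tags from a separate tagger. The fusion format is an assumption, not the
# authors' exact recipe.
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

def caption_keyframe(frame_path: str, anime_tags: list[str]) -> str:
    image = Image.open(frame_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(device, dtype)
    out = model.generate(**inputs, max_new_tokens=50)
    base_caption = processor.batch_decode(out, skip_special_tokens=True)[0].strip()
    # Fuse the free-form caption with domain-specific tags (e.g. from a wd14-style tagger).
    return f"{base_caption}. Tags: {', '.join(anime_tags)}"

print(caption_keyframe("keyframe.png", ["1girl", "sketch", "running"]))
```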

Applying Sakuga-42M

Fine-tuning on Sakuga-42M yields substantial improvements in both cartoon understanding and cartoon generation.

Understanding Tasks

To gauge the dataset's utility, the researchers fine-tuned existing vision-language models such as Video CLIP and VideoMamba. The fine-tuned models showed remarkable improvements on zero-shot text-to-video and video-to-text retrieval. For instance, VideoMamba's retrieval accuracy on cartoon text-video pairs nearly tripled after training on Sakuga-42M.
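For intuition, here is a simplified stand-in for the retrieval evaluation: embed each clip by averaging CLIP frame features, embed each caption, and measure Recall@1. The frame-averaging proxy and the specific metric are simplifications; the paper evaluates fine-tuned video encoders under its own protocol.

```python
# Simplified stand-in for text-to-video retrieval evaluation: embed each clip by averaging
# CLIP frame features, embed each caption, and report Recall@1. This proxy is an
# illustrative assumption; the paper evaluates fine-tuned video encoders instead.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed_clip(frame_paths: list[str]) -> torch.Tensor:
    images = [Image.open(p).convert("RGB") for p in frame_paths]
    feats = model.get_image_features(**processor(images=images, return_tensors="pt"))
    feats = feats / feats.norm(dim=-1, keepdim=True)
    clip_emb = feats.mean(dim=0)                      # average frame embeddings
    return clip_emb / clip_emb.norm()

@torch.no_grad()
def text_to_video_recall_at_1(clips: list[list[str]], captions: list[str]) -> float:
    video_emb = torch.stack([embed_clip(c) for c in clips])
    text_inputs = processor(text=captions, return_tensors="pt", padding=True, truncation=True)
    text_emb = model.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    sims = text_emb @ video_emb.T                     # caption-to-clip similarity matrix
    hits = sims.argmax(dim=1) == torch.arange(len(captions))
    return hits.float().mean().item()
```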

Generating Cartoons

When it comes to generating cartoons, the gains are equally impressive. Fine-tuning the SVD model on Sakuga-42M resulted in better stability, dynamics, and overall quality in generated animations. Metrics such as Inception Score (IS), Fréchet Video Distance (FVD), and frame-wise CLIP similarity all improved markedly, underscoring the practical benefits of the dataset.
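The fine-tuned SVD weights are not bundled with this summary, but generation with SVD follows the standard diffusers image-to-video interface, and a cartoon-fine-tuned checkpoint would be loaded the same way. The checkpoint name below is the public SVD release, used only to illustrate the call pattern.

```python
# Standard diffusers image-to-video call pattern for Stable Video Diffusion. The public
# SVD checkpoint is used for illustration; a Sakuga-fine-tuned checkpoint (not released
# with this summary) would be loaded the same way.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Condition generation on a single keyframe, e.g. a colored key drawing.
image = load_image("first_keyframe.png").resize((1024, 576))
frames = pipe(image, decode_chunk_size=8, motion_bucket_id=127).frames[0]
export_to_video(frames, "generated_clip.mp4", fps=7)
```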

Broad Implications and Future Potential

The impact of Sakuga-42M extends far beyond immediate applications in understanding and generating cartoons. Here are a few broader implications and future possibilities:

  • Cartoon Generation: Seamless creation of new cartoon videos based on text prompts or user input.
  • Automatic Colorization: Enhanced colorization algorithms using extensive supervised image pairs from Sakuga-42M.
  • Automatic Inbetweening: Improved automation in generating inbetween frames, thanks to the rich dataset of rough sketches and keyframes.
  • Video Retrieval Systems: More effective search systems enabling animators to find specific references effortlessly.
  • Cartoon Understanding: Higher accuracy in captioning, scene interpretation, and dialog systems for animations, leveraging the domain-specific richness of Sakuga-42M.
  • Automatic Editing: Refined tools for editing and adjusting cartoon animations with greater flexibility and accuracy.

Conclusion

The Sakuga-42M dataset marks a significant leap forward in cartoon research. Its extensive collection of keyframes, rich annotations, and comprehensive captions provide an invaluable resource for numerous applications. As researchers continue to explore and expand upon this dataset, the future of automated cartoon animation looks brighter than ever. Whether you're developing new AI models or creating stunning animations, Sakuga-42M is set to become an essential tool in your toolbox.
