Paint Transformer: Feed Forward Neural Painting with Stroke Prediction (2108.03798v2)

Published 9 Aug 2021 in cs.CV

Abstract: Neural painting refers to the procedure of producing a series of strokes for a given image and non-photo-realistically recreating it using neural networks. While reinforcement learning (RL) based agents can generate a stroke sequence step by step for this task, it is not easy to train a stable RL agent. On the other hand, stroke optimization methods search for a set of stroke parameters iteratively in a large search space; such low efficiency significantly limits their prevalence and practicality. Different from previous methods, in this paper, we formulate the task as a set prediction problem and propose a novel Transformer-based framework, dubbed Paint Transformer, to predict the parameters of a stroke set with a feed forward network. This way, our model can generate a set of strokes in parallel and obtain the final painting of size 512 * 512 in near real time. More importantly, since there is no dataset available for training the Paint Transformer, we devise a self-training pipeline such that it can be trained without any off-the-shelf dataset while still achieving excellent generalization capability. Experiments demonstrate that our method achieves better painting performance than previous ones with cheaper training and inference costs. Codes and models are available.
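The abstract's core idea is that stroke generation is cast as set prediction: instead of an RL agent emitting one stroke per step, a feed-forward network maps image features to all stroke parameters at once. The sketch below is purely illustrative and not the authors' code; the feature size, stroke count, and the eight-value stroke parameterization (position, size, angle, color) are assumptions, and random weights stand in for a trained Paint Transformer head.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STROKES = 8   # strokes predicted per pass (illustrative choice)
PARAM_DIM = 8   # e.g. x, y, w, h, theta, r, g, b (assumed layout)
FEAT_DIM = 64   # flattened image-feature size (assumed)

# Random weights stand in for a trained feed-forward prediction head.
W = rng.normal(size=(FEAT_DIM, N_STROKES * PARAM_DIM)) * 0.1
b = np.zeros(N_STROKES * PARAM_DIM)

def predict_stroke_set(features: np.ndarray) -> np.ndarray:
    """Map image features to a full set of stroke parameters in one pass."""
    raw = features @ W + b                        # single feed-forward step
    params = 1.0 / (1.0 + np.exp(-raw))           # squash parameters to [0, 1]
    return params.reshape(N_STROKES, PARAM_DIM)   # one row per stroke

features = rng.normal(size=FEAT_DIM)
strokes = predict_stroke_set(features)
print(strokes.shape)  # (8, 8): the whole stroke set is emitted in parallel
```

The contrast with the RL formulation is that there is no sequential loop over strokes: one matrix multiply yields the entire set, which is what enables the near-real-time 512 * 512 painting claimed in the abstract.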

Authors (8)
  1. Songhua Liu (33 papers)
  2. Tianwei Lin (42 papers)
  3. Dongliang He (46 papers)
  4. Fu Li (86 papers)
  5. Ruifeng Deng (1 paper)
  6. Xin Li (980 papers)
  7. Errui Ding (156 papers)
  8. Hao Wang (1120 papers)
Citations (67)
