Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation (2305.01569v2)
Abstract: The ability to collect a large dataset of human preferences from text-to-image users is usually limited to companies, making such datasets inaccessible to the public. To address this issue, we create a web app that enables text-to-image users to generate images and specify their preferences. Using this web app we build Pick-a-Pic, a large, open dataset of text-to-image prompts and real users' preferences over generated images. We leverage this dataset to train a CLIP-based scoring function, PickScore, which exhibits superhuman performance on the task of predicting human preferences. Then, we test PickScore's ability to perform model evaluation and observe that it correlates better with human rankings than other automatic evaluation metrics. Therefore, we recommend using PickScore for evaluating future text-to-image generation models, and using Pick-a-Pic prompts as a more relevant dataset than MS-COCO. Finally, we demonstrate how PickScore can enhance existing text-to-image models via ranking.
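The abstract describes PickScore as a CLIP-style scoring function over (prompt, image) pairs and notes that it can enhance existing text-to-image models simply by ranking candidate generations. The sketch below illustrates that best-of-n ranking idea; it is not the paper's own code, and the Hugging Face model id (`yuvalkirstain/PickScore_v1`), the processor checkpoint, and the exact preprocessing are assumptions rather than details taken from the paper.

```python
# Hedged sketch: rank candidate images for a prompt with a CLIP-style scoring
# model and keep the highest-scoring one (best-of-n ranking).
# The model/processor ids below are assumptions, not taken from the paper.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
model = AutoModel.from_pretrained("yuvalkirstain/PickScore_v1").eval().to(device)


def pick_best(prompt: str, images: list[Image.Image]) -> Image.Image:
    """Return the candidate image with the highest prompt-image score."""
    image_inputs = processor(images=images, return_tensors="pt").to(device)
    text_inputs = processor(
        text=prompt, padding=True, truncation=True, max_length=77, return_tensors="pt"
    ).to(device)

    with torch.no_grad():
        image_embs = model.get_image_features(**image_inputs)
        text_embs = model.get_text_features(**text_inputs)
        # Normalize embeddings and compute CLIP-style scaled cosine similarity.
        image_embs = image_embs / image_embs.norm(dim=-1, keepdim=True)
        text_embs = text_embs / text_embs.norm(dim=-1, keepdim=True)
        scores = model.logit_scale.exp() * (text_embs @ image_embs.T).squeeze(0)

    return images[scores.argmax().item()]
```

The same scoring function can be used for pairwise preference prediction (score two images for one prompt and predict the higher-scoring one as preferred) or for model evaluation by averaging scores over a prompt set.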
Authors: Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, Omer Levy