
Human Preference Score: Better Aligning Text-to-Image Models with Human Preference (2303.14420v2)

Published 25 Mar 2023 in cs.CV and cs.AI

Abstract: Recent years have witnessed a rapid growth of deep generative models, with text-to-image models gaining significant attention from the public. However, existing models often generate images that do not align well with human preferences, such as awkward combinations of limbs and facial expressions. To address this issue, we collect a dataset of human choices on generated images from the Stable Foundation Discord channel. Our experiments demonstrate that current evaluation metrics for generative models do not correlate well with human choices. Thus, we train a human preference classifier with the collected dataset and derive a Human Preference Score (HPS) based on the classifier. Using HPS, we propose a simple yet effective method to adapt Stable Diffusion to better align with human preferences. Our experiments show that HPS outperforms CLIP in predicting human choices and has good generalization capability toward images generated from other models. By tuning Stable Diffusion with the guidance of HPS, the adapted model is able to generate images that are more preferred by human users. The project page is available here: https://tgxs002.github.io/align_sd_web/ .

Human Preference Score: Better Aligning Text-to-Image Models with Human Preference

In the paper "Human Preference Score: Better Aligning Text-to-Image Models with Human Preference," Wu et al. introduce a novel approach to align text-to-image generative models, specifically Stable Diffusion, with human preferences. The authors identify a gap in existing evaluation metrics for generative models, such as Inception Score (IS) and Fréchet Inception Distance (FID), which fail to correlate well with human choices. They propose the Human Preference Score (HPS) as a new metric that better reflects human preferences over generated images.

Introduction and Context

Recent advancements in diffusion models have significantly improved text-to-image generation, with models like DALL·E and Stable Diffusion gaining substantial traction. Despite this, existing models often produce images that misalign with human preferences, resulting in unnatural or awkward visual elements such as distorted limbs and facial expressions. The authors argue that current metrics such as IS and FID inadequately capture human preferences: both rely on ImageNet-trained networks, which introduces bias, and both are single-modal, ignoring the user intention expressed in the prompt.

Human Preference Dataset

The authors construct a large-scale dataset of generated images and corresponding human preferences sourced from the Stable Foundation Discord channel. It consists of 98,807 images generated by Stable Diffusion, paired with 25,205 human choice instances in which a user selected a preferred image among candidates generated from the same prompt. This dataset provides the foundation for evaluating how well various metrics correlate with human preference.
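
To make the choice-prediction setup concrete, here is a minimal sketch of how one such sample and the evaluation criterion could be represented; the field names are illustrative, not the paper's released format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PreferenceSample:
    # One human-choice instance: a prompt, the candidate images shown to
    # the user, and the index of the image the user preferred.
    prompt: str
    image_paths: List[str]
    chosen_index: int

def choice_accuracy(predicted: List[int], samples: List[PreferenceSample]) -> float:
    """Fraction of samples where a metric's top-ranked image matches the human choice."""
    hits = sum(int(p == s.chosen_index) for p, s in zip(predicted, samples))
    return hits / len(samples)
```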

Human Preference Score: Methodology and Results

Leveraging the dataset, the authors train a human preference classifier by fine-tuning the CLIP model: given a prompt and its candidate images, the classifier is trained to identify the image that the human user chose. The Human Preference Score (HPS) is then derived from this classifier to assess how well a generated image aligns with human preferences. The fine-tuned classifier predicts human choices substantially more accurately than the original CLIP model.
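
A minimal sketch of this fine-tuning objective, assuming an open_clip ViT-L/14 backbone; the actual architecture, temperature, and data pipeline used in the paper may differ:

```python
import torch
import torch.nn.functional as F
import open_clip

# Pretrained CLIP to be fine-tuned into a preference classifier (assumed backbone).
model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-L-14")

def preference_loss(images: torch.Tensor, prompt: str, chosen_index: int,
                    temperature: float = 0.01) -> torch.Tensor:
    """Cross-entropy over image-text similarities for one prompt's candidate
    images, with the human-chosen image as the target class."""
    img_f = F.normalize(model.encode_image(images), dim=-1)               # (n, d)
    txt_f = F.normalize(model.encode_text(tokenizer([prompt])), dim=-1)   # (1, d)
    logits = (img_f @ txt_f.t()).t() / temperature                        # (1, n)
    return F.cross_entropy(logits, torch.tensor([chosen_index]))
```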

HPS also remains reliable on images generated by models other than Stable Diffusion: in user studies, its rankings maintain strong agreement with human judgment. This generalization beyond a single model's outputs suggests that HPS captures aesthetic quality rather than merely the factual alignment between text and image.
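
Continuing the sketch above (reusing `model` and `tokenizer`), scoring a single generation then reduces to the fine-tuned model's image-text similarity; the exact definition and scaling used in the paper may differ:

```python
@torch.no_grad()
def hps(image: torch.Tensor, prompt: str) -> float:
    """Human Preference Score sketch: scaled cosine similarity between the
    fine-tuned model's image and text embeddings (higher = more preferred)."""
    img_f = F.normalize(model.encode_image(image.unsqueeze(0)), dim=-1)
    txt_f = F.normalize(model.encode_text(tokenizer([prompt])), dim=-1)
    return 100.0 * float((img_f * txt_f).sum())
```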

Improving Stable Diffusion with Human Preference Guidance

Using HPS, Wu et al. propose a mechanism to adapt Stable Diffusion so that it generates images better aligned with human preferences. They use HPS to label generated images as preferred or non-preferred, construct a fine-tuning dataset from these labels, and adapt the Stable Diffusion model via Low-Rank Adaptation (LoRA). The adapted model thus becomes explicitly aware of aesthetic alignment, a distinct improvement over the original model.
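
A hedged sketch of the data-construction step under this scheme: generations are split by their HPS, and the prompts of non-preferred images are marked with a special identifier so the fine-tuned model can learn to separate them. The identifier string, the thresholds, and the LoRA training loop itself are assumptions here, not the paper's exact recipe:

```python
SPECIAL_PREFIX = "[non-preferred] "   # hypothetical identifier prepended to captions
HIGH_HPS, LOW_HPS = 22.0, 18.0        # hypothetical score thresholds

def build_finetune_records(generations):
    """generations: iterable of (preprocessed image tensor, original prompt)."""
    records = []
    for image, prompt in generations:
        score = hps(image, prompt)          # scoring function from the sketch above
        if score >= HIGH_HPS:
            records.append({"image": image, "caption": prompt})
        elif score <= LOW_HPS:
            records.append({"image": image, "caption": SPECIAL_PREFIX + prompt})
        # generations with intermediate scores are discarded
    return records
```

The resulting records would then feed a standard LoRA fine-tuning run of Stable Diffusion; at sampling time the special identifier can be used, for example, as a negative prompt so the adapted model steers away from low-preference outputs.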

In practical assessments, images generated by the adapted model exhibit higher quality, aligning more closely with human preferences and showing fewer artifacts such as awkward limbs. These qualitative improvements are supported quantitatively by user studies showing higher user satisfaction.

Implications and Future Directions

The research marks substantial progress in aligning generative models with human preferences, suggesting broader utility for HPS in refining AI outputs toward aesthetic and artistic standards that traditional metrics overlook. Future work could broaden the scope and diversity of the dataset, reduce biases intrinsic to the collected preferences, and apply the HPS methodology to modalities beyond image generation.

Integrating subjective human judgments into model evaluation and iterative refinement, as demonstrated here, opens pathways for applications where human satisfaction and subjective quality play critical roles. The work encourages further exploration of human-centered metrics and adaptive fine-tuning methods across diverse AI applications.

Authors
  1. Xiaoshi Wu
  2. Keqiang Sun
  3. Feng Zhu
  4. Rui Zhao
  5. Hongsheng Li