
Prompt design effects on LLM compliance

Determine whether particular zero-shot prompt design choices reliably increase or decrease the compliance of large language models with input instructions when performing text annotation tasks.


Background

The paper motivates its large-scale study by noting a lack of systematic evidence on how prompt design affects compliance, that is, whether models follow the requested output format and label constraints. Prior observations documented instances of non-compliance (e.g., generating labels outside the specified set), but general patterns remained unclear.
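To make the notion of compliance concrete, below is a minimal Python sketch (not from the paper) of a strict compliance check for a classification-style annotation task; the label set, parsing rule, and example outputs are all illustrative assumptions:

    # Hypothetical strict compliance check: the trimmed, lower-cased response
    # must equal exactly one label from the instructed set, with no extra text.
    ALLOWED_LABELS = {"positive", "negative", "neutral"}  # assumed task labels

    def is_compliant(raw_output: str) -> bool:
        return raw_output.strip().lower() in ALLOWED_LABELS

    responses = ["Positive", "neutral", "The sentiment is positive.", "mixed"]
    rate = sum(is_compliant(r) for r in responses) / len(responses)
    print(f"compliance rate: {rate:.2f}")  # 0.50 under this strict rule

Looser parsing rules (e.g., extracting the first allowed label found anywhere in the response) would yield higher measured compliance, which is one reason general patterns are hard to pin down.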

Establishing reliable prompt design principles for compliance is important for cost-effective and scalable annotation workflows in computational social science, where invalid outputs increase processing costs and can bias results.
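As an illustration of the design choices at issue, here is a hedged sketch (the wording is entirely hypothetical, not drawn from the paper) of two zero-shot prompt variants that differ only in an explicit output-format constraint:

    # Hypothetical zero-shot prompt variants for one annotation task, differing
    # only in whether the output format is explicitly constrained -- the kind
    # of design choice whose effect on compliance the question targets.
    TEXT = "The new update is sluggish and drains my battery."

    prompt_bare = (
        "Classify the sentiment of the following text as positive, negative, "
        f"or neutral.\n\nText: {TEXT}\nSentiment:"
    )

    prompt_constrained = (
        "Classify the sentiment of the following text. Respond with exactly "
        "one word from this set: positive, negative, neutral. Do not explain "
        f"your answer.\n\nText: {TEXT}\nSentiment:"
    )

Whether such added constraints reliably raise compliance, and whether they do so consistently across models and tasks, is precisely the open question.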

References

In the absence of any systematic evidence, however, it remains unclear whether certain prompt designs are more or less likely to generate compliant outputs.

Atreja et al., "Prompt Design Matters for Computational Social Science Tasks but in Unpredictable Ways," arXiv:2406.11980, 17 Jun 2024, Section 1 (Introduction).