Prompting with Pseudo-Code Instructions (2305.11790v3)

Published 19 May 2023 in cs.CL

Abstract: Prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of LLMs. Given the inherent ambiguity present in natural language, it is intuitive to consider the possible advantages of prompting with less ambiguous prompt styles, such as pseudo-code. In this paper, we explore whether prompting via pseudo-code instructions helps improve the performance of pre-trained LLMs. We manually create a dataset of pseudo-code prompts for 132 different tasks spanning classification, QA, and generative language tasks, sourced from the Super-NaturalInstructions dataset. Using these prompts along with their counterparts in natural language, we study their performance on two LLM families, BLOOM and CodeGen. Our experiments show that using pseudo-code instructions leads to better results, with an average increase (absolute) of 7-16 points in F1 scores for classification tasks and an improvement (relative) of 12-38% in aggregate ROUGE-L scores across all tasks. We include detailed ablation studies which indicate that code comments, docstrings, and the structural clues encoded in pseudo-code all contribute towards the improvement in performance. To the best of our knowledge, our work is the first to demonstrate how pseudo-code prompts can be helpful in improving the performance of pre-trained LMs.
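To make the contrast concrete, here is a minimal sketch of the two prompt styles the abstract compares. The task, wording, and function name are illustrative assumptions, not examples drawn from the paper's dataset; the pseudo-code version follows the general pattern the abstract describes (a function signature plus a docstring and comments carrying the instruction).

```python
# Hypothetical sentiment-classification task, framed two ways.
# Neither prompt is taken from the Super-NaturalInstructions dataset;
# both are assumptions for illustration only.

# Natural language prompt: the instruction is free-form prose.
natural_language_prompt = (
    "Classify the sentiment of the given sentence as positive or negative.\n"
    'Sentence: "I loved this movie."\n'
    "Answer:"
)

# Pseudo-code prompt: the same instruction expressed as a function
# signature, with the docstring and comment supplying the task details
# and structural clues the abstract's ablations point to.
pseudo_code_prompt = '''def classify_sentiment(sentence: str) -> str:
    """Return "positive" or "negative" describing the sentiment of sentence."""
    # sentence: the input text whose sentiment should be classified


classify_sentiment("I loved this movie.")'''
```

Both strings would be sent to the model as-is; the pseudo-code variant is never executed, it simply reframes the instruction in a less ambiguous, code-like form.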

Authors (6)
  1. Mayank Mishra (38 papers)
  2. Prince Kumar (13 papers)
  3. Riyaz Bhat (4 papers)
  4. Rudra Murthy V (9 papers)
  5. Danish Contractor (26 papers)
  6. Srikanth Tamilselvam (18 papers)
Citations (13)