Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models (2309.12294v1)

Published 21 Sep 2023 in cs.CL

Abstract: LLMs have demonstrated impressive capabilities in natural language generation. However, their output quality can be inconsistent, posing challenges for generating natural language from logical forms (LFs). This task requires the generated outputs to embody the exact semantics of LFs, without missing any LF semantics or creating any hallucinations. In this work, we tackle this issue by proposing a novel generate-and-rerank approach. Our approach involves initially generating a set of candidate outputs by prompting an LLM and subsequently reranking them using a task-specific reranker model. In addition, we curate a manually collected dataset to evaluate the alignment between different ranking metrics and human judgements. The chosen ranking metrics are utilized to enhance the training and evaluation of the reranker model. By conducting extensive experiments on three diverse datasets, we demonstrate that the candidates selected by our reranker outperform those selected by baseline methods in terms of semantic consistency and fluency, as measured by three comprehensive metrics. Our findings provide strong evidence for the effectiveness of our approach in improving the quality of generated outputs.
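The generate-and-rerank pipeline the abstract describes can be sketched as follows. This is a minimal illustration only: the candidate strings, the toy coverage/extra-word heuristic, and all function names are hypothetical stand-ins for the paper's actual LLM sampler and trained task-specific reranker model.

```python
def generate_candidates(logical_form, n=4):
    """Stand-in for sampling n outputs from an LLM prompted with the
    logical form (LF). Real candidates would come from the model."""
    return [
        "How many rivers run through Texas?",
        "How many rivers are there?",               # misses the 'texas' constraint
        "Count the rivers that cross Texas.",
        "How many rivers and lakes are in Texas?",  # hallucinates 'lakes'
    ][:n]

def rerank_score(logical_form, candidate):
    """Toy reranker: reward candidates that mention every LF symbol
    (penalizing missing semantics) and penalize unexpected content
    words (a crude proxy for hallucination detection)."""
    symbols = {"rivers", "texas"}  # assumed symbols of the example LF
    function_words = {"how", "many", "run", "through", "count", "the",
                      "that", "cross", "are", "there", "in", "and"}
    words = set(candidate.lower().replace("?", "").replace(".", "").split())
    coverage = len(symbols & words) / len(symbols)   # missing-semantics term
    extras = len(words - symbols - function_words)   # hallucination term
    return coverage - 0.5 * extras

def generate_and_rerank(logical_form):
    """Generate a candidate set, then return the highest-scoring one."""
    candidates = generate_candidates(logical_form)
    return max(candidates, key=lambda c: rerank_score(logical_form, c))

best = generate_and_rerank("count(filter(rivers, traverses=texas))")
```

In the paper's setting the scoring function is a learned model selected for its alignment with human judgements, rather than a hand-written heuristic; the selection step itself (score every candidate, keep the argmax) has the same shape as above.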

Authors (6)
  1. Levon Haroutunian (3 papers)
  2. Zhuang Li (69 papers)
  3. Lucian Galescu (2 papers)
  4. Philip Cohen (5 papers)
  5. Raj Tumuluri (2 papers)
  6. Gholamreza Haffari (141 papers)
Citations (1)
