An Empirical Study on Learning and Improving the Search Objective for Unsupervised Paraphrasing (2203.12106v1)

Published 23 Mar 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Research in unsupervised text generation has been gaining attention over the years. One recent approach is local search towards a heuristically defined objective, which scores language fluency, semantic preservation, and other task-specific attributes. Search in the sentence space is realized by word-level edit operations, including insertion, replacement, and deletion. However, such an objective function is manually designed from multiple components. Although previous work has shown that maximizing this objective yields good performance in terms of the true measures of success (i.e., BLEU and iBLEU), the objective landscape is considered non-smooth and significantly noisy, posing challenges for optimization. In this dissertation, we address the research problem of smoothing the noise in the heuristic search objective by learning to model the search dynamics. The learned model is then combined with the original objective function to guide the search in a bootstrapping fashion. Experimental results show that the learned models, combined with the original search objective, indeed provide a smoothing effect, improving search performance by a small margin.
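
To make the search procedure described above concrete, here is a minimal, self-contained Python sketch of hill-climbing local search over sentences using the three word-level edit operations (insertion, replacement, deletion). The scoring functions are trivial placeholders, not the paper's actual fluency or semantic scorers, and the vocabulary, function names, and weighting are all illustrative assumptions.

```python
import itertools

# Hypothetical scoring stubs. The paper's heuristic objective combines a
# language-model fluency score with a semantic-similarity score; the
# placeholders below exist only so the sketch runs standalone.
def fluency(candidate):
    # Placeholder: mildly prefers shorter sentences (NOT a real LM score).
    return 1.0 / (1.0 + len(candidate))

def semantic_similarity(original, candidate):
    # Placeholder: word-overlap (Jaccard) ratio (NOT a real embedding score).
    a, b = set(original), set(candidate)
    return len(a & b) / max(len(a | b), 1)

def objective(original, candidate):
    # Heuristic search objective: a product of component scores.
    return fluency(candidate) * semantic_similarity(original, candidate)

# Tiny illustrative vocabulary for insertion/replacement proposals.
VOCAB = ["the", "a", "quick", "fast", "brown", "fox", "jumps", "leaps"]

def neighbors(sentence):
    """Generate candidates via word-level insertion, replacement, deletion."""
    cands = []
    for i in range(len(sentence) + 1):          # insertion
        for w in VOCAB:
            cands.append(sentence[:i] + [w] + sentence[i:])
    for i in range(len(sentence)):              # replacement
        for w in VOCAB:
            if w != sentence[i]:
                cands.append(sentence[:i] + [w] + sentence[i + 1:])
    if len(sentence) > 1:                       # deletion
        for i in range(len(sentence)):
            cands.append(sentence[:i] + sentence[i + 1:])
    return cands

def hill_climb(original, max_steps=20):
    """Greedy local search: take the best-scoring edit until no edit improves."""
    current = list(original)
    best = objective(original, current)
    for _ in range(max_steps):
        cand = max(neighbors(current), key=lambda c: objective(original, c))
        score = objective(original, cand)
        if score <= best:                       # local optimum reached
            break
        current, best = cand, score
    return current

if __name__ == "__main__":
    src = "the quick brown fox jumps".split()
    print(" ".join(hill_climb(src)))
```

With real scorers, the fluency term would come from a language model and the semantic term from sentence embeddings; in the dissertation's setup, the learned model of search dynamics would additionally be mixed into `objective` to smooth the noisy landscape the abstract describes.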
