
Fake Sentence Detection as a Training Task for Sentence Encoding (1808.03840v4)

Published 11 Aug 2018 in cs.CL

Abstract: Sentence encoders are typically trained on language modeling tasks with large unlabeled datasets. While these encoders achieve state-of-the-art results on many sentence-level tasks, they are costly to train, with training cycles that run for days or weeks. We introduce fake sentence detection as a new training task for learning sentence encoders. We automatically generate fake sentences by corrupting original sentences from a source collection and train the encoders to produce representations that are effective at detecting fake sentences. This binary classification task turns out to be quite efficient for training sentence encoders. We compare a basic BiLSTM encoder trained on this task with strong sentence encoding models (Skipthought and FastSent) trained on a language modeling task. We find that the BiLSTM trains much faster on fake sentence detection (20 hours instead of weeks) using smaller amounts of data (1M instead of 64M sentences). Further analysis shows the learned representations capture many syntactic and semantic properties expected from good sentence representations.
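
The training signal described in the abstract is easy to sketch: corrupt a real sentence into a fake one, then train an encoder plus a small classifier to tell the two apart. The snippet below is a minimal illustrative sketch of that data-generation step, assuming word-drop and word-swap corruptions; the function name, parameters, and exact corruption probabilities are placeholders, not the authors' exact procedure.

```python
import random

def make_fake(sentence, p_drop=0.1, n_swaps=2, rng=random):
    """Corrupt a real sentence into a 'fake' one.

    Illustrative sketch only: words are randomly dropped and a few word
    pairs are swapped, so the result stays locally fluent but is
    globally implausible.
    """
    words = sentence.split()
    # Drop each word with small probability, keeping at least two words.
    kept = [w for w in words if rng.random() > p_drop] or words[:2]
    # Swap a few random word pairs to break the original word order.
    for _ in range(n_swaps):
        if len(kept) > 1:
            i, j = rng.sample(range(len(kept)), 2)
            kept[i], kept[j] = kept[j], kept[i]
    return " ".join(kept)

# Each source sentence yields a (real, 1) and a (fake, 0) training pair;
# a BiLSTM encoder plus a small classifier is then trained on this
# binary real-vs-fake task.
real = "the cat sat on the mat"
pairs = [(real, 1), (make_fake(real), 0)]
print(pairs)
```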

Authors (4)
  1. Viresh Ranjan (10 papers)
  2. Heeyoung Kwon (8 papers)
  3. Niranjan Balasubramanian (53 papers)
  4. Minh Hoai (48 papers)
Citations (6)
