Near-imperceptible Neural Linguistic Steganography via Self-Adjusting Arithmetic Coding (2010.00677v1)

Published 1 Oct 2020 in cs.CL and cs.CR

Abstract: Linguistic steganography studies how to hide secret messages in natural language cover texts. Traditional methods aim to transform a secret message into an innocent text via lexical substitution or syntactical modification. Recently, advances in neural language models (LMs) enable us to directly generate cover text conditioned on the secret message. In this study, we present a new linguistic steganography method which encodes secret messages using self-adjusting arithmetic coding based on a neural language model. We formally analyze the statistical imperceptibility of this method and empirically show it outperforms the previous state-of-the-art methods on four datasets by 15.3% and 38.9% in terms of bits/word and KL metrics, respectively. Finally, human evaluations show that 51% of generated cover texts can indeed fool eavesdroppers.
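
The core idea behind arithmetic-coding-based generation can be sketched briefly: sender and receiver share the same LM, the secret bitstream is read as a binary fraction in [0, 1), and at each step the current interval is partitioned among candidate tokens in proportion to their model probability; the token whose sub-interval contains the fraction is emitted, and the interval narrows accordingly. The sketch below is illustrative only, not the paper's exact self-adjusting coder (which additionally handles finite-precision arithmetic and carries the imperceptibility analysis); `next_token_probs` is a hypothetical stand-in for a neural LM's next-token distribution.

```python
# Illustrative sketch of arithmetic-coding steganography with a shared LM.
# `next_token_probs(prefix)` is a hypothetical function returning a list of
# (token, probability) pairs that sum to 1; it is NOT from the paper's code.

def bits_to_fraction(bits):
    """Interpret a bit list as a binary fraction in [0, 1)."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

def determined_bits(low, high):
    """Bits shared by every number in [low, high): while the interval sits
    entirely in one half of the unit interval, that half's bit is fixed."""
    bits = []
    while high <= 0.5 or low >= 0.5:
        if high <= 0.5:
            bits.append(0)
            low, high = 2 * low, 2 * high
        else:
            bits.append(1)
            low, high = 2 * low - 1, 2 * high - 1
    return bits

def encode(secret_bits, next_token_probs, max_len=200):
    """Generate cover tokens whose LM-induced coding interval pins down
    every secret bit."""
    target = bits_to_fraction(secret_bits)
    low, high = 0.0, 1.0
    tokens = []
    for _ in range(max_len):
        width, cum = high - low, low
        items = next_token_probs(tokens)
        for j, (token, p) in enumerate(items):
            # Partition [low, high) proportionally to p; clamp the last
            # sub-interval to `high` to absorb floating-point slack.
            nxt = high if j == len(items) - 1 else cum + p * width
            if target < nxt:
                tokens.append(token)
                low, high = cum, nxt
                break
            cum = nxt
        if len(determined_bits(low, high)) >= len(secret_bits):
            return tokens  # interval now determines every secret bit
    raise ValueError("max_len too small to embed all bits")

def decode(tokens, next_token_probs, n_bits):
    """Replay the interval narrowing with the same LM to recover the bits."""
    low, high = 0.0, 1.0
    for i, observed in enumerate(tokens):
        width, cum = high - low, low
        items = next_token_probs(tokens[:i])
        for j, (token, p) in enumerate(items):
            nxt = high if j == len(items) - 1 else cum + p * width
            if token == observed:
                low, high = cum, nxt
                break
            cum = nxt
    return determined_bits(low, high)[:n_bits]

if __name__ == "__main__":
    # Toy stand-in for a neural LM: a fixed next-token distribution.
    toy_lm = lambda prefix: [("the", 0.4), ("cat", 0.3),
                             ("sat", 0.2), ("mat", 0.1)]
    secret = [1, 0, 1, 1, 0, 0, 1, 0]
    cover = encode(secret, toy_lm)
    assert decode(cover, toy_lm, len(secret)) == secret
    print(" ".join(cover))
```

With a real neural LM, `next_token_probs` would return the model's (possibly truncated and renormalized) next-token distribution given the prefix; imperceptibility hinges on the emitted tokens following that distribution, which is what the paper's KL metric measures.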

Authors (3)
  1. Jiaming Shen (56 papers)
  2. Heng Ji (266 papers)
  3. Jiawei Han (263 papers)
Citations (29)
