A Theoretically-Grounded Codebook for Digital Semantic Communications (2510.07108v1)

Published 8 Oct 2025 in cs.IT and math.IT

Abstract: The use of a learnable codebook provides an efficient way for semantic communications to map high-dimensional, vector-valued semantic features onto the discrete symbol representations required by digital communication systems. In this paper, the problem of codebook-enabled quantization mapping for digital semantic communications is studied from the perspective of information theory. In particular, a novel theoretically-grounded codebook design is proposed that jointly optimizes quantization efficiency, transmission efficiency, and robustness. First, a formal equivalence is established between the one-to-many synonymous mapping defined in semantic information theory and the many-to-one quantization mapping induced by the codebook's Voronoi partition. Then, the mutual information between semantic features and their quantized indices is derived in order to maximize the semantic information carried by the discrete indices. To approach this maximum in practice, an entropy-regularized quantization loss based on empirical estimation is introduced for end-to-end codebook training. Next, the channel-induced semantic distortion and the optimal codebook size for semantic communications are characterized under bit-flip errors. To mitigate the semantic distortion caused by physical-channel noise, a novel channel-aware semantic distortion loss is proposed. Simulation results on image reconstruction tasks demonstrate the superior performance of the proposed theoretically-grounded codebook: at a signal-to-noise ratio (SNR) of 10 dB, it achieves a 24.1% improvement in peak signal-to-noise ratio (PSNR) and a 46.5% improvement in learned perceptual image patch similarity (LPIPS) over existing codebook designs.
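The pipeline the abstract describes can be illustrated with a minimal sketch: many-to-one quantization of feature vectors to their nearest codeword (the Voronoi cell), the entropy of the empirical index distribution that an entropy-regularized loss would encourage toward the maximum log K, and independent bit flips on the transmitted index bits. All sizes, the random codebook, and the squared-distance distortion measure are illustrative assumptions, not the paper's actual configuration or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
K, d = 16, 8                      # codebook size, feature dimension
codebook = rng.normal(size=(K, d))

def quantize(z, codebook):
    """Many-to-one quantization: map each feature vector to the index
    of its nearest codeword, i.e. the Voronoi cell it falls in."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# A batch of "semantic features" (random stand-ins here).
z = rng.normal(size=(1024, d))
idx = quantize(z, codebook)

# Entropy of the empirical index distribution. An entropy-regularized
# quantization loss would push this toward its maximum log(K),
# i.e. uniform codeword usage, so the discrete indices carry as much
# information as possible.
p = np.bincount(idx, minlength=K) / len(idx)
entropy = -(p[p > 0] * np.log(p[p > 0])).sum()   # <= log(K)

# Bit-flip channel: each of the log2(K) = 4 index bits flips
# independently with probability eps, modeling physical-channel noise.
eps = 0.05
bits = (idx[:, None] >> np.arange(4)) & 1
flips = rng.random(bits.shape) < eps
rx_idx = ((bits ^ flips) << np.arange(4)).sum(axis=1)

# Channel-induced semantic distortion: squared distance between the
# transmitted and received codewords (one simple distortion proxy).
distortion = ((codebook[idx] - codebook[rx_idx]) ** 2).sum(axis=-1).mean()
```

With eps = 0, `rx_idx` equals `idx` and the distortion term vanishes; a channel-aware loss in the spirit of the paper would penalize this distortion during training so that nearby codewords are assigned indices whose bit patterns are close.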
