End-to-End Fast Training of Communication Links Without a Channel Model via Online Meta-Learning (2003.01479v1)

Published 3 Mar 2020 in eess.SP, cs.IT, cs.LG, and math.IT

Abstract: When a channel model is not available, end-to-end training of the encoder and decoder over a noisy fading channel generally requires repeated use of the channel and of a feedback link. An important limitation of this approach is that training generally must be carried out from scratch for each new channel. To cope with this problem, prior works considered joint training over multiple channels, with the aim of finding a single encoder-decoder pair that works well over a whole class of channels. In this paper, we propose to overcome the limitations of joint training via meta-learning. The proposed approach is based on a meta-training phase in which online gradient-based meta-learning of the decoder is coupled with joint training of the encoder via the transmission of pilots and the use of a feedback link. By accounting for channel variations during the meta-training phase, this work demonstrates the advantages of meta-learning over conventional methods in terms of the number of pilots required when the feedback link is available only for meta-training and not at run time.
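To make the inner/outer loop structure described in the abstract concrete, the following Python sketch shows a simplified version of the setting: a meta-training phase over a stream of fading-channel realizations, followed by run-time adaptation of the decoder from a handful of pilots with no feedback link. This is not the paper's algorithm: it uses a Reptile-style first-order meta-update as a stand-in for the gradient-based meta-learning rule, fixes the encoder to BPSK instead of jointly training it, and assumes a toy real-valued scalar fading channel. All function and variable names are illustrative.

```python
# Toy sketch of online meta-learning for decoder adaptation over fading
# channels (simplified stand-in for the approach in the abstract, not the
# authors' exact algorithm). Encoder is fixed BPSK; decoder is a
# logistic-regression detector; meta-update is Reptile-style.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(theta, y, bits):
    # Binary cross-entropy for the decoder p(bit = 1 | y) = sigmoid(w*y + b).
    w, b = theta
    p = sigmoid(w * y + b)
    err = p - bits                                   # dL/d(logit)
    grad = np.array([np.mean(err * y), np.mean(err)])
    loss = -np.mean(bits * np.log(p + 1e-12) + (1 - bits) * np.log(1 - p + 1e-12))
    return loss, grad

def adapt(theta, y, bits, steps=5, lr=0.5):
    # Inner loop: a few pilot-driven gradient steps on one channel realization.
    th = theta.copy()
    for _ in range(steps):
        _, g = loss_and_grad(th, y, bits)
        th -= lr * g
    return th

theta_meta = np.zeros(2)            # meta-initialization of decoder parameters
meta_lr, n_pilots, noise_std = 0.1, 8, 0.3

# Meta-training phase: feedback is available, so the pilot bits serve as
# labeled data for each new channel realization h.
for _ in range(2000):
    h = rng.normal()                                 # new fading realization
    bits = rng.integers(0, 2, n_pilots).astype(float)
    y = h * (2.0 * bits - 1.0) + noise_std * rng.normal(size=n_pilots)
    theta_adapted = adapt(theta_meta, y, bits)
    theta_meta += meta_lr * (theta_adapted - theta_meta)   # Reptile meta-update

# Run time: no feedback link; adapt the meta-learned decoder from pilots only.
h_new = rng.normal()
pilot_bits = rng.integers(0, 2, n_pilots).astype(float)
pilot_y = h_new * (2.0 * pilot_bits - 1.0) + noise_std * rng.normal(size=n_pilots)
theta_run = adapt(theta_meta, pilot_y, pilot_bits)

# Evaluate bit error rate on fresh data over the same channel realization.
data_bits = rng.integers(0, 2, 10_000).astype(float)
data_y = h_new * (2.0 * data_bits - 1.0) + noise_std * rng.normal(size=10_000)
w, b = theta_run
ber = np.mean((sigmoid(w * data_y + b) > 0.5) != (data_bits > 0.5))
print(f"Run-time BER after adapting on {n_pilots} pilots: {ber:.4f}")
```

The sketch mirrors the split the abstract emphasizes: the feedback link is exercised only during meta-training (where transmitted pilot bits act as labels for the decoder), while at run time the receiver adapts from a small number of pilots alone. The paper's full method additionally meta-learns across channels to reduce that pilot count and jointly trains the encoder, which this toy omits.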

Citations (37)
