Robust Generation of Channel Distributions with Diffusion Models

Research output: Contribution to book/conference proceedings › Conference contribution › Contributed › Peer-reviewed

Abstract

Training neural encoders requires a differentiable channel model for backpropagation. This requirement can be bypassed by approximating the channel distribution from pilot signals, commonly with generative adversarial networks (GANs). In this paper, we introduce diffusion models (DMs) for channel generation and propose an efficient training algorithm. Our DMs achieve near-optimal end-to-end symbol error rates (SERs) and, importantly, outperform GANs in the high signal-to-noise ratio (SNR) regime. In this regime, we explore the trade-off between sample quality and sampling speed, and we show that a suitable noise schedule can significantly reduce sampling time at the cost of only a minor increase in SER.
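The paper's training algorithm and noise schedule are not reproduced in this record, but the general idea the abstract describes can be illustrated with a minimal, hypothetical PyTorch sketch: a denoising diffusion model trained on pilot pairs (x, y) to generate received samples y conditioned on transmitted symbols x. All names and hyperparameters here (ChannelDenoiser, T, the linear beta schedule) are assumptions for illustration, not the authors' method.

```python
# Illustrative DDPM-style channel generator (sketch, not the paper's algorithm).
# Channel samples are 2-D real vectors (I/Q components of one complex symbol).
import torch
import torch.nn as nn

T = 100                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ChannelDenoiser(nn.Module):
    """Predicts the noise added to a received sample y_t, conditioned on the
    transmitted symbol x and the diffusion step t (hypothetical architecture)."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )
    def forward(self, y_t, x, t):
        t_emb = t.float().unsqueeze(-1) / T            # simple scalar step embedding
        return self.net(torch.cat([y_t, x, t_emb], dim=-1))

def train_step(model, opt, x, y0):
    """One denoising step on pilot pairs (x, y0): noise y0, predict the noise."""
    t = torch.randint(0, T, (y0.shape[0],))
    eps = torch.randn_like(y0)
    ab = alpha_bars[t].unsqueeze(-1)
    y_t = ab.sqrt() * y0 + (1 - ab).sqrt() * eps       # forward diffusion
    loss = ((model(y_t, x, t) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def sample(model, x):
    """Ancestral sampling: draw channel outputs y for transmitted symbols x."""
    y = torch.randn_like(x)
    for i in reversed(range(T)):
        t = torch.full((x.shape[0],), i, dtype=torch.long)
        eps_hat = model(y, x, t)
        a, ab = alphas[i], alpha_bars[i]
        y = (y - (1 - a) / (1 - ab).sqrt() * eps_hat) / a.sqrt()
        if i > 0:                                      # add noise except at the last step
            y = y + betas[i].sqrt() * torch.randn_like(y)
    return y

# Toy usage: QPSK pilots over a synthetic AWGN channel (illustrative only).
model = ChannelDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 2).sign() / 2 ** 0.5              # random QPSK symbols
y0 = x + 0.1 * torch.randn_like(x)                     # pilot observations
for _ in range(1000):
    train_step(model, opt, x, y0)
y_gen = sample(model, x)                               # generated channel outputs
```

The sample-quality-versus-speed trade-off mentioned in the abstract corresponds to the length and shape of the reverse loop: fewer steps T, or a schedule that concentrates steps where they matter, shortens sampling at the cost of some fidelity.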

Details

Original language: English
Title of host publication: ICC 2024 - IEEE International Conference on Communications
Editors: Matthew Valenti, David Reed, Melissa Torres
Pages: 330-335
Number of pages: 6
ISBN (electronic): 978-1-7281-9054-9
Publication status: Published - 2024
Peer-reviewed: Yes

External IDs

ORCID /0000-0002-1702-9075/work/183166108
Scopus 85202837903
Mendeley 6a697f58-0a0d-3205-a52b-054d9f81a740

Keywords

  • Channel generation, diffusion model, end-to-end learning, generative networks