
Stable optimization of adversarial networks using only cGAN loss

Develop effective optimization strategies that enable stable and efficient training of adversarial networks for time-to-event modeling when using only the conditional GAN loss, without relying on auxiliary supervision losses.


Background

Although the conditional GAN objective has known theoretical convergence guarantees, training remains unstable in practice, especially when the adversarial loss is used alone. In this work, an auxiliary supervision loss is therefore added to facilitate optimization, reflecting the broader difficulty of optimizing adversarial networks.
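
For concreteness, the standard conditional GAN objective (written here in its generic form; the exact formulation used in AdvMIL may differ) is the minimax game

$$
\min_{G}\max_{D}\; \mathcal{L}_{\mathrm{cgan}} =
\mathbb{E}_{(x,t)\sim p_{\mathrm{data}}}\left[\log D(t \mid x)\right]
+ \mathbb{E}_{x\sim p_{\mathrm{data}},\, z\sim p_{z}}\left[\log\left(1 - D(G(z \mid x) \mid x)\right)\right],
$$

where $G$ produces a time-to-event prediction conditioned on the input $x$ and $D$ scores real versus generated pairs under the same conditioning.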

Solving this problem would allow adversarial time-to-event models to be trained effectively without auxiliary supervision, potentially improving generality and reducing dependence on labeled data.
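
As context for the current workaround, the sketch below shows how a generator update might combine the adversarial loss with a weighted auxiliary supervision term. This is a minimal PyTorch-style illustration with hypothetical names (`generator_step`, `lambda_aux`), not the AdvMIL implementation, and it omits the censoring handling that survival models require. Setting `lambda_aux = 0` recovers the pure-cGAN training regime that this open problem targets.

```python
# Illustrative sketch only (assumed PyTorch-style training step, not the AdvMIL code).
import torch
import torch.nn.functional as F

lambda_aux = 10.0  # hypothetical weight balancing adversarial and auxiliary terms

def generator_step(generator, discriminator, optimizer_g, x, t_obs):
    """One generator update: x = input features, t_obs = observed event times."""
    t_fake = generator(x)                                   # generated time-to-event prediction
    d_fake = discriminator(x, t_fake)                       # conditional discriminator logit
    loss_cgan = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))                    # non-saturating generator loss
    loss_aux = F.l1_loss(t_fake, t_obs)                     # auxiliary supervision (regression to labels)
    loss = loss_cgan + lambda_aux * loss_aux                # combined objective used in practice
    optimizer_g.zero_grad()
    loss.backward()
    optimizer_g.step()
    return loss.item()
```

With `lambda_aux = 0` the update relies on the adversarial signal alone, which is the setting reported to be difficult to optimize.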

References

Nevertheless, in network training, we observe that such an adversarial network is very difficult to optimize when using only $\mathcal{L}_{\mathrm{cgan}}$. This problem is still open in adversarial learning \citep{goodfellow2016nips,gui2021areview}.

AdvMIL: Adversarial Multiple Instance Learning for the Survival Analysis on Whole-Slide Images (2212.06515 - Liu et al., 2022) in Section 3.2, Adversarial multiple-instance learning, (3) Network training