Conditioning Convolutional Segmentation Architectures with Non-Imaging Data (1907.12330v1)
Abstract: We compare two conditioning mechanisms, based on concatenation and feature-wise modulation, for integrating non-imaging information into convolutional neural networks for segmentation of anatomical structures. As a proof of concept we provide the distribution of class labels obtained from the ground truth masks, ensuring a strong correlation between the conditioning data and the segmentation maps. We evaluate the methods on the ACDC dataset and show that conditioning with non-imaging data improves the performance of the segmentation networks. We observed that conditioning the U-Net architecture was challenging: no method gave a significant improvement. However, the same architecture without skip connections outperforms the baseline when conditioned with feature-wise modulation, and the relative performance gain increases as the training set size decreases.
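To make the two conditioning mechanisms concrete, the sketch below shows one common way they can be wired into a convolutional segmentation network: concatenation tiles the non-imaging vector over the spatial grid and appends it as extra channels, while feature-wise modulation (FiLM-style) predicts a per-channel scale and shift from the vector and applies it to a feature map. This is a minimal illustration under assumed shapes and module names (`concat_condition`, `FiLM`, `cond_dim`), not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def concat_condition(feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    """Concatenation conditioning: tile the non-imaging vector spatially and
    append it to the feature map as extra channels."""
    b, _, h, w = feat.shape
    tiled = cond.view(b, -1, 1, 1).expand(b, cond.shape[1], h, w)
    return torch.cat([feat, tiled], dim=1)  # (B, C + cond_dim, H, W)


class FiLM(nn.Module):
    """Feature-wise modulation: predict a per-channel scale (gamma) and
    shift (beta) from the conditioning vector and apply them to the feature map."""

    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) conv feature map; cond: (B, cond_dim) non-imaging data
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * feat + beta


if __name__ == "__main__":
    # Example: a 4-dim conditioning vector (e.g. a class-label distribution)
    # modulating a 16-channel feature map.
    feat = torch.randn(2, 16, 32, 32)
    cond = torch.softmax(torch.randn(2, 4), dim=1)
    print(concat_condition(feat, cond).shape)   # torch.Size([2, 20, 32, 32])
    print(FiLM(cond_dim=4, num_channels=16)(feat, cond).shape)  # torch.Size([2, 16, 32, 32])
```

In practice, either operation can be inserted at one or more stages of the encoder or decoder; where it is inserted, and whether skip connections bypass the conditioned features, is exactly the kind of architectural choice the paper evaluates.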