VessQC: Uncertainty-Guided 3D Curation
- VessQC is an open-source, model-agnostic tool that integrates voxel-level and topology-aware uncertainty quantification for targeted curation of large-scale 3D microscopy segmentations.
- It employs a napari-based interface featuring volume browsing, branch extraction, and editing tools to efficiently identify and correct segmentation errors.
- User studies show that VessQC achieves a 1.40× increase in error detection recall with minimal additional curation time, enhancing the reliability of biomedical imaging analyses.
VessQC is an open-source, model-agnostic tool for uncertainty-guided manual curation of large-scale 3D microscopy image segmentations, particularly designed to address error-prone outputs from state-of-the-art deep learning models in biomedical imaging. Integrating voxel-level and topology-oriented uncertainty quantification directly into a napari-based visualization and editing platform, VessQC enables targeted, efficient human-in-the-loop segmentation refinement, significantly improving error detection recall with minimal overhead in total curation time (Püttmann et al., 27 Nov 2025).
1. Motivation and Goals
Three-dimensional microscopy datasets, especially those visualizing vascular networks, present substantial challenges for automated segmentation due to inherent variability in image contrast, sample morphology, and acquisition artifacts. Even advanced deep learning models frequently yield segmentation errors such as false merges (erroneous joining of distinct structures) or erroneous breaks (artificial fragmentation), undermining reproducible downstream analyses and biological interpretation. Manual annotation remains the gold standard for error correction and training data creation but is prohibitively labor-intensive for large 3D volumes. VessQC addresses this bottleneck by integrating quantitative uncertainty maps to direct expert attention to regions most likely to contain substantive errors, thereby bridging the gap between model-driven uncertainty quantification and practical, scalable human curation.
2. Uncertainty Estimation Integration
VessQC does not implement new prediction models, but serves as a consumer and visualizer of externally computed uncertainty maps. It supports two principal uncertainty estimation classes:
- Pixel-wise (Predictive) Uncertainty: Standard approaches such as Monte Carlo dropout or ensemble inference at test time are used to generate sets of per-voxel class probabilities. From these, predictive mean, entropy, mutual information (model vs. data uncertainty), and variance are computed as follows:
- Predictive mean for class $c$ at voxel $x$: $\bar{p}_c(x) = \frac{1}{T}\sum_{t=1}^{T} p_c^{(t)}(x)$, where $T$ is the number of stochastic forward passes.
- Entropy: $H(x) = -\sum_c \bar{p}_c(x)\,\log \bar{p}_c(x)$.
- Mutual Information: $\mathrm{MI}(x) = H(x) - \frac{1}{T}\sum_{t=1}^{T}\left[-\sum_c p_c^{(t)}(x)\,\log p_c^{(t)}(x)\right]$.
- Variance (foreground): $\mathrm{Var}(x) = \frac{1}{T}\sum_{t=1}^{T}\left(p_{\mathrm{fg}}^{(t)}(x) - \bar{p}_{\mathrm{fg}}(x)\right)^2$.
- Topology-Aware Uncertainty: Building on connectivity-perturbation methodologies (notably Gupta et al., NeurIPS 2023), this metric quantifies the impact of local perturbations on the global graph topology of the foreground segmentation, producing a scalar field that highlights voxels where edits may induce significant splits or merges.
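The pixel-wise quantities above can be computed in a few lines of NumPy from a stack of stochastic softmax outputs. This is a minimal sketch, not VessQC's code; the `(T, C, Z, Y, X)` array layout and the choice of class index 1 as foreground are assumptions.

```python
import numpy as np

def pixelwise_uncertainty(probs, eps=1e-12):
    """Per-voxel uncertainty maps from an ensemble of softmax outputs.

    probs: array of shape (T, C, Z, Y, X) holding T stochastic forward
    passes (MC dropout samples or ensemble members) over C classes.
    Returns predictive mean, entropy, mutual information, and the
    variance of the assumed foreground class (index 1).
    """
    mean = probs.mean(axis=0)                                      # (C, Z, Y, X)
    entropy = -(mean * np.log(mean + eps)).sum(axis=0)             # predictive entropy H
    per_pass = -(probs * np.log(probs + eps)).sum(axis=1)          # (T, Z, Y, X)
    mutual_info = entropy - per_pass.mean(axis=0)                  # model (epistemic) part
    variance = probs[:, 1].var(axis=0)                             # foreground variance
    return {"mean": mean, "entropy": entropy,
            "mutual_information": mutual_info, "variance": variance}
```

Mutual information is non-negative up to the `eps` smoothing, since the entropy of the mean bounds the mean per-pass entropy from above.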
Both uncertainty volumes are pre-computed via external scripts and loaded into VessQC for visualization and spatial prioritization.
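The connectivity-perturbation idea behind the topology-aware map can be illustrated with a toy sketch: invert small patches of the foreground and record how the global connected-component count reacts. This is only an illustration of the principle, not the Gupta et al. algorithm or VessQC's precomputation script; patch size and the use of 6-connectivity via `scipy.ndimage.label` defaults are assumptions.

```python
import numpy as np
from scipy import ndimage

def topology_sensitivity(seg, patch=5):
    """Toy topology-aware uncertainty for a binary 3D segmentation.

    For each non-overlapping patch, flip the voxels inside it and record
    the absolute change in the number of connected components. High values
    mark regions where local edits would split or merge global structures.
    """
    base = ndimage.label(seg)[1]                  # component count of the prediction
    sens = np.zeros(seg.shape, dtype=np.float32)
    for z in range(0, seg.shape[0], patch):
        for y in range(0, seg.shape[1], patch):
            for x in range(0, seg.shape[2], patch):
                sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
                perturbed = seg.copy()
                perturbed[sl] = 1 - perturbed[sl]  # invert the patch
                sens[sl] = abs(ndimage.label(perturbed)[1] - base)
    return sens
```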
3. Software Architecture and User Interface
VessQC is implemented as a Python napari plugin, leveraging its N-dimensional rendering and widget system. The design is model-agnostic, requiring only three files per volume: the raw image (e.g., TIFF/OME-TIFF), the predicted segmentation mask (binary/integer volume), and at least one uncertainty map (float32 volume).
The main user interface is composed of four modules:
- Volume Browser: Tracks all currently loaded volumes, displays summary statistics (such as maximum uncertainty per spatial component), and supports sorting/ranking based on uncertainty.
- Branch Extractor: Identifies connected components ("branches") in the uncertainty map exceeding a user-set threshold, enabling sequential inspection.
- Visualization Panel: Offers both 2D slice and 3D rendering modes, compositing raw grayscale image data, red segmentation overlays, and semi-transparent yellow-to-red uncertainty heatmaps.
- Editing Tools: Integrates napari’s painting, erasing, and thresholding utilities allowing direct segmentation correction. A commit action (“Done” button) saves edits and advances to the next high-uncertainty region.
Immediate, responsive feedback is provided during editing, with uncertainty overlays optionally recomputed after each correction.
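The Branch Extractor's threshold-then-rank behavior can be approximated with `scipy.ndimage`. The function below is a plausible sketch under stated assumptions (ranking by peak uncertainty, default connectivity); VessQC's actual implementation and field names may differ.

```python
import numpy as np
from scipy import ndimage

def extract_branches(uncertainty, threshold):
    """Connected components ("branches") of the thresholded uncertainty map,
    ranked by peak uncertainty so the most suspect region is reviewed first."""
    labels, n = ndimage.label(uncertainty > threshold)
    branches = []
    for lbl in range(1, n + 1):
        mask = labels == lbl
        branches.append({
            "label": lbl,
            "size": int(mask.sum()),                       # voxels in the branch
            "max_uncertainty": float(uncertainty[mask].max()),
            "centroid": ndimage.center_of_mass(mask),      # for camera centering
        })
    branches.sort(key=lambda b: b["max_uncertainty"], reverse=True)
    return branches
```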
4. Quantitative Performance and User Study
A preliminary controlled user study employing 100³-voxel light-sheet microscopy volumes (CD31-labeled mouse brain vasculature) compared manual and VessQC-guided curation:
| Metric | Manual | VessQC-guided |
|---|---|---|
| Error detection recall | baseline | ≈1.40× higher |
| Curation time per volume | baseline | slight, non-significant increase |
| Time per correction | comparable | comparable |
| False-positive edits | not reported | none observed |
VessQC-guided curation yielded an approximately 1.40× increase in error detection recall with only a non-significant increase in total curation time per volume. The nearly identical time-per-correction values indicate that the extra overhead is attributable to a greater number of found errors, not slower editing. No false-positive corrections were observed in the uncertainty-guided workflow. These results underscore the system’s effectiveness for scalable human-in-the-loop correction with minimal risk of over-correction (Püttmann et al., 27 Nov 2025).
5. Human-in-the-Loop Correction Workflow
The VessQC curation protocol consists of the following sequence:
- Load the raw 3D volume, predicted segmentation mask, and corresponding uncertainty map(s).
- VessQC automatically clusters and ranks connected uncertain regions (“branches”) for prioritized review.
- The user sequentially inspects each branch using both 2D and 3D visualization modes, comparing candidate segmentation against the native image.
- Standard editing tools are used to rectify errors—removing falsely merged regions or completing missing structures.
- On committing each correction, VessQC advances to the next branch. Corrected segmentations can be exported for downstream quantitative analysis (e.g., morphometrics) or re-used in an iterative model retraining pipeline to further improve model accuracy.
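The review loop above reduces to a simple queue over ranked branches. The sketch below is schematic: `review_fn` stands in for the interactive napari inspection and editing step, and the branch/edit record format is hypothetical.

```python
def curation_session(branches, review_fn):
    """Drive the human-in-the-loop protocol: present branches in ranked order,
    collect the expert's edits, and record which regions were corrected.

    review_fn(branch) returns an edit description, or None if the user
    accepts the prediction for that branch unchanged.
    """
    corrections = []
    for branch in branches:            # highest-uncertainty branch first
        edit = review_fn(branch)       # user inspects in 2D/3D and edits
        if edit is not None:
            corrections.append({"branch": branch["label"], "edit": edit})
        # the "Done" commit advances automatically to the next branch
    return corrections
```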
6. Supported Data Formats and Practical Usage
VessQC supports 3D volumes in TIFF, OME-TIFF, or any file format supported by tifffile, with segmentation masks as integer/binary images and uncertainty maps as single-channel float32 volumes. The system is compatible with Python (≥ 3.8), napari (≥ 0.4.x), and standard scientific Python dependencies. Interactive curation has been tested on large volumes on typical desktop-class hardware. Both branch extraction and region ranking scale linearly with voxel count, and recommended practice for very large volumes includes pre-cropping or chunked data processing.
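The three-volume input contract described above can be checked up front before loading data into the viewer. This validation helper is an illustrative sketch (the function name and error messages are not from VessQC):

```python
import numpy as np

def validate_inputs(image, mask, uncertainty):
    """Check the per-volume input contract: matching 3D shapes, an
    integer/binary segmentation mask, and a float32 uncertainty map."""
    if not (image.shape == mask.shape == uncertainty.shape):
        raise ValueError(
            f"shape mismatch: {image.shape}, {mask.shape}, {uncertainty.shape}")
    if image.ndim != 3:
        raise ValueError("expected 3D volumes")
    if not np.issubdtype(mask.dtype, np.integer):
        raise ValueError("segmentation mask must be an integer/binary volume")
    if uncertainty.dtype != np.float32:
        raise ValueError("uncertainty map must be a float32 volume")
    return True
```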
Typical installation venues include PyPI, napari hub, and direct git-based source installs. Supplementary scripts are available for model training and uncertainty computation.
7. Broader Significance and Extensibility
By operationalizing established uncertainty assessment techniques within an interactive, extensible visualization and editing framework, VessQC represents a practical advancement in the pipeline for high-fidelity volumetric bioimage analysis. The model-agnostic design, simple I/O specification, and integration with open tools such as napari ensure applicability across diverse datasets and imaging modalities. A plausible implication is that the methodology underlying VessQC can be adapted beyond vascular segmentation to any volumetric domain in which uncertainty-guided human curation is required for generating or verifying high-quality annotation data (Püttmann et al., 27 Nov 2025).