Vitessce Visualizations
- Vitessce Visualizations are interactive tools that enable coordinated exploration of single-cell and spatial omics data, integrating 2D, 3D, and mixed reality views.
- They employ a unified declarative JSON schema to streamline data ingestion, state coordination, and cross-panel interactions for dynamic analysis.
- Integrations like EasyVitessce and Vitessce Link automate the transition from static plots and enable synchronized multi-device, mixed reality exploration.
Vitessce Visualizations enable highly interactive, linked exploration of single-cell and spatial omics data in both two-dimensional (2D) and three-dimensional (3D) views, spanning use cases from computational notebook environments to fully synchronized mixed-reality multi-user analysis. Rooted in an open-source JavaScript core, Vitessce (Visual Integration Tool for Exploration of Spatial Single-Cell Experiments) abstracts complex data ingestion, coordination, and rendering into a unified declarative JSON schema. New developments such as EasyVitessce and Vitessce Link extend this paradigm: EasyVitessce automates the transition from static to interactive plots within the Python Scverse ecosystem, while Vitessce Link adds mixed reality and cross-device synchronization for web-native analysis of 3D tissue maps (Luo et al., 22 Oct 2025, Mörth et al., 6 Nov 2025). The following sections detail foundational design principles, technical architecture, software integration, interactivity mechanisms, extensibility, performance, and application scenarios.
1. Core Architecture and Data Coordination
Vitessce is architected around a three-part JSON configuration specifying datasets, coordinationSpace, and layout. Each dataset describes a unique object (AnnData, MuData, OME-Zarr, segmentations), formalized by a unique identifier and a set of file descriptors (e.g., CSV, .zarr, glTF mesh, or in-memory data). The coordinationSpace defines shared UI state variables—such as brush selections, spatial zoom, cell set membership, or selected gene—mapping declared keys (e.g., "cellSelection," "embeddingZoom") to runtime values. The layout block is an ordered specification of one or more views; each view is a React component (such as "scatterplot," "spatial," "dotplot," or "volume") with grid position, coordinationScopes (linking the view to state variables or dataset entries), and view-specific properties like color palette or coordinate mapping.
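The three-part configuration described above can be sketched as a Python dictionary (a schematic example, not the complete schema; the file URL and the scope name "A" are illustrative):

```python
# Schematic Vitessce configuration: datasets, coordinationSpace, layout.
# The URL and scope names are illustrative placeholders.
config = {
    "datasets": [
        {
            "uid": "ds-1",
            "files": [
                {"fileType": "anndata.zarr", "url": "https://example.org/pbmc.zarr"},
            ],
        }
    ],
    "coordinationSpace": {
        # Shared UI state: each key maps named scopes to runtime values.
        "embeddingZoom": {"A": 3.0},
        "cellSelection": {"A": None},
    },
    "layout": [
        # Two views linked through the same coordination scopes: brushing
        # in the scatterplot updates "cellSelection" for the spatial view too.
        {
            "component": "scatterplot",
            "coordinationScopes": {"embeddingZoom": "A", "cellSelection": "A"},
            "x": 0, "y": 0, "w": 6, "h": 6,
        },
        {
            "component": "spatial",
            "coordinationScopes": {"cellSelection": "A"},
            "x": 6, "y": 0, "w": 6, "h": 6,
        },
    ],
}
```

Because the two views share the "cellSelection" scope "A", a selection made in either panel is written to one place in the coordination space and read by both.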
When rendered, Vitessce's React components read this configuration, fetch the required data files, and display each panel, automatically enabling brushing, selection, zoom/pan, and metadata synchronization across views that share coordination keys. For example, brushing a UMAP cluster in a scatterplot highlights the same selection in a spatial panel; changing the selected gene in a dot plot automatically recolors corresponding expression panels.
This architecture supports interactive linked panels across diverse data modalities (cells, genes, spatial images, segmentations) and domains—single-cell transcriptomics, imaging mass cytometry, lightsheet microscopy, and spatial omics. All operations (from selection to measurement to metadata filtering) update live across coordinated panels with low latency due to the shared state model and efficient data streaming.
2. Integration with Scverse and EasyVitessce
EasyVitessce introduces full automation of Vitessce widget generation for existing Python Scverse plotting environments, particularly Scanpy (sc.pl.*) and SpatialData-Plot (sdata.pl.*). At import, EasyVitessce performs a monkey-patch that redirects supported plotting calls to routines producing a live Vitessce widget instead of a static matplotlib or napari figure. This mapping abstracts all lower-level JSON configuration from the user: each call constructs dataset and view entries, updates relevant coordinationSpace keys, and renders the result as an ipywidget wrapper for the Vitessce React component.
Supported Scanpy plots include scatter plots (PCA, UMAP, t-SNE), dotplot, matrixplot, stacked_violin, violin, heatmap, and rank_genes_groups, as well as key SpatialData-Plot functions (render_shapes, render_points, render_images, render_labels). No changes are required to user code, aside from an enabling import. This enables seamless migration from static plots to interactive panels with features such as brushing-and-linking, dynamic tooltips, pan/zoom, and coordinated gene list selection—without any direct manipulation of configuration files.
Interactive configurations can be exported (e.g., widget.config.to_json()) for reuse in standalone web applications, React dashboards, or RShiny apps, ensuring cross-environment portability.
3. 2D, 3D, and Mixed Reality Visualization
Vitessce supports an extensible collection of 2D and 3D visualization components. The 2D suite encompasses spatial feature maps, heatmaps, scatterplots, bar charts, and data tables, each coordinated through shared state for immediate provenance tracking and cross-filtering. For 3D and mixed reality, the Vitessce Link framework augments the React dashboard with a WebXR-enabled stereoscopic view using Three.js, React Three Fiber, and custom GLSL shaders.
Volumetric data (e.g., multi-resolution OME-Zarr pyramids) and mesh segmentations are streamed and lazily loaded as Three.js textures and geometries, respectively. In mixed reality, users can navigate and manipulate volumes and meshes with hand gestures (pinch-to-translate/rotate, two-hand zoom, point-and-select, and measurement), with each interaction emitting a WebSocket message that synchronizes state bidirectionally across all client views (desktop, headset, notebook).
A unified coordination protocol ensures that selections, camera states, thresholds, and annotations remain in sync in real time—across multiple devices and even mixed-mode sessions (e.g., collaborative review between MR and 2D dashboard users).
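The shape of such a synchronization message can be sketched as follows (field names are illustrative; the actual Vitessce Link wire format may differ):

```python
import json

# Hypothetical coordination message emitted after a pinch-to-rotate gesture;
# every connected client (desktop, headset, notebook) applies the same update.
message = {
    "type": "coordination/update",
    "sessionId": "demo-session",
    "updates": {
        "spatialRotation": [0.0, 45.0, 0.0],  # degrees about x/y/z
        "spatialZoom": 1.5,
    },
}

payload = json.dumps(message)   # serialized and sent over the WebSocket
received = json.loads(payload)  # decoded and applied by each peer
```

Because updates target coordination keys rather than device-specific state, a headset gesture and a desktop slider can drive the same scope interchangeably.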
4. User Interaction, Analytics, and Quantitative Features
Interaction techniques extend from simple selection to complex quantitative measurements. In mixed reality, hand/gesture controls enable translation, rotation, zoom, hover, selection, and direct spatial measurement. Measurements such as inter-object Euclidean distance are computed via

$$d(\mathbf{p}, \mathbf{q}) = \lVert \mathbf{p} - \mathbf{q} \rVert_2 = \sqrt{\sum_{i=1}^{3} (p_i - q_i)^2},$$

with events immediately emitting measurement messages and populating quantitative tables in 2D views (e.g., listing source and destination entities and distances in microns).
Segmentation overlap is assessed via the Dice coefficient:

$$\mathrm{DSC}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|},$$
and clustering quality via the silhouette score:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}},$$

where $a(i)$ is the mean intra-cluster distance for point $i$ and $b(i)$ its mean distance to the nearest neighboring cluster.
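These three quantities can be computed directly from coordinates, masks, and cluster labels; a minimal NumPy sketch with toy data:

```python
import numpy as np

# Euclidean distance between two picked points (e.g., in microns).
p, q = np.array([0.0, 3.0, 0.0]), np.array([4.0, 0.0, 0.0])
dist = float(np.linalg.norm(p - q))  # sqrt(16 + 9) = 5.0

# Dice coefficient between two binary segmentation masks.
a = np.array([[1, 1, 0], [1, 0, 0]], dtype=bool)
b = np.array([[1, 0, 0], [1, 1, 0]], dtype=bool)
dice = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())  # 2*2/(3+3)

# Mean silhouette score over all points (pairwise-distance definition).
def silhouette(X, labels):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        a_i = D[i, same].mean()              # mean intra-cluster distance
        b_i = min(D[i, labels == c].mean()   # nearest other cluster
                  for c in set(labels) if c != labels[i])
        scores.append((b_i - a_i) / max(a_i, b_i))
    return float(np.mean(scores))

X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]], dtype=float)
labels = np.array([0, 0, 1, 1])
sil = silhouette(X, labels)  # close to 1 for well-separated clusters
```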
Analytical events—including selection, thresholding, and measurements—are transmitted at sub-50 ms latencies, with all linked panels re-rendered and updated accordingly. The system’s crossfilter pipeline ensures that derived-panel views, such as bar charts or colored scatterplots, respond instantly to filter changes, maintaining both analytic depth and interactivity.
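The crossfilter idea — each UI control contributing an independent boolean mask, with the combined mask driving every linked panel — can be sketched with NumPy (thresholds and column names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000
expression = rng.random(n_cells)         # e.g., selected-gene expression
cluster = rng.integers(0, 4, n_cells)    # cluster labels 0..3

# Each control contributes one mask; combining them is a cheap AND.
mask_gene = expression > 0.5             # expression-threshold slider
mask_cluster = np.isin(cluster, [1, 2])  # cell-set selection
combined = mask_gene & mask_cluster

# Derived panels (bar-chart counts, scatterplot colors) recompute from
# the combined mask on every filter change.
counts_per_cluster = np.bincount(cluster[combined], minlength=4)
```

Because each mask is stored separately, toggling one filter only recomputes that mask and the cheap conjunction, which is what keeps derived panels responsive.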
5. Data Ingestion, Output, and Performance
Vitessce accepts data from standard single-cell and spatial sources: AnnData, MuData, SpatialData, and multi-resolution OME-Zarr pyramids. Segmentations are ingested as meshes (glTF, PLY) or as labeled arrays, and feature tables are accessed via direct HTTP streaming (for .zarr, .csv) or in-memory buffers.
Output formats include in-notebook rendering via ipywidgets (compatible with Jupyter, JupyterLab, Colab, Marimo) and production-grade JSON configuration export for deployment in browser-based frameworks. The viewer runs in any modern browser (desktop or headset), with WebGL2 acceleration for 2D and 3D rendering, and relies on CORS-enabled HTTP endpoints or cloud storage for data streaming.
Performance characteristics include interactive rendering up to 50–100k cells for WebGL scatterplots, stereo 3D rendering at 60 fps (Meta Quest 3), and dashboard filter update latencies on the order of 20–40 ms (depending on data size and client hardware). Lazy loading and multi-resolution downsampling are used for large image volumes (e.g., 200 GB lightsheet OME-Zarr) to optimize memory usage, though multi-million-cell datasets may require chunking or downsampling for full interactivity. EasyVitessce does not currently convert dendrograms or other non-mapped plot types; those continue to render as static figures.
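Level selection for a multi-resolution pyramid can be sketched as follows (a simplified heuristic, not Vitessce's actual loader logic; it assumes each pyramid level halves the resolution of the previous one):

```python
import math

def choose_pyramid_level(full_resolution_px, viewport_px, n_levels):
    """Pick the coarsest pyramid level that still provides at least
    one data pixel per screen pixel, so no detail is visibly lost."""
    ratio = full_resolution_px / viewport_px
    level = int(math.floor(math.log2(ratio))) if ratio > 1 else 0
    return min(level, n_levels - 1)

# A 100k-pixel-wide lightsheet volume viewed in a 1000-pixel viewport
# only needs level 6 (downsampled by 2**6 = 64x), not the full data.
level = choose_pyramid_level(100_000, 1_000, n_levels=8)
```

Fetching only the chunks of the chosen level that intersect the viewport is what makes browsing a 200 GB volume feasible over HTTP.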
6. Extensibility, Licensing, and Application Scenarios
Vitessce and its extensions are MIT-licensed, with open-source code and APIs for further customization. Custom views and visualization layers can be registered via the plugin system, enabling the addition of novel 3D shaders, analytic transforms, or composite layout modules as NPM packages. JSON configuration post-processing supports arbitrary panel composition and appearance tweaking for advanced dashboard authors.
Application contexts span a wide range of spatial omics and biomedical imaging workflows. Notable scenarios include nephrology (e.g., 200 GB lightsheet kidney data, glomerular segmentation overlays) and oncology (e.g., 50-channel 3D CyCIF melanoma imaging) where interactive multi-panel, MR/2D synchronized exploration, segmentation assessment, quantitative measurement, and data-driven insight are required. Embedding options exist for Python (vitessce-jupyter), R (vitessce-R), and direct JavaScript environments, facilitating cross-platform, cross-language adoption.
The use of web standards (WebGL2, WebXR, HTTP-range requests, CORS) ensures broad compatibility and minimizes the requirement for bespoke server infrastructure. The pipeline supports both notebook-centric and production dashboard deployments, with future extensions projected to include R/Seurat integration, multi-panel programmatic API composition, and deeper Scanpy backend optimization for large-scale streaming.
7. Representative Code Snippets and Workflows
Python (EasyVitessce, Jupyter):
```python
import easy_vitessce as ev
ev.enable()  # patch Scanpy plotting calls to emit Vitessce widgets

import scanpy as sc

adata = sc.datasets.pbmc3k()
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.louvain(adata)  # compute the clusters referenced below
sc.pl.umap(adata, color="louvain")  # renders an interactive widget
```
Exporting the interactive configuration for reuse:

```python
widget = sc.pl.umap(adata, color="louvain", return_widget=True)
cfg = widget.config
with open("pbmc_umap_vitessce.json", "w") as out:
    out.write(cfg.to_json())
```
Building a configuration manually with the vitessce Python package (data URL elided):

```python
from vitessce import VitessceConfig, ViewType as vt

vc = VitessceConfig(schema_version="1.0.16", name="Kidney lightsheet")
dataset = vc.add_dataset(name="kidney_ls").add_file(
    url="...",  # multi-resolution OME-Zarr image pyramid
    file_type="image.ome-zarr",
)
spatial = vc.add_view(vt.SPATIAL, dataset=dataset)
controller = vc.add_view(vt.LAYER_CONTROLLER, dataset=dataset)
vc.layout(spatial | controller)  # side-by-side grid placement
vc.widget(height=500)
```
Table: Major Components and Coordination
| Component | Data Types Supported | Coordination Example |
|---|---|---|
| Scatterplot | AnnData, MuData (embeddings) | cellSelection, embeddingZoom |
| Spatial Feature Map | OME-Zarr, label images, meshes | spatialZoom, cellSet |
| Volume (3D) | OME-Zarr pyramids, meshes | camera, measurement |
| Dotplot/Heatmap | AnnData, SpatialData (var x obs) | gene, cellSet, colorScaling |
| Table | Metadata, segmentation attributes | row selection, measurement |
The infrastructure rendered by Vitessce and its EasyVitessce and Link extensions provides a comprehensive, performant, web-native platform for interactive, reproducible analysis of high-dimensional single-cell and spatial omics data, unifying static and interactive, 2D and 3D, and desktop and immersive modalities within a rigorously coordinated analytic ecosystem (Luo et al., 22 Oct 2025, Mörth et al., 6 Nov 2025).