
U-Time Model for Sleep Stage Segmentation

Updated 2 February 2026
  • U-Time is a fully convolutional deep learning architecture that segments physiological time-series for sleep stage classification by adapting U-Net with dilated convolutions.
  • The model bypasses recurrent networks by using stacked dilated convolutions and multi-scale pooling to aggregate long-range temporal context efficiently.
  • Empirical evaluations across diverse EEG datasets show that U-Time achieves high F1 scores with consistent performance and minimal hyperparameter tuning.

U-Time is a fully feed-forward deep learning architecture for physiological time-series segmentation, specifically introduced for automated sleep stage classification. Based on a temporal adaptation of the U-Net convolutional architecture, U-Time directly maps multichannel sequential inputs of arbitrary duration to per-segment class label predictions, achieving state-of-the-art results in a robust, non-recurrent framework. This model addresses key limitations of recurrent neural network-based approaches, such as tuning complexity and lack of robustness across datasets, by leveraging stacked dilated convolutions and multi-scale pooling to model long-range temporal dependencies without recurrence (Perslev et al., 2019).

1. Problem Formulation and Data Representation

Given $C$ channels of input (e.g., EEG, EOG, EMG) sampled at rate $S$, U-Time processes temporal windows covering $T$ consecutive segments, each of duration $i = S/e$ samples, where $e$ is the chosen segmentation frequency (e.g., $e = 1/30$ Hz for 30 s sleep-staging windows). The raw input $\mathbf{x} \in \mathbb{R}^{T \times i \times C}$ is equivalently represented as $C$ channels of a 1D signal of length $t = T \cdot i$.

The mapping $f(\cdot;\theta)$ produces class-confidence scores for $K$ stages per segment: $$f(\mathbf{x};\theta): \mathbb{R}^{T \times i \times C} \longrightarrow \mathbb{R}^{T \times K}, \qquad \hat{\mathbf{y}} \in \{1,2,\dots,K\}^T.$$ Internally, dense segmentation is first performed at the original sampling rate (output: $t \times K$ scores), then aggregated by non-overlapping mean-pooling over intervals of $i$ samples per segment. For segment $n$ and class $k$: $$s_{n,k} = \frac{1}{i} \sum_{j=(n-1)i+1}^{ni} \tilde{s}_{j,k}, \qquad n = 1, \dots, T,$$ where $\tilde{s}_{j,k}$ are the pre-pooled decoder outputs. Softmax normalization then produces class probabilities for each segment.
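The mean-pooling and softmax aggregation above can be sketched in NumPy (a minimal illustration under the stated shapes; the function name and the random dense scores are not from the paper):

```python
import numpy as np

def aggregate_segments(dense_scores: np.ndarray, i: int) -> np.ndarray:
    """Mean-pool dense per-sample class scores (t x K) into per-segment
    scores (T x K), then softmax-normalize each segment's scores."""
    t, K = dense_scores.shape
    assert t % i == 0, "signal length must be a multiple of the segment length"
    T = t // i
    # Non-overlapping mean over each block of i samples: s_{n,k}
    s = dense_scores.reshape(T, i, K).mean(axis=1)
    # Softmax over classes for each segment (shifted for numerical stability)
    z = s - s.max(axis=1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

# Example: T=35 segments of i=3,000 samples at 100 Hz, K=5 sleep stages
scores = np.random.randn(35 * 3000, 5)
probs = aggregate_segments(scores, i=3000)
print(probs.shape)  # (35, 5); each row sums to 1
```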

2. Architectural Overview

U-Time is a 1D U-Net variant with an encoder-decoder topology supporting multi-resolution temporal context aggregation via skip connections.

  • Encoder (contracting path): Four downsampling levels; each block comprises two dilated convolutions with kernel size $k=5$ (dilation $d=9$ yields effective kernel $k_\text{eff}=37$), batch normalization and ReLU, followed by max-pooling (window sizes $[10, 8, 6, 4]$). This reduces the temporal length by a cumulative factor of $10 \cdot 8 \cdot 6 \cdot 4 = 1920$.
  • Bottleneck: Following the deepest encoder level, two additional convolutions (same kernel/dilation) generate the deepest feature map.
  • Decoder (expanding path): Four upsampling blocks, each performing nearest-neighbor upsampling (by the pooling window sizes, applied in reverse order), halving channels via convolution, then concatenating with the corresponding encoder feature map (skip connection), followed by two further convolutions per block.
  • Output: A final $1 \times 1$ convolution produces a dense $t \times K$ score map, which is pooled and softmax-normalized per segment.

Typical feature map dimensions, for $C=1$ input and 105,000 samples (35 segments × 3,000 samples at 100 Hz):

  • Level 0: $105,000 \times 16$
  • Level 1: $10,500 \times 32$
  • Level 2: $1,312 \times 64$
  • Level 3: $218 \times 128$
  • Level 4: $54 \times 256$
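The temporal lengths in this table follow directly from the pooling window sizes (a small sketch; `encoder_lengths` is an illustrative helper, not part of the published code, and floor division stands in for max-pooling's truncation of remainders):

```python
def encoder_lengths(t: int, pool_sizes=(10, 8, 6, 4)) -> list:
    """Trace how each max-pooling stage shrinks the temporal axis,
    reproducing the per-level feature-map lengths listed above."""
    lengths = [t]
    for p in pool_sizes:
        t = t // p  # floor division mirrors pooling over incomplete windows
        lengths.append(t)
    return lengths

print(encoder_lengths(105_000))  # [105000, 10500, 1312, 218, 54]
```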

3. Temporal Resolution and Receptive Field

Stacked dilated convolutions and deep pooling yield an extensive temporal receptive field. For kernel size $k=5$ and dilation $d=9$: $$k_{\rm eff} = 1 + (k-1)d = 1 + 4 \cdot 9 = 37.$$ Stacking two such convolutions per encoder block across four levels of pooling allows the model's receptive field to span about 5.5 minutes at 100 Hz (approximately 33,000 samples). This large receptive field is achieved without recurrent connections, enabling the model to aggregate context over minutes of physiological data.

4. Output, Loss Functions, and Optimization

After decoding, per-segment class scores are computed by averaging the dense scores in non-overlapping windows: $$s_{n,k} = \frac{1}{i} \sum_{j=(n-1)i+1}^{ni} \tilde{s}_{j,k}, \qquad n = 1, \ldots, T,$$ followed by softmax normalization over classes: $$p_{n,k} = \frac{\exp(s_{n,k})}{\sum_{k'} \exp(s_{n,k'})},$$ where the dense scores $\tilde{s}_{j,k}$ are logits produced by the final $1 \times 1$ convolution.

Loss functions:

  • Generalized Dice loss (used to mitigate class imbalance), averaging the Dice overlap across classes: $$\mathcal{L}_\mathrm{dice} = 1 - \frac{1}{K} \sum_{k=1}^{K} \frac{2 \sum_n y_{n,k}\, \hat{y}_{n,k}}{\sum_n y_{n,k} + \sum_n \hat{y}_{n,k}}$$
  • Cross-entropy loss (alternative): $$\mathcal{L}_\mathrm{CE}(\theta) = - \sum_{n=1}^{T} \sum_{k=1}^{K} y_{n,k} \log p_{n,k}(\theta),$$ where $y_{n,k}$ is the true one-hot label and $\hat{y}_{n,k} = p_{n,k}$.
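Both losses can be sketched in NumPy for one-hot labels and predicted probabilities of shape $T \times K$ (an illustrative sketch; the function names and the small epsilon guard are assumptions, not from the paper):

```python
import numpy as np

def dice_loss(y: np.ndarray, p: np.ndarray, eps: float = 1e-7) -> float:
    """Class-averaged Dice loss. y: one-hot labels (T x K),
    p: predicted probabilities (T x K)."""
    inter = (y * p).sum(axis=0)              # per-class soft overlap
    denom = y.sum(axis=0) + p.sum(axis=0)    # per-class cardinalities
    return float(1.0 - (2.0 * inter / (denom + eps)).mean())

def cross_entropy(y: np.ndarray, p: np.ndarray, eps: float = 1e-7) -> float:
    """Categorical cross-entropy summed over segments."""
    return float(-(y * np.log(p + eps)).sum())

# A perfect prediction drives the Dice loss toward 0
y = np.eye(5)[np.array([0, 1, 2, 3, 4, 2, 2])]
print(dice_loss(y, y) < 1e-5)  # True
```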

5. Training Strategy and Hyperparameterization

U-Time is trained with the Adam optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, learning rate $\eta = 5 \cdot 10^{-6}$) using a batch size of $B = 12$ windows, each window covering $T = 35$ target segments. Class-balanced sampling ensures that each batch contains at least one window representing each target class. Early stopping is applied with a patience of 150 epochs. No explicit regularization (dropout or weight decay) is used; the network has ≈1.2 million trainable parameters. No dataset-specific hyperparameter tuning was required; the architecture and parameters remained unchanged across all datasets.
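One simple way to realize such class-balanced window sampling is to anchor the first $K$ windows of each batch on randomly chosen occurrences of distinct classes in the hypnogram (a simplified sketch under these assumptions; `sample_balanced_batch` is illustrative and not the paper's exact sampler):

```python
import random

def sample_balanced_batch(labels, T=35, B=12, K=5, rng=random):
    """Sample B window start indices over a hypnogram `labels`
    (one integer stage per 30 s segment) so the batch collectively
    covers every class that occurs in the record."""
    starts = []
    classes = list(range(K))
    rng.shuffle(classes)
    for b in range(B):
        if b < len(classes):
            # Anchor on a random occurrence of a distinct class, if any
            hits = [n for n, y in enumerate(labels) if y == classes[b]]
            anchor = rng.choice(hits) if hits else rng.randrange(len(labels))
        else:
            # Remaining windows are drawn uniformly
            anchor = rng.randrange(len(labels))
        starts.append(max(0, min(anchor - T // 2, len(labels) - T)))
    return starts
```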

6. Empirical Evaluation and Benchmarking

Extensive evaluation was performed across seven EEG datasets:

  • Sleep-EDF-39, Sleep-EDF-153 (R&K, 100 Hz)
  • PhysioNet-2018 (AASM, 200 Hz)
  • DCSM (AASM, 256 Hz)
  • ISRUC (AASM, 200 Hz)
  • CAP (R&K, 100–512 Hz)
  • SVUH-UCD (R&K, 128 Hz)

The primary performance metric was the global per-class F1 (Dice) score. Example results: on Sleep-EDF-39, U-Time achieved F1 = [Wake: 0.87, N1: 0.52, N2: 0.86, N3: 0.84, REM: 0.84], with a mean of 0.79; on ISRUC, mean ≈ 0.77, commensurate with human inter-rater reliability (≈0.80 for that dataset). Multi-channel variants (EEG+EOG, EEG+EOG+EMG) yielded further gains for challenging classes like REM.
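The global per-class F1 metric coincides with the Dice overlap computed from pooled confusion counts, and can be reproduced from integer stage labels (a small sketch; `per_class_f1` is an illustrative helper):

```python
import numpy as np

def per_class_f1(y_true, y_pred, K):
    """Global per-class F1 (equivalently Dice) from integer stage labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for k in range(K):
        tp = np.sum((y_true == k) & (y_pred == k))
        fp = np.sum((y_true != k) & (y_pred == k))
        fn = np.sum((y_true == k) & (y_pred != k))
        scores.append(2 * tp / max(2 * tp + fp + fn, 1))
    return np.array(scores)

# Mean of the per-class F1 values reported above for Sleep-EDF-39
reported = [0.87, 0.52, 0.86, 0.84, 0.84]
print(round(sum(reported) / len(reported), 2))  # 0.79
```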

7. Advantages, Limitations, and Extensions

Advantages

  • Feed-forward, fully convolutional architecture circumvents tuning instability of RNN-based models.
  • Accepts arbitrary-length input: facilitates full-length PSG inference in a single pass.
  • Adaptable output resolution at inference (e.g., per-segment or per-sample labeling).
  • Large receptive field without recurrence, robust context aggregation via pooling and dilation.
  • Consistent hyperparameterization across heterogeneous datasets.

Limitations and Extensions

  • Single-channel models cannot exploit modalities such as EOG or EMG; multi-channel extensions are available.
  • Assumes continuous, fixed-length windowing; application to irregular or discontinuous data would require adaptation.
  • Potential extensions include attention-based fusion, learned (transposed convolution) upsampling, and adversarial domain adaptation.

U-Time thus constitutes a robust, general fully-convolutional solution for physiological time-series segmentation, demonstrating high empirical performance and user-friendly deployment owing to its minimal requirements for architecture/hyperparameter tuning and its scalability to variable input and output granularities (Perslev et al., 2019).
