
SegNSP: Neural Text Segmentation

Updated 14 January 2026
  • SegNSP is a neural approach that reframes linear text segmentation as a next sentence prediction task to detect topic transitions without explicit labels.
  • It leverages a BERT-based encoder and a segmentation-aware loss that integrates focal loss, confidence penalty, and boundary loss to address class imbalance and boundary sparsity.
  • SegNSP achieves superior Boundary F1 scores on public datasets, enhancing downstream tasks like summarization, information retrieval, and question answering.

SegNSP is a neural approach to linear text segmentation in NLP that formulates the segmentation task as a next sentence prediction (NSP) problem. It leverages input representations and learning objectives specifically tailored to identifying segment boundaries, such as topic transitions, without the need for explicit topic labels or taxonomies. SegNSP achieves state-of-the-art results on public English and Portuguese segmentation benchmarks, demonstrating significant improvements over classical and neural baselines and offering robust, label-agnostic performance for segmenting continuous text into coherent, semantically meaningful units (Isidro et al., 7 Jan 2026).

1. Linear Text Segmentation as Next Sentence Prediction

SegNSP approaches linear text segmentation by explicitly modeling sentence-to-sentence continuity using the NSP formalism. Given a document $D = (s_1, s_2, \dots, s_n)$ split into $n$ sentences and a segmentation $G$ consisting of $m$ contiguous segments, a segment boundary is defined to exist between sentences $s_i$ and $s_{i+1}$ if they belong to different segments. For each adjacent sentence pair $(s_i, s_{i+1})$, the model constructs the input representation $[\mathrm{CLS}]\ s_i\ [\mathrm{SEP}]\ s_{i+1}\ [\mathrm{SEP}]$ and encodes it with a pretrained BERT model to obtain $h_{[\mathrm{CLS}]}$.

A linear classification head projects $h_{[\mathrm{CLS}]}$ to two logits, applying softmax to yield the probability distribution $P(y \mid s_i, s_{i+1}) = \mathrm{softmax}(W h_{[\mathrm{CLS}]} + b)$, where $y \in \{\text{is\_next}, \text{not\_next}\}$. During inference, a boundary is predicted at position $i$ if $P(y = \text{not\_next} \mid s_i, s_{i+1}) > \tau$, with threshold $\tau = 0.5$ tuned on validation data (Isidro et al., 7 Jan 2026).
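
The pairwise inference procedure can be sketched as follows. This is a minimal illustration, not the authors' code: `pair_not_next_prob` is a hypothetical stand-in for the fine-tuned BERT classifier, which in the real system scores each $[\mathrm{CLS}]\ s_i\ [\mathrm{SEP}]\ s_{i+1}\ [\mathrm{SEP}]$ input.

```python
def predict_boundaries(sentences, pair_not_next_prob, tau=0.5):
    """Predict segment boundaries between adjacent sentences.

    Returns the indices i at which a boundary is placed between
    sentences[i] and sentences[i + 1], i.e. wherever the model's
    P(not_next) exceeds the threshold tau.
    """
    boundaries = []
    for i in range(len(sentences) - 1):
        if pair_not_next_prob(sentences[i], sentences[i + 1]) > tau:
            boundaries.append(i)
    return boundaries


def segments_from_boundaries(sentences, boundaries):
    """Split the sentence list into contiguous segments at the boundaries."""
    segments, start = [], 0
    for b in boundaries:
        segments.append(sentences[start:b + 1])
        start = b + 1
    segments.append(sentences[start:])
    return segments
```

For example, a scorer that assigns a high $P(\text{not\_next})$ only between the second and third sentence of a four-sentence document yields a single boundary and two contiguous segments.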

2. Label-Agnostic NSP Formulation and Segmentation-Aware Loss

SegNSP uses a label-agnostic variant of next sentence prediction. Each sentence pair $(s_i, s_{i+1})$ receives a positive label ($y_i = 1$) if the next sentence continues the same topic, and a negative label ($y_i = 0$) at a topic boundary. No explicit topic labels or external taxonomies are required, only binary next/boundary information.

The segmentation-aware loss combines three components:

  • Focal loss $\mathcal{L}_{\mathrm{focal}}$ to address class imbalance:

$$\mathcal{L}_{\mathrm{focal}} = -\sum_{i=1}^{N} \alpha_{t_i} (1 - \hat{p}_{i,t_i})^{\gamma} \log \hat{p}_{i,t_i}$$

with $\gamma = 1.5$, $\alpha = 0.8$.

  • Confidence penalty $\mathcal{L}_{\mathrm{conf}}$ to penalize overconfident predictions:

$$\mathcal{L}_{\mathrm{conf}} = -\sum_{i=1}^{N} \sum_{c \in \{0,1\}} \hat{p}_{i,c} \log \hat{p}_{i,c}$$

  • Boundary loss $\mathcal{L}_{\mathrm{bound}}$ to up-weight errors near true boundaries:

$$\mathcal{L}_{\mathrm{bound}} = \sum_{i=1}^{N} w_i \left[ -y_i \log \hat{p}_{i,1} - (1 - y_i) \log \hat{p}_{i,0} \right]$$

where $w_i = 1 + \mathbb{I}[\mathrm{distance}(s_i, s_{i+1}) \leq \delta]$.

The total loss is $\mathcal{L}_{\mathrm{seg}} = \mathcal{L}_{\mathrm{focal}} + \lambda_1 \mathcal{L}_{\mathrm{conf}} + \lambda_2 \mathcal{L}_{\mathrm{bound}}$, with $\lambda_1 = 0.15$, $\lambda_2 = 0.2$. This design targets both the sparsity and the difficulty of boundary events, addressing local discourse phenomena crucial for accurate segmentation (Isidro et al., 7 Jan 2026).
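
The three terms and their combination can be sketched in plain Python. This is a simplified reading, assuming $\alpha$ weights the true class symmetrically and that the per-pair weight $w_i \in \{1, 2\}$ is supplied by the caller; the exact per-class $\alpha_{t_i}$ scheme is not detailed above.

```python
import math


def segmentation_aware_loss(p_hat, y, w, gamma=1.5, alpha=0.8,
                            lam1=0.15, lam2=0.2, eps=1e-12):
    """Sketch of the combined segmentation-aware loss.

    p_hat[i] is the predicted P(y = 1) ("is_next") for pair i,
    y[i] is the binary label, w[i] the boundary weight (1 or 2).
    """
    focal = conf = bound = 0.0
    for p1, yi, wi in zip(p_hat, y, w):
        p0 = 1.0 - p1
        pt = p1 if yi == 1 else p0  # probability of the true class t_i
        # focal loss: down-weights well-classified (high-pt) pairs
        focal += -alpha * (1.0 - pt) ** gamma * math.log(pt + eps)
        # confidence term, as in the L_conf formula above
        conf += -(p1 * math.log(p1 + eps) + p0 * math.log(p0 + eps))
        # boundary-weighted cross-entropy
        bound += wi * (-yi * math.log(p1 + eps)
                       - (1 - yi) * math.log(p0 + eps))
    return focal + lam1 * conf + lam2 * bound
```

As expected from the focal term, a confidently correct pair contributes far less to the loss than a confidently wrong one with the same label.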

3. Hard Negative Sampling

SegNSP mitigates the sparsity of true segment boundaries using an augmentation strategy that introduces challenging negative samples during training. Each mini-batch of $N$ pairs includes:

  • 70% positive (intra-segment) adjacent pairs,
  • 30% negative (inter-segment) adjacent pairs,
  • up to 10 "hard negatives" per document, which are non-adjacent sentence pairs $(s_i, s_j)$ with $|i - j| > 1$.

If $N_{\mathrm{neg}} = 0.3N$, the negatives are split into $N_{\mathrm{hard}} = \min(10, N_{\mathrm{neg}})$ hard negatives drawn from $H(D)$, the set of non-adjacent pairs, and $N_{\mathrm{adj\_neg}} = N_{\mathrm{neg}} - N_{\mathrm{hard}}$ adjacent true negatives. This approach targets discourse cues and topic discontinuities beyond immediate adjacency, increasing robustness to complex topic transitions (Isidro et al., 7 Jan 2026).
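
The batch-composition rule can be sketched as follows. The rounding behavior and the uniform sampling are assumptions for illustration; the source only fixes the 70/30 split and the cap of 10 hard negatives.

```python
import random


def compose_batch(pos_pairs, adj_neg_pairs, hard_neg_pairs, n, seed=0):
    """Compose a mini-batch of n sentence pairs: ~70% intra-segment
    positives and ~30% negatives, where up to 10 of the negatives are
    drawn from H(D), the non-adjacent "hard negative" pairs (|i - j| > 1),
    and the remainder are adjacent inter-segment negatives."""
    rng = random.Random(seed)
    n_neg = int(round(0.3 * n))
    n_hard = min(10, n_neg, len(hard_neg_pairs))
    n_adj_neg = n_neg - n_hard
    n_pos = n - n_neg
    batch = (rng.sample(pos_pairs, n_pos)
             + rng.sample(adj_neg_pairs, n_adj_neg)
             + rng.sample(hard_neg_pairs, n_hard))
    rng.shuffle(batch)
    return batch
```

With $n = 100$ this yields 70 positives, 20 adjacent negatives, and 10 hard negatives per batch.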

4. Model Architecture, Optimization, and Hyperparameters

SegNSP employs a BERT-base encoder (Portuguese cased for CitiLink-Minutes, English uncased for WikiSection), followed by a single linear layer mapping $h_{[\mathrm{CLS}]}$ to $\mathbb{R}^2$ and a softmax for classification. The entire model is fine-tuned with the segmentation-aware loss and uses early stopping based on the validation boundary F1 (B-F1) score.

Key hyperparameters include:

  • Learning rate: $5 \times 10^{-6}$
  • Batch size: 8
  • Focal loss: $\gamma = 1.5$, $\alpha = 0.8$
  • Confidence penalty: $\lambda_1 = 0.15$
  • Boundary loss: $\lambda_2 = 0.2$
  • Maximum epochs: 12 (with early stopping)
  • Boundary decision threshold: $\tau = 0.5$ (Isidro et al., 7 Jan 2026)

5. Evaluation Benchmarks and the Boundary F1 Metric

Performance is evaluated on two datasets:

  • WikiSection_en_city: 19,539 English Wikipedia city articles, with 133,642 annotated segments. Preprocessing involves standard sentence tokenization and selection of the en_city partition.
  • CitiLink-Minutes: 120 Portuguese city council minutes from six municipalities, grouping headings and their textual spans as segments, then sentence-tokenizing the result.

Segmentation accuracy is assessed via the Boundary F1 (B-F1) metric. Defining $B$ as the set of true boundary positions and $\hat{B}$ as the set of predicted boundary positions, precision and recall are:

$$P = \frac{|B \cap \hat{B}|}{|\hat{B}|}, \quad R = \frac{|B \cap \hat{B}|}{|B|}$$

$$\text{B-F}_1 = \frac{2PR}{P + R} = \frac{2|B \cap \hat{B}|}{|B| + |\hat{B}|}$$
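
A direct implementation of the metric, with exact position matching as defined above, is straightforward:

```python
def boundary_f1(true_boundaries, pred_boundaries):
    """Boundary F1 over sets of boundary positions B and B_hat."""
    B, B_hat = set(true_boundaries), set(pred_boundaries)
    if not B or not B_hat:
        return 0.0
    tp = len(B & B_hat)  # correctly predicted boundary positions
    if tp == 0:
        return 0.0
    precision = tp / len(B_hat)
    recall = tp / len(B)
    return 2 * precision * recall / (precision + recall)
```

For instance, with true boundaries {2, 5, 9} and predictions {2, 5, 7}, precision and recall are both 2/3, so B-F1 = 2/3, matching the set-based form $2|B \cap \hat{B}| / (|B| + |\hat{B}|) = 4/6$.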

6. Experimental Results and Comparative Analysis

SegNSP demonstrates substantial improvements over both classical and neural segmentation baselines. The following table summarizes B-F1 scores:

| Model | CitiLink-Minutes B-F1 | WikiSection B-F1 |
|---|---|---|
| TextTiling | 0.15 | 0.09 |
| Att+CNN | 0.34 | 0.14 |
| TopSeg | 0.42 | 0.48 |
| LumberChunker (LLM) | 0.10 | 0.42 |
| SegNSP | 0.79 | 0.65 |

  • On CitiLink-Minutes, SegNSP achieves B-F1 = 0.79, outperforming TopSeg by +0.37.
  • On WikiSection, SegNSP achieves B-F1 = 0.65, outperforming TopSeg by +0.17.
  • Additional metrics: for CitiLink-Minutes, $P_k = 0.08$, WD = 0.10, B = 0.59; for WikiSection, $P_k = 0.14$, WD = 0.18, B = 0.47.
  • Statistical significance is established with a paired bootstrap test, $p < 0.01$ against TopSeg on both datasets.
  • Cross-municipality generalization (CitiLink leave-one-out) yields B-F1 between 0.24 and 0.77 depending on locality, indicating both robustness and some sensitivity to stylistic variance (Isidro et al., 7 Jan 2026).

7. Implications for Downstream NLP Tasks

SegNSP enhances downstream task performance through high-precision segment boundary induction:

  • Summarization: Precise boundaries yield coherent segments, reducing topic drift and facilitating passage-level abstraction.
  • Information Retrieval: Segment-level retrieval units allow for finer indexing, improving passage recall in retrieval-augmented generation pipelines.
  • Question Answering: Segmented contexts decrease noise in retrieval and generation, leading to more accurate response extraction.

Overall, SegNSP provides a lightweight, label-agnostic, and cross-domain segmentation mechanism suited for diverse NLP pipelines and tasks requiring structured document representations (Isidro et al., 7 Jan 2026).
