
S2 Chunking: A Hybrid Framework for Document Segmentation Through Integrated Spatial and Semantic Analysis (2501.05485v1)

Published 8 Jan 2025 in cs.CL, cs.IR, and cs.LG

Abstract: Document chunking is a critical task in NLP that involves dividing a document into meaningful segments. Traditional methods often rely solely on semantic analysis, ignoring the spatial layout of elements, which is crucial for understanding relationships in complex documents. This paper introduces a novel hybrid approach that combines layout structure, semantic analysis, and spatial relationships to enhance the cohesion and accuracy of document chunks. By leveraging bounding box information (bbox) and text embeddings, our method constructs a weighted graph representation of document elements, which is then clustered using spectral clustering. Experimental results demonstrate that this approach outperforms traditional methods, particularly in documents with diverse layouts such as reports, articles, and multi-column designs. The proposed method also ensures that no chunk exceeds a specified token length, making it suitable for use cases where token limits are critical (e.g., LLMs with input size limitations).

Summary

  • The paper introduces S2 Chunking, a hybrid framework that combines spatial (layout) and semantic (text embedding) analysis using graph clustering for improved document segmentation.
  • Results show S2 Chunking achieves higher cohesion (0.92) and layout consistency (0.88) compared to semantic-only methods (cohesion ~0.8, layout ~0.5) and other baselines on test datasets.
  • This framework is suitable for large language model applications, such as retrieval-augmented generation, by producing semantically and spatially coherent chunks that respect token constraints.

The paper introduces a hybrid framework, termed S2 Chunking, for document segmentation, integrating spatial and semantic analyses to enhance the cohesion and accuracy of document chunks. The approach addresses the limitations of traditional methods that often rely solely on semantic analysis, overlooking the importance of spatial layout in understanding relationships within complex documents.

The core innovation lies in leveraging bounding box (bbox) information and text embeddings to construct a weighted graph representation of document elements, which is then clustered using spectral clustering. This method ensures that chunks are both semantically coherent and spatially consistent. The framework also incorporates a dynamic clustering mechanism that respects token length constraints, making it suitable for applications with input size limitations, such as LLMs for retrieval-augmented generation (RAG).
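
The token-length constraint can be enforced as a post-processing step on whatever clusters the graph stage produces. Below is a minimal sketch of that idea; the greedy re-splitting in reading order and the generic `count_tokens` callable are assumptions for illustration, not details taken from the paper:

```python
def enforce_token_limit(chunks, count_tokens, max_tokens=512):
    """Greedily re-split any chunk whose token count exceeds max_tokens.

    chunks: list of chunks, each a list of text elements in reading order.
    count_tokens: callable mapping a string to its token count (assumed interface).
    """
    bounded = []
    for chunk in chunks:
        current, current_len = [], 0
        for element in chunk:
            n = count_tokens(element)
            # Start a new chunk if adding this element would exceed the budget.
            if current and current_len + n > max_tokens:
                bounded.append(current)
                current, current_len = [], 0
            current.append(element)
            current_len += n
        if current:
            bounded.append(current)
    return bounded
```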

The paper discusses several existing document chunking methods; a code sketch of these baselines follows the list:

  • Fixed-Size Chunking: This simple method divides text into chunks of a predefined size s, without considering the content or structure. The set of chunks C is defined as:

    C = \{ T[i \cdot s : (i+1) \cdot s] \mid i = 0, 1, \dots, \lfloor \frac{|T|}{s} \rfloor \}

    where |T| represents the total length of the text. An overlap parameter o can be introduced to create overlapping chunks:

    C = \{ T[i \cdot (s - o) : i \cdot (s - o) + s] \mid i = 0, 1, \dots, \lfloor \frac{|T| - s}{s - o} \rfloor \}

    • C: Set of chunks
    • T: Input text
    • s: Predefined chunk size
    • |T|: Total length of text
    • i: Chunk index
    • o: Overlap parameter
  • Recursive Chunking: This method divides text hierarchically using a set of separators S = \{ s_1, s_2, \dots, s_n \}. The recursive chunking process is defined as:

    C = \text{RecursiveSplit}(T, S)

    where:

    \text{RecursiveSplit}(T, S) = \begin{cases} \{ T \} & \text{if } |T| \leq s \\ \bigcup_{s_i \in S} \text{RecursiveSplit}(T_k, S) & \text{otherwise} \end{cases}

    • C: Set of chunks
    • T: Input text
    • S: Set of separators
    • s_i: Separator
    • T_k: Substrings obtained by splitting T using the separator s_i
  • Semantic Chunking: This method uses text embeddings to group semantically related content. The similarity between two embeddings e_i and e_j is computed using cosine similarity:

    \text{sim}(e_i, e_j) = \frac{e_i \cdot e_j}{\|e_i\| \|e_j\|}

    The chunking process is defined as:

    C = \{ T_k \mid \text{sim}(E(T_k), E(T_{k+1})) \geq \tau \}

    • e_i, e_j: Embeddings of two text segments
    • \text{sim}(e_i, e_j): Similarity between embeddings e_i and e_j
    • E: Embedding function
    • T_k: Text segment
    • \tau: Similarity threshold

The methodology involves region detection and layout ordering, followed by graph construction, weight calculation, and clustering. The document is represented as a graph G = (V, E), where V is the set of nodes corresponding to document elements and E is the set of edges representing relationships between these elements. Edge weights are calculated using a combination of spatial and semantic information. Spatial weights are calculated using the Euclidean distance between bounding box centroids:

w_{\text{spatial}}(i, j) = \frac{1}{1 + d(i, j)}

  • w_{\text{spatial}}(i, j): Spatial weight between elements i and j
  • d(i, j): Euclidean distance between the centroids of elements i and j

Semantic weights are computed using text embeddings from a pre-trained LLM:

w_{\text{semantic}}(i, j) = \text{cosine\_similarity}(\text{embedding}(i), \text{embedding}(j))

  • w_{\text{semantic}}(i, j): Semantic weight between elements i and j

The final edge weights are the average of spatial and semantic weights:

w_{\text{combined}}(i, j) = \frac{w_{\text{spatial}}(i, j) + w_{\text{semantic}}(i, j)}{2}

  • w_{\text{combined}}(i, j): Combined weight between elements i and j

The graph is then partitioned into cohesive chunks using spectral clustering.
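
A minimal sketch of this graph construction and clustering step is shown below, assuming each element carries an [x1, y1, x2, y2] bounding box and a text embedding. Treating the number of clusters as a fixed parameter is a simplification; the paper determines chunk boundaries dynamically under the token budget:

```python
import numpy as np
from sklearn.cluster import SpectralClustering


def chunk_by_graph(bboxes, embeddings, n_clusters):
    """Cluster document elements using combined spatial + semantic edge weights.

    bboxes: (n, 4) array of [x1, y1, x2, y2] boxes.
    embeddings: (n, d) array of text embeddings.
    """
    bboxes = np.asarray(bboxes, dtype=float)
    embeddings = np.asarray(embeddings, dtype=float)

    # Spatial weights: w = 1 / (1 + Euclidean distance between bbox centroids).
    centroids = np.stack([(bboxes[:, 0] + bboxes[:, 2]) / 2,
                          (bboxes[:, 1] + bboxes[:, 3]) / 2], axis=1)
    dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    w_spatial = 1.0 / (1.0 + dists)

    # Semantic weights: cosine similarity between text embeddings,
    # clipped to [0, 1] so the affinity matrix stays non-negative.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w_semantic = np.clip(normed @ normed.T, 0.0, 1.0)

    # Combined weights: simple average of the two matrices.
    w_combined = (w_spatial + w_semantic) / 2.0

    # Spectral clustering on the precomputed affinity matrix.
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(w_combined)
    return labels
```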

The authors evaluated the approach on datasets from PubMed and arXiv, selected for their diversity in content, layout, and domain-specific complexity. The performance metrics include:

  • Cohesion Score: Measures the semantic coherence of chunks using the average pairwise cosine similarity of text embeddings within each chunk.
  • Layout Consistency Score: Measures the spatial consistency of chunks using the average pairwise proximity of bounding boxes within each chunk.
  • Purity: Measures how well chunks align with ground truth categories.
  • Normalized Mutual Information (NMI): Measures the agreement between chunking results and ground truth labels.
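
The summary does not reproduce the exact metric formulas; the sketch below is one plausible reading of the cohesion score as the average pairwise cosine similarity within each chunk, averaged over chunks (the averaging across chunks is an assumption). The layout consistency score would follow the same pattern with a bounding-box proximity term in place of embedding similarity.

```python
import numpy as np


def cohesion_score(chunks_embeddings):
    """Average pairwise cosine similarity of embeddings within each chunk.

    chunks_embeddings: list of (n_i, d) arrays, one array per chunk.
    """
    scores = []
    for emb in chunks_embeddings:
        emb = np.asarray(emb, dtype=float)
        if len(emb) < 2:
            continue  # single-element chunks contribute no pairs
        normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = normed @ normed.T
        iu = np.triu_indices(len(emb), k=1)  # distinct pairs only
        scores.append(sims[iu].mean())
    return float(np.mean(scores)) if scores else 0.0
```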

The S2 Chunking method achieved a cohesion score of 0.85 and a layout consistency score of 0.82 on the PubMed dataset, and a cohesion score of 0.88 and a layout consistency score of 0.85 on the arXiv dataset. These results indicate that the proposed hybrid approach outperforms baseline methods such as fixed-size chunking, recursive chunking, and semantic chunking. For instance, semantic chunking achieved high cohesion scores (0.80 and 0.82) but lower layout consistency scores (0.50 and 0.55), highlighting the advantage of integrating spatial information. A table in the paper reports a Cohesion Score of 0.92, a Layout Consistency Score of 0.88, a Purity of 0.96, and an NMI of 0.93 for S2 Chunking, all higher than those of the comparison methods.
