Block Semi-Separable Matrix (Block-SSD)
- Block-SSD matrices are structured dense matrices that admit low-rank factorizations of off-diagonal blocks, facilitating efficient matrix computations.
- They represent the level-0 instance of hierarchically block separable matrices, linking single-level compression directly to multilevel skeletonization techniques.
- Their fast solvers achieve near-linear complexity for dense operator equations, making them well suited to radial kernel approximation and large dense least-squares problems.
A block semi-separable matrix (block-SSD) is a structured matrix arising as the level-0 (single-level) special case within the broader hierarchically block separable (HBS) framework. Such matrices are typically dense yet data-sparse, characterized by the property that their off-diagonal blocks admit low-rank factorizations. Block-SSD matrices and their associated fast algorithms enable efficient direct and least-squares solvers for problems exhibiting nonoscillatory, asymptotically smooth behavior, including operators derived from radial kernels. Their structure yields factorization and solve costs that scale near-linearly in problem size, with exponents depending on the ambient dimension (Ho et al., 2012).
1. Mathematical Definition and Structural Properties
Let $A \in \mathbb{C}^{M \times N}$ be partitioned into $p \times p$ blocks, where the $i$th row block has size $m_i$ and the $j$th column block has size $n_j$:

$$A = \begin{bmatrix} A_{11} & \cdots & A_{1p} \\ \vdots & \ddots & \vdots \\ A_{p1} & \cdots & A_{pp} \end{bmatrix}, \qquad \sum_{i=1}^{p} m_i = M, \qquad \sum_{j=1}^{p} n_j = N.$$

A matrix is block semi-separable (block-SSD) if every off-diagonal block $A_{ij}$ ($i \neq j$) admits a low-rank factorization

$$A_{ij} = L_i S_{ij} R_j,$$

with $L_i \in \mathbb{C}^{m_i \times k_i^{\mathrm{r}}}$, $S_{ij} \in \mathbb{C}^{k_i^{\mathrm{r}} \times k_j^{\mathrm{c}}}$, and $R_j \in \mathbb{C}^{k_j^{\mathrm{c}} \times n_j}$. Defining $D = \operatorname{diag}(A_{11}, \dots, A_{pp})$, $L = \operatorname{diag}(L_1, \dots, L_p)$, $R = \operatorname{diag}(R_1, \dots, R_p)$, and $S$ as the block matrix with zero diagonal blocks and off-diagonal blocks $S_{ij}$, the block-SSD admits the canonical single-level semi-separable form

$$A = D + L S R.$$
This structure is precisely the level-0 realization of the multilevel HBS decomposition, in which higher levels correspond to further hierarchy and recursive partitioning.
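For concreteness, the following NumPy sketch assembles a small block-SSD matrix from randomly generated factors and checks it against the blockwise definition; all sizes, ranks, and factor values are illustrative placeholders rather than data from (Ho et al., 2012).

```python
# A minimal sketch: assemble A = D + L S R from per-block factors and
# verify the blockwise definition A_ij = L_i S_ij R_j for i != j.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
p = 4                          # number of row/column blocks (illustrative)
m = [30, 25, 35, 30]           # row-block sizes m_i
n = [30, 25, 35, 30]           # column-block sizes n_j
kr = [5, 5, 5, 5]              # row skeleton ranks k_i^r
kc = [5, 5, 5, 5]              # column skeleton ranks k_j^c

Di = [rng.standard_normal((m[i], n[i])) for i in range(p)]   # diagonal blocks A_ii
Li = [rng.standard_normal((m[i], kr[i])) for i in range(p)]  # left factors L_i
Ri = [rng.standard_normal((kc[j], n[j])) for j in range(p)]  # right factors R_j
Sij = {(i, j): rng.standard_normal((kr[i], kc[j]))
       for i in range(p) for j in range(p) if i != j}

# Block-diagonal D, L, R and the coupling matrix S (zero diagonal blocks).
D, L, R = block_diag(*Di), block_diag(*Li), block_diag(*Ri)
S = np.zeros((sum(kr), sum(kc)))
ro, co = np.cumsum([0] + kr), np.cumsum([0] + kc)
for (i, j), blk in Sij.items():
    S[ro[i]:ro[i + 1], co[j]:co[j + 1]] = blk

A = D + L @ S @ R              # canonical single-level semi-separable form

# Check one off-diagonal block against A_ij = L_i S_ij R_j.
rs, cs = np.cumsum([0] + m), np.cumsum([0] + n)
i, j = 0, 2
assert np.allclose(A[rs[i]:rs[i+1], cs[j]:cs[j+1]], Li[i] @ Sij[i, j] @ Ri[j])
```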
2. Connection to Hierarchically Block Separable (HBS) Matrices
Block-SSD matrices function as a foundational element within the HBS paradigm, which embeds blockwise low-rank approximability at multiple levels of partitioning. In HBS, index sets are recursively split and assembled in a tree of depth $\lambda$. At each level $\ell = 1, \dots, \lambda$, off-diagonal blocks, indexed by the corresponding tree nodes, must admit factorizations analogous to the block-SSD form. The telescoping, multilevel decomposition is defined recursively via blockwise interpolative decompositions:

$$A \approx D^{(\lambda)} + L^{(\lambda)} \left( D^{(\lambda-1)} + L^{(\lambda-1)} \left( \cdots \left( D^{(1)} + L^{(1)} S^{(0)} R^{(1)} \right) \cdots \right) R^{(\lambda-1)} \right) R^{(\lambda)}.$$

For block-SSD, this hierarchy truncates at level 0, directly yielding $A = D + L S R$ without further recursion.
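The telescoping structure can be illustrated by applying a two-level telescoped operator to a vector without ever forming the dense matrix; the sketch below uses invented block sizes and random factors in place of the interpolative decompositions that would produce $D^{(\ell)}$, $L^{(\ell)}$, $R^{(\ell)}$ in practice.

```python
# Hedged sketch: apply the two-level telescoped form
#   A x ≈ D2 x + L2 ( D1 (R2 x) + L1 ( S0 (R1 (R2 x)) ) )
# using only block-diagonal and small dense products.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
blk = lambda shapes: sp.block_diag(
    [rng.standard_normal(s) for s in shapes], format="csr")

# Level 2 (finest): 8 blocks of 125 unknowns, skeleton rank 25 each.
D2 = blk([(125, 125)] * 8)              # 1000 x 1000 block-diagonal
L2 = blk([(125, 25)] * 8)               # 1000 x 200
R2 = blk([(25, 125)] * 8)               # 200 x 1000
# Level 1: 4 merged blocks of 50 skeleton unknowns, rank 20 each.
D1 = blk([(50, 50)] * 4)                # 200 x 200
L1 = blk([(50, 20)] * 4)                # 200 x 80
R1 = blk([(20, 50)] * 4)                # 80 x 200
S0 = rng.standard_normal((80, 80))      # level-0 (root) coupling: dense but small

def matvec(x):
    """Apply the telescoped operator; never forms the dense 1000 x 1000 A."""
    z2 = R2 @ x
    return D2 @ x + L2 @ (D1 @ z2 + L1 @ (S0 @ (R1 @ z2)))

x = rng.standard_normal(1000)
y = matvec(x)                           # near-linear apply for fixed ranks
```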
3. Recursive Skeletonization and Matrix Compression
The key methodology for exploiting block-SSD/HBS structure is recursive skeletonization. For general HBS matrices, skeletonization proceeds bottom-up from the leaves of the partitioning tree. At the finest level ($\ell = \lambda$), each off-diagonal block is approximated by an interpolative decomposition (ID):

$$A_{ij} \approx L_i A_{\hat{i}\hat{j}} R_j, \qquad i \neq j,$$

with $L_i$ and $R_j$ interpolation matrices and $\hat{i}$, $\hat{j}$ indexing the extracted row/column skeletons. The process ascends levels by forming reduced skeleton systems $S^{(\ell)}$, repartitioning, and recompressing. For block-SSD (level 0), no recursion is needed; the compression reflects the single-level structure, and the explicit factorization $A = D + L S R$ suffices.
4. Equality-Constrained Least Squares Embedding
Solving least-squares problems, $\min_x \|Ax - b\|_2$, with block-SSD structure invokes a sparse, equality-constrained embedding. The telescoping HBS form introduces auxiliary variables at each level. Collecting these in block vectors, the constrained system is:

$$\min_{\hat{x}} \left\| \hat{A} \hat{x} - b \right\|_2 \quad \text{subject to} \quad \hat{C} \hat{x} = 0,$$

where both $\hat{A}$ and $\hat{C}$ are sparse matrices derived from the telescoped ID compressions, and $\hat{x}$ comprises the multilevel variables. For block-SSD, the embedding reduces to (with $z = Rx$ and $y = Sz$):

$$\min_{x,\,y,\,z} \left\| \begin{bmatrix} D & L & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} - b \right\|_2 \quad \text{subject to} \quad \begin{bmatrix} R & 0 & -I \\ 0 & -I & S \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = 0,$$

yielding a system with $N + k^{\mathrm{r}} + k^{\mathrm{c}}$ unknowns, where $k^{\mathrm{r}} = \sum_i k_i^{\mathrm{r}}$ and $k^{\mathrm{c}} = \sum_j k_j^{\mathrm{c}}$, thus maintaining sparsity and computational favorability.
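A minimal sketch of this level-0 embedding follows; the helper name `embed_block_ssd` and the stacked ordering $(x, y, z)$ are conventions adopted here for illustration, not the paper's notation.

```python
# Assemble the sparse objective and constraint blocks of the level-0
# equality-constrained embedding of min ||Ax - b|| with A = D + L S R.
import scipy.sparse as sp

def embed_block_ssd(D, L, S, R):
    """Return (Ahat, Chat) over the stacked unknowns (x, y, z)."""
    N = D.shape[1]
    kr, kc = S.shape                     # total row/column skeleton dimensions
    I_r, I_c = sp.identity(kr), sp.identity(kc)
    Z = lambda a, b: sp.csr_matrix((a, b))   # empty sparse zero block
    # Objective row:   [ D   L   0 ] (x, y, z)  ~  A x
    Ahat = sp.hstack([D, L, Z(D.shape[0], kc)], format="csr")
    # Constraints:     R x - z = 0   and   S z - y = 0
    Chat = sp.vstack([
        sp.hstack([R, Z(kc, kr), -I_c]),
        sp.hstack([Z(kr, N), -I_r, S]),
    ], format="csr")
    return Ahat, Chat
```

The returned pair can be handed to any equality-constrained least-squares routine; its combined column dimension matches the count $N + k^{\mathrm{r}} + k^{\mathrm{c}}$ noted above.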
5. Semi-Direct Least Squares Solver
The fast semi-direct solver operates in two phases: direct precomputation and iterative least-squares refinement.
Precomputation (Direct Phase)
- Compress $A$ via (possibly recursive) skeletonization to precision $\epsilon$, yielding the sparse embedding matrices $\hat{A}$ and $\hat{C}$.
- Form the weighted matrix:
$$\hat{A}_\tau = \begin{bmatrix} \tau \hat{C} \\ \hat{A} \end{bmatrix}, \qquad \tau \gg 1.$$
- Compute a sparse QR decomposition of $\hat{A}_\tau$.
Solve Phase (Deferred-Correction Refinement)
For each right-hand side $b$:
- Solve $\hat{A}_\tau \hat{x} \approx \begin{bmatrix} 0 \\ b \end{bmatrix}$ for $\hat{x}$ via the precomputed QR factors and back-substitution.
- Initialize residuals and Lagrange multipliers.
- Perform up to two deferred-correction steps:
- Augment the right-hand side and solve for the correction $\Delta\hat{x}$.
- Update the solution, residuals, and multipliers.
It is shown that for well-conditioned problems, no more than two correction steps suffice (Ho et al., 2012).
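The two phases can be condensed into the sketch below. It substitutes a dense NumPy QR for the sparse QR used in practice (e.g., SuiteSparseQR), takes dense `Ahat`/`Chat` as produced by the embedding sketch above (convert small sparse demos via `.toarray()`), and uses a simplified residual-refinement loop as a stand-in for the paper's deferred-correction steps with multiplier updates.

```python
# Hedged sketch of the semi-direct solve via the weighted embedding
# A_tau = [tau*Chat; Ahat]: factor once, then solve and refine per RHS.
import numpy as np
from scipy.linalg import solve_triangular

def semidirect_solve(Ahat, Chat, b, tau=1e8, n_correct=2):
    nc = Chat.shape[0]
    A_tau = np.vstack([tau * Chat, Ahat])        # weighted matrix (precompute)
    Q, T = np.linalg.qr(A_tau)                   # dense stand-in for sparse QR

    rhs = np.concatenate([np.zeros(nc), b])      # b has one entry per row of Ahat
    xhat = solve_triangular(T, Q.T @ rhs)        # back-substitution
    for _ in range(n_correct):                   # <= 2 steps suffice in practice
        resid = rhs - A_tau @ xhat               # weighted residual
        xhat += solve_triangular(T, Q.T @ resid) # correction, same factorization
    return xhat                                  # stacked (x, y, z); x is the leading block
```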
6. Computational Complexity and Dimensional Scaling
The asymptotic complexity depends on both the block ranks and the ambient dimension $d$ ($d = 1, 2, 3$), with tree depth $\lambda \sim \log N$. Let $k_\ell$ denote the off-diagonal rank at level $\ell$; empirical scaling for singular kernels is

$$k_\ell = \mathcal{O}\!\left( n_\ell^{\,1 - 1/d} \right),$$

where $n_\ell$ is the block size at level $\ell$ (essentially constant, up to logarithmic factors, for $d = 1$).
Specific complexity results are:
| Phase | $d = 1$ | $d = 2$ | $d = 3$ |
|---|---|---|---|
| Compression, QR | $\mathcal{O}(N)$ | $\mathcal{O}(N^{3/2})$ | $\mathcal{O}(N^{2})$ |
| Solve | $\mathcal{O}(N)$ | $\mathcal{O}(N \log N)$ | $\mathcal{O}(N^{4/3})$ |
If sources and targets are well-separated, $k_\ell = \mathcal{O}(1)$ and all complexities collapse to $\mathcal{O}(N)$.
7. Specialization and Significance
For matrices that are exactly block-SSD (level-0), recursive skeletonization is unnecessary; the semi-separable representation is direct, and the equality-constrained solver operates natively. The sparse system increases only by the auxiliary variables, preserving scalability:

$$N \;\longrightarrow\; N + k^{\mathrm{r}} + k^{\mathrm{c}} = \mathcal{O}(N) \quad \text{for bounded ranks}.$$
This specialization demonstrates that the classical fast SSD direct solvers are recovered within the HBS formalism, providing a unified theoretical and algorithmic framework. The block-SSD class is thus pivotal for efficient numerical solution of dense operator equations with underlying data sparsity, especially in contexts involving radial basis function approximation, updating, and downdating (Ho et al., 2012).
Block semi-separable matrices represent the atomic case of hierarchically structured data-sparse matrices. Their innate low-rank block structure, efficient skeletal compressions, and compatibility with modern sparse linear algebra techniques enable fast dense solvers whose complexity scales near-linearly in problem size, with exponents depending on the spatial dimension.