
Towards Quantifying the Hessian Structure of Neural Networks (2505.02809v1)

Published 5 May 2025 in cs.LG, math.OC, and stat.ML

Abstract: Empirical studies reported that the Hessian matrix of neural networks (NNs) exhibits a near-block-diagonal structure, yet its theoretical foundation remains unclear. In this work, we reveal two forces that shape the Hessian structure: a "static force" rooted in the architecture design, and a "dynamic force" arising from training. We then provide a rigorous theoretical analysis of the "static force" at random initialization. We study linear models and 1-hidden-layer networks with the mean-square (MSE) loss and the Cross-Entropy (CE) loss for classification tasks. By leveraging random matrix theory, we compare the limit distributions of the diagonal and off-diagonal Hessian blocks and find that the block-diagonal structure arises as $C \rightarrow \infty$, where $C$ denotes the number of classes. Our findings reveal that $C$ is a primary driver of the near-block-diagonal structure. These results may shed new light on the Hessian structure of LLMs, which typically operate with a large $C$ exceeding $10^4$ or $10^5$.

Summary

  • The paper reveals that the Hessian’s near-block-diagonal structure is driven by a static force from architecture and a dynamic force from training.
  • It employs random matrix theory with Lindeberg interpolation to decouple dependencies in linear models and 1-hidden-layer networks under MSE and CE losses.
  • The findings show that increasing the number of classes reduces off-diagonal block influence at rates of O(1/C) or O(1/C^2), guiding efficient optimizer design.

This paper (2505.02809) investigates the long-standing empirical observation that the Hessian matrix of neural networks exhibits a near-block-diagonal structure. While this phenomenon has been reported in prior work, its theoretical underpinnings have remained unclear. The authors reveal that this structure is influenced by two factors: a "static force" determined by the network architecture and a "dynamic force" arising from the training process. This work provides a rigorous theoretical analysis primarily focusing on the "static force" at random initialization for linear models and 1-hidden-layer networks under both Mean-Square Error (MSE) and Cross-Entropy (CE) losses.

The practical importance of understanding the Hessian structure lies in its connection to optimization algorithms and training dynamics. Diagonal preconditioners like Adam and block-diagonal methods like Shampoo and Muon have shown empirical success, which is believed to be related to the Hessian's structure. Understanding this structure can lead to the design of more efficient optimizers, such as Adam-mini, which leverages the near-block-diagonal property for memory reduction.

The authors challenge the previous notion that CE loss is the primary driver of the near-block-diagonal structure. Through empirical studies (Figures 1 and 4) on synthetic Gaussian data, they show that for 1-hidden-layer networks:

  • Under both MSE and CE losses, the hidden-layer ($H_{ww}$) and output-layer ($H_{vv}$) Hessians exhibit near-block-diagonal structures, which persist throughout training. This is attributed to the "static force."
  • Under CE loss at random initialization, the cross-layer Hessian ($H_{wv}$) shows a distinct "block-circulant" pattern (Figure 4a). This pattern diminishes during training (Figure 4b-f) and is attributed to the "dynamic force." This "block-circulant-block-diagonal" structure is a novel observation.

The theoretical analysis focuses on quantifying the relative magnitudes of diagonal and off-diagonal blocks using the Frobenius norm, particularly in the asymptotic regime where the input dimension ($d$) and sample size ($N$) grow proportionally ($d/N \rightarrow \gamma > 0$). The key finding is that the number of classes ($C$) is a primary driver of the near-block-diagonal structure at initialization.

For linear models with CE loss (Theorem 1), the ratio of the squared Frobenius norm of off-diagonal blocks ($\|\frac{\partial^2\ell}{\partial v_i \partial v_j^\top}\|_F^2$) to that of diagonal blocks ($\|\frac{\partial^2\ell}{\partial v_i \partial v_i^\top}\|_F^2$) vanishes at the rate $\mathcal{O}(1/C^2)$ as $C \rightarrow \infty$. This implies the Hessian becomes block-diagonal with $C$ blocks, where each block corresponds to the weights associated with a single class.
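
A small numerical sanity check of this rate can be run with autograd. The sketch below is a hypothetical illustration rather than the paper's experimental protocol (Gaussian inputs, random labels, random initialization, and the dimensions are illustrative choices): it extracts one diagonal and one off-diagonal block of the output-weight Hessian of a linear softmax model and reports the ratio of their squared Frobenius norms, which should shrink roughly like $1/C^2$ as $C$ grows.

import torch

def block_ratio(C, d=64, N=256, seed=0):
    # Gaussian data and random labels (illustrative setup, not the paper's experiments)
    torch.manual_seed(seed)
    X = torch.randn(N, d)
    y = torch.randint(C, (N,))
    V = (torch.randn(C, d) / d**0.5).requires_grad_()  # output weights, one row per class
    loss = torch.nn.functional.cross_entropy(X @ V.t(), y)
    g = torch.autograd.grad(loss, V, create_graph=True)[0]  # gradient, shape (C, d)

    def block(i, j):
        # Hessian block d^2 loss / (dv_i dv_j), assembled row by row, shape (d, d)
        rows = [torch.autograd.grad(g[i, k], V, retain_graph=True)[0][j] for k in range(d)]
        return torch.stack(rows)

    off_diag = block(0, 1).pow(2).sum()  # squared Frobenius norm of a cross-class block
    diag = block(0, 0).pow(2).sum()      # squared Frobenius norm of a diagonal block
    return (off_diag / diag).item()

for C in (2, 8, 32, 128):
    print(C, block_ratio(C))  # the ratio is expected to decay roughly as 1/C^2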

For 1-hidden-layer networks (Theorem 2), considering the hidden-layer Hessian ($H_{ww}$) and the output-layer Hessian ($H_{vv}$):

  • For $H_{ww}$ (affecting $w_i \in \mathbb{R}^d$), the ratio of off-diagonal to diagonal block norms decays at the rate $\mathcal{O}(1/C)$ for both MSE and CE losses as $C \rightarrow \infty$. This suggests a block-diagonal structure with $m$ blocks (one for each hidden neuron).
  • For $H_{vv}$ (affecting $v_i \in \mathbb{R}^m$), the ratio decays at the rate $\mathcal{O}(1/C^2)$ for CE loss as $C \rightarrow \infty$; under MSE loss, $H_{vv}$ is already strictly block-diagonal. This suggests a block-diagonal structure with $C$ blocks (one for each output neuron).

These theoretical results, which indicate that the Hessian sub-matrices become block-diagonal as $C$ increases, align with the empirical observations that large $C$ promotes the near-block-diagonal structure (Figures 5, B.2, B.3).
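
As a companion to the linear-model check above, the hypothetical sketch below (again with Gaussian inputs, random labels, a ReLU hidden layer, and arbitrary dimensions not taken from the paper) measures the same ratio for two hidden-neuron blocks of $H_{ww}$ at random initialization; Theorem 2 predicts a slower $\mathcal{O}(1/C)$ decay here.

import torch
import torch.nn.functional as F

def hww_block_ratio(C, d=32, m=16, N=256, seed=0):
    # 1-hidden-layer ReLU network with CE loss at random init (illustrative setup)
    torch.manual_seed(seed)
    X = torch.randn(N, d)
    y = torch.randint(C, (N,))
    W = (torch.randn(m, d) / d**0.5).requires_grad_()  # hidden weights, one row per neuron
    V = torch.randn(C, m) / m**0.5                     # output weights (held fixed here)
    loss = F.cross_entropy(torch.relu(X @ W.t()) @ V.t(), y)
    g = torch.autograd.grad(loss, W, create_graph=True)[0]  # gradient, shape (m, d)

    def block(i, j):
        # Hessian block d^2 loss / (dw_i dw_j), shape (d, d)
        rows = [torch.autograd.grad(g[i, k], W, retain_graph=True)[0][j] for k in range(d)]
        return torch.stack(rows)

    return (block(0, 1).pow(2).sum() / block(0, 0).pow(2).sum()).item()

for C in (2, 8, 32, 128):
    print(C, hww_block_ratio(C))  # the ratio is expected to decay roughly as 1/C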

The core technical challenge in proving these results lies in analyzing random matrices of the form $\frac{1}{N} X_N \Lambda_N X_N^\top$ (or similar structures), where the matrix $\Lambda_N$ (containing loss function and activation dependencies) depends on the data matrix $X_N$. Standard random matrix theory results, such as the generalized Marchenko-Pastur theorem, typically require independence between $X_N$ and $\Lambda_N$.

The authors tackle this by observing that the dependence diminishes as $d \rightarrow \infty$. They propose a systematic decoupling procedure inspired by the Lindeberg interpolation principle. The general idea, sketched schematically after the list below, is to:

  1. Introduce a decoupled matrix where the dependency is removed (e.g., replacing $X_N$ with an independent copy $\tilde{X}_N$ inside $\Lambda_N$).
  2. Construct an interpolation process between the original and decoupled matrices.
  3. Analyze the difference in properties (like the Stieltjes transform) between the original and decoupled matrices by examining the derivative of the property with respect to the interpolation parameter.
  4. Bound this derivative, often using tools like Stein's Lemma (which leverages the Gaussian assumption on the data), to show the difference vanishes asymptotically.
  5. Apply standard random matrix theory results (like generalized Marchenko-Pastur theory) to the decoupled matrix, which is now amenable to such analysis.
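
To make these steps concrete, here is one standard way such an interpolation can be set up; the notation is illustrative and not necessarily the paper's exact construction. With $\tilde{X}_N$ an independent copy of $X_N$, define

$$M_N(t) \;=\; \frac{1}{N}\, X_N\, \Lambda_N\!\big(\sqrt{t}\, X_N + \sqrt{1-t}\, \tilde{X}_N\big)\, X_N^\top, \qquad t \in [0,1],$$

so that $M_N(1)$ is the original (dependent) matrix and $M_N(0)$ is the decoupled one. Writing $s_N(t, z) = \frac{1}{d}\operatorname{tr}\big(M_N(t) - zI\big)^{-1}$ for the Stieltjes transform, one bounds

$$\big|s_N(1, z) - s_N(0, z)\big| \;\le\; \int_0^1 \Big|\frac{\partial}{\partial t}\, s_N(t, z)\Big|\, dt,$$

and Stein's lemma (exploiting the Gaussianity of the data) is used to show the integrand vanishes as $d \rightarrow \infty$, so the original and decoupled matrices share the same limiting spectral behavior and the generalized Marchenko-Pastur theory applies to the latter.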

For the output-layer Hessian in 1-hidden-layer networks with CE loss, a different approach is needed because the matrix dimension ($m$) is fixed rather than growing with $d$ or $N$, and the dependence structure is more complex. For this case, the authors analyze the expectation of the entry-wise second moments of the Hessian blocks after an initial decoupling step that uses Lindeberg's principle to replace inputs modulated by $W$ with standard Gaussian variables.

Implementation Considerations and Applications:

  • Optimizer Design: The theoretical findings suggest that for tasks with a large number of classes (like LLMs), the Hessian is indeed strongly structured. This provides theoretical justification for using block-diagonal preconditioners, where blocks correspond to parameters associated with specific output neurons or hidden neurons. For instance, an optimizer could approximate $H_{vv}$ as block-diagonal and apply per-class preconditioning updates; for $H_{ww}$, per-hidden-neuron block-diagonal preconditioning could be considered.
  • Computational Cost: Calculating the full Hessian, or even full blocks, explicitly for large networks is computationally prohibitive. Practical implementations of optimizers that exploit this structure rely on efficient approximations, such as block-diagonal approximations derived from diagonal preconditioning methods (like Adam) or more sophisticated block approximations (like Shampoo). Hessian-vector products can also be used to estimate diagonal or block-diagonal entries efficiently (a minimal sketch follows this list).
  • Memory Reduction: The observed structure supports methods like Adam-mini, which exploit the relative smallness of off-diagonal blocks to reduce memory usage for optimizer states.
  • Limitations: The theory focuses on random initialization and simple architectures (linear, 1-hidden-layer) with specific data assumptions (Gaussian). Extending this to deeper, more complex architectures (Transformers, CNNs) and real-world, non-Gaussian data, and understanding how the structure evolves throughout training remains an open challenge. The "dynamic force" observed empirically is not yet theoretically characterized.
  • Debugging/Understanding: Visualizing Hessian blocks (as shown in the figures) can be a valuable tool for debugging models and understanding training behavior, even if full theoretical analysis isn't available for the specific architecture or data. Code for calculating and visualizing these blocks can be adapted from existing Hessian libraries.
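
As one example of the Hessian-vector-product route mentioned above, the following minimal sketch (an assumed setup, not code from the paper) estimates the diagonal of a single parameter block via double backprop and a Hutchinson-style estimator with Rademacher probes:

import torch

def hutchinson_block_diag(loss, param, n_samples=10):
    # Unbiased estimate of diag(H_pp) for one parameter tensor: diag(H) = E[v * (H v)]
    # for Rademacher probes v; each H v is a Hessian-vector product via double backprop.
    grad = torch.autograd.grad(loss, param, create_graph=True)[0]
    est = torch.zeros_like(param)
    for _ in range(n_samples):
        v = (torch.randint(0, 2, param.shape) * 2 - 1).to(param.dtype)
        hv = torch.autograd.grad((grad * v).sum(), param, retain_graph=True)[0]
        est += v * hv
    return est / n_samples

# Hypothetical usage, e.g. on the output-layer weights of the SimpleNN model
# shown in the code further below:
#   loss = torch.nn.functional.cross_entropy(model(x), y)
#   diag_vv = hutchinson_block_diag(loss, model.fc2.weight)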

import torch
import torch.nn as nn


def calculate_hessian_blocks(model, loss_fn, data, targets):
    """Compute the diagonal Hessian block H_pp for every parameter tensor p.

    This explicit double-backward loop scales quadratically in the number of
    parameters and is only practical for small models. For larger networks,
    prefer Hessian-vector products (e.g. torch.autograd.functional.hvp) or
    block approximations in the spirit of Adam, Shampoo, or Adam-mini.
    """
    loss = loss_fn(model(data), targets)
    named_params = list(model.named_parameters())
    grads = torch.autograd.grad(loss, [p for _, p in named_params], create_graph=True)

    hessian_blocks = {}
    for (name, param), grad in zip(named_params, grads):
        flat_grad = grad.reshape(-1)
        n = flat_grad.numel()
        block = torch.zeros(n, n)
        for i in range(n):
            # Row i of the block: derivative of gradient entry i w.r.t. the same parameter
            row = torch.autograd.grad(flat_grad[i], param, retain_graph=True,
                                      allow_unused=True)[0]
            block[i] = torch.zeros(n) if row is None else row.reshape(-1)
        hessian_blocks[name] = block
    return hessian_blocks


# To reproduce the block-structure plots in the paper, the rows/columns of each
# block must be arranged by groups of weights (e.g. all weights feeding hidden
# neuron 1, then neuron 2, ..., or all weights of output class 1, class 2, ...),
# then visualized, for example with:
#   import matplotlib.pyplot as plt
#   import seaborn as sns
#   sns.heatmap(hessian_blocks["fc2.weight"].abs(), cmap="viridis")
#   plt.title("Absolute Hessian block (output layer, H_vv)")
#   plt.show()
# Cross-layer blocks such as H_wv require mixed second derivatives (gradients of
# one layer's gradient entries w.r.t. the other layer's weights).


class SimpleNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

In summary, the paper provides the first rigorous theoretical explanation for the near-block-diagonal Hessian structure in simple neural networks at initialization, identifying the number of classes $C$ as a key factor. It introduces valuable techniques from random matrix theory for analyzing dependent random matrices in this context. While limited to specific conditions, the findings offer crucial theoretical support for the design and empirical success of optimization methods tailored to structured Hessians, particularly in large-scale classification problems like those faced by LLMs.
