AutoClip: Adaptive Clipping & Vision-Language Inference
- AutoClip denotes two adaptive algorithms that calibrate, respectively, gradient clipping thresholds during training and vision-language prompt weights at inference, both using data-driven statistics.
- The gradient-clipping variant replaces static thresholds with percentile-based ones, yielding smoother optimization, improved SI-SDR in source separation, and minimal manual tuning.
- AutoCLIP for vision-language inference dynamically derives per-image prompt weights, consistently enhancing classification accuracy with negligible overhead.
AutoClip refers to two distinct adaptive algorithms developed for neural network training and inference: one for adaptive gradient clipping (AutoClip for source separation networks) and another for auto-tuning zero-shot classifiers of vision-language models (AutoCLIP for vision-language prompt ensembling). Both approaches autonomously adapt critical hyperparameters (clipping thresholds and prompt ensemble weights) based on data-driven statistics, offering improved stability or accuracy with minimal manual tuning. The following entry details both algorithms, each rooted in a different subdomain, unified by the principle of automatic, empirical adaptation.
1. Adaptive Gradient Clipping via Percentile Norm Estimation
AutoClip, as introduced for gradient clipping in source separation networks, is a data-driven scheme that replaces the manually selected global norm threshold in clip-by-norm procedures with an adaptively chosen percentile-based bound. At each optimization step $t$, the clipping threshold $\eta_c(t)$ is set to the $p$-th percentile of the historical gradient norm statistics accumulated up to that point, where $p$ is a user-chosen percentile parameter. Specifically, for loss $\mathcal{L}$ and current gradient $g_t = \nabla_\theta \mathcal{L}(\theta_t)$, the update is

$$
g_t \leftarrow
\begin{cases}
g_t, & \|g_t\| \le \eta_c(t), \\
\eta_c(t)\,\dfrac{g_t}{\|g_t\|}, & \|g_t\| > \eta_c(t),
\end{cases}
\qquad \eta_c(t) = P_p(G_t),
$$

with $P_p(\cdot)$ the $p$-th percentile operator and $G_t = \{\|g_1\|, \|g_2\|, \dots, \|g_t\|\}$ the empirical history of all gradient norms observed up to and including step $t$. Only gradients whose norm falls in the top $(100 - p)\%$ of this history are clipped. This mechanism obviates the need to tune an absolute clipping threshold, instead requiring only a single percentile parameter $p$ (Seetharaman et al., 2020).
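As a concrete illustration with made-up numbers (assuming NumPy-style linear interpolation for the percentile): if the norms observed up to and including the current step are $G_t = \{2, 3, 4, \dots, 11\}$ and $p = 10$, then

$$
\eta_c(t) = P_{10}(G_t) = 2.9, \qquad \|g_t\| = 11 > 2.9 \;\Rightarrow\; g_t \leftarrow \frac{2.9}{11}\, g_t \approx 0.26\, g_t,
$$

whereas any gradient with norm at most $2.9$ would pass through unchanged.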
2. Implementation of AutoClip in Neural Network Training
AutoClip’s algorithmic simplicity allows insertion into existing training routines. At each iteration, the gradient norm is appended to the history $G_t$, the running $p$-th percentile is computed, and gradients are rescaled only if their norm exceeds this adaptive threshold. The following pseudocode summarizes the core logic:
```python
G_history = []   # running list of observed gradient norms
p = 10           # percentile cutoff

for t in range(1, T + 1):
    X_t = next_minibatch()
    loss = compute_loss(X_t, θ)
    grads = backprop(loss, θ)
    grad_norm = norm(grads)
    G_history.append(grad_norm)
    η_c = percentile(G_history, p)          # adaptive clipping threshold
    if grad_norm > η_c:
        grads = grads * (η_c / grad_norm)   # rescale to norm η_c
    θ = optimizer_step(θ, grads)
```
This method decouples hyperparameter sensitivity from problem-specific scale, integrates with optimizers such as SGD or Adam, and generalizes across domains.
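A minimal PyTorch-style sketch of the same logic is shown below; the helper `autoclip_`, its use of `torch.nn.utils.clip_grad_norm_`, and the surrounding loop are illustrative choices, not the authors' reference implementation.

```python
import numpy as np
import torch

def autoclip_(model, grad_norm_history, p=10.0):
    """Clip the model's gradients (in place) to the p-th percentile of all
    gradient norms observed so far. Call after loss.backward() and before
    optimizer.step()."""
    # Global L2 norm over all parameter gradients.
    norms = [prm.grad.detach().norm() for prm in model.parameters() if prm.grad is not None]
    total_norm = torch.norm(torch.stack(norms)).item()
    grad_norm_history.append(total_norm)
    # Adaptive threshold: p-th percentile of the accumulated history.
    clip_value = float(np.percentile(grad_norm_history, p))
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_value)
    return clip_value

# Usage inside an ordinary training loop (model, loader, criterion,
# and optimizer are assumed to exist):
grad_norm_history = []
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    autoclip_(model, grad_norm_history, p=10.0)
    optimizer.step()
```

Calling the helper between `loss.backward()` and `optimizer.step()` is the only change relative to a standard training loop.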
3. Empirical Evaluation in Source Separation Networks
AutoClip was evaluated on the WSJ0-2mix speech separation dataset using a 4-layer bidirectional LSTM architecture with multiple objective functions: deep clustering, whitened k-means, mask inference with a phase-sensitive loss, multi-task Chimera, and time-domain SNR. Models were trained with the Adam optimizer, batch size 25, sequence length 400, for 100 epochs.
The effect of the clipping percentile $p$ on test-set SI-SDR (dB) is summarized below, where $p_1 < p_2 < \dots < p_6 < 100$ denote the tested percentiles in increasing order and the rightmost column ($p = 100$) corresponds to no clipping; rows follow the loss functions in the order listed above:

| Loss | $p_1$ | $p_2$ | $p_3$ | $p_4$ | $p_5$ | $p_6$ | $p = 100$ |
|---|---|---|---|---|---|---|---|
| Deep clustering | 10.7 | 10.7 | 10.8 | 10.7 | 10.7 | 10.5 | 10.2 |
| Whitened k-means | 11.1 | 11.2 | 11.0 | 11.0 | 11.0 | 11.0 | 10.8 |
| Mask inference (phase-sensitive) | 10.0 | 10.3 | 10.2 | 9.9 | 9.2 | 8.7 | 8.5 |
| Multi-task Chimera | 11.2 | 11.3 | 11.3 | 11.3 | 11.2 | 11.1 | 10.9 |
| Time-domain SNR | 9.9 | 10.2 | 10.4 | 10.3 | 9.9 | 9.5 | 8.3 |
Performance deteriorates substantially without clipping ($p = 100$), particularly for the mask-inference and time-domain SNR losses (up to roughly 2 dB). The percentile $p = 10$ is near-optimal across objectives, and performance remains robust even at extreme percentile settings, outperforming prior static-threshold baselines (Seetharaman et al., 2020).
4. Dynamics and Loss Landscape Analysis
AutoClip’s effect on optimization dynamics was probed by tracking the per-step update size, the empirical Lipschitz constant (local smoothness) of the gradient, and the gradient norm. With AutoClip ($p = 10$), the step-size trajectory is smoother and exhibits built-in warmup and decay behavior. The Pearson correlation between gradient norm and local smoothness, compared with and without clipping, indicates that AutoClip confines the optimizer to flatter regions of the loss landscape. Restricting updates with large gradients mitigates erratic jumps and enhances generalization (final SI-SDR improved from 8.1 dB to 9.2 dB) (Seetharaman et al., 2020).
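The following sketch shows one way to compute such diagnostics offline from logged iterates, assuming the local smoothness at step $t$ is estimated by the finite difference $\hat{L}_t = \|g_{t+1} - g_t\| / \|\theta_{t+1} - \theta_t\|$; the estimator and the function names are illustrative assumptions, not the paper's exact instrumentation.

```python
import numpy as np

def local_smoothness(theta_prev, theta_next, grad_prev, grad_next, eps=1e-12):
    """Finite-difference estimate of the local Lipschitz constant of the gradient,
    L_hat = ||g_{t+1} - g_t|| / ||theta_{t+1} - theta_t||, from flattened vectors."""
    return np.linalg.norm(grad_next - grad_prev) / (np.linalg.norm(theta_next - theta_prev) + eps)

def smoothness_correlation(grad_norms, lipschitz_estimates):
    """Pearson correlation between the gradient norm and the estimated
    local smoothness along a training run."""
    return float(np.corrcoef(grad_norms, lipschitz_estimates)[0, 1])
```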
5. General Applicability, Simplicity, and Broader Relevance
The percentile-based thresholding in AutoClip is not tied to a specific optimizer or loss function. It is optimizer-agnostic, scale-invariant, and requires only a single percentile parameter—no manual tuning of absolute clipping thresholds on a per-network or per-task basis. Applicability extends beyond audio source separation to language modeling (where exploding gradients may arise), image classifiers (to avoid sharp minima), and RL or any stochastic optimization scenario (Seetharaman et al., 2020). The method is “set-and-forget,” implemented with a running list or histogram of gradient norms and a percentile computation.
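For very long runs, the running list can be replaced by a fixed-size histogram with an approximate percentile query; the sketch below shows one such memory-bounded variant (the class name and the log-spaced binning are illustrative assumptions, not part of the published method).

```python
import numpy as np

class GradNormHistogram:
    """Fixed-memory approximation of the gradient-norm history: counts fall into
    log-spaced bins, and percentiles are read off the cumulative bin counts."""

    def __init__(self, lo=1e-8, hi=1e4, n_bins=1024):
        self.edges = np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)
        self.counts = np.zeros(n_bins, dtype=np.int64)

    def add(self, norm):
        # Locate the bin whose interval contains `norm` (clamped to the bin range).
        i = int(np.clip(np.searchsorted(self.edges, norm) - 1, 0, len(self.counts) - 1))
        self.counts[i] += 1

    def percentile(self, p):
        # Upper edge of the first bin whose cumulative count reaches p percent.
        cum = np.cumsum(self.counts)
        k = int(np.searchsorted(cum, p / 100.0 * cum[-1]))
        return float(self.edges[min(k + 1, len(self.edges) - 1)])
```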
6. AutoCLIP for Vision-Language Model Inference
In the domain of zero-shot vision-language classification, AutoCLIP introduces automatic tuning of the prompt-ensemble weights for each image at inference time. Given a set of $K$ prompt templates per class, the baseline CLIP strategy uniformly averages the class-descriptor similarities for classification. AutoCLIP instead derives per-image weights $w_j$ over the prompts using statistics of the descriptor-image cosine similarities $e_{c,j}$, where $c$ indexes classes and $j$ indexes prompt templates.
For each image, an aggregated match quality $a_j$ is computed for every prompt template $j$ as a smooth-max (logsumexp with temperature) of the similarities $e_{c,j}$ over all classes $c$. The weights are then produced via a softmax over these aggregated qualities, balancing prompt informativeness. The final class score is $s_c = \sum_j w_j\, e_{c,j}$, and classification proceeds by $\hat{c} = \arg\max_c s_c$.
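A compact NumPy sketch of this per-image weighting for a single image, given a precomputed similarity matrix, is shown below; the function name and the temperature parameters `tau` and `beta` are illustrative assumptions, and the exact scaling used in the paper may differ.

```python
import numpy as np
from scipy.special import logsumexp

def autoclip_zero_shot(sim, tau=1.0, beta=1.0):
    """
    sim:  (C, K) matrix of cosine similarities e[c, j] between one image
          embedding and the descriptor of class c under prompt template j.
    tau:  temperature of the smooth-max over classes (assumed parameterization).
    beta: temperature of the softmax producing prompt weights (assumed parameterization).
    Returns the predicted class index, the per-prompt weights, and the class scores.
    """
    # Aggregated match quality of each prompt template: smooth-max over classes.
    a = tau * logsumexp(sim / tau, axis=0)            # shape (K,)
    # Per-image prompt weights: softmax over the aggregated qualities.
    w = np.exp((a - a.max()) / beta)
    w /= w.sum()
    # Weighted prompt ensemble -> class scores -> argmax.
    scores = sim @ w                                  # shape (C,)
    return int(scores.argmax()), w, scores
```

Setting uniform weights recovers the baseline prompt-ensemble average, which is why the method can sit as a short wrapper around an existing CLIP inference pipeline.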
This approach yields consistent accuracy improvements across CLIP-style backbone models, datasets, and prompt ensemble strategies, with gains of 0.5–3 percentage points typical for sufficiently large prompt ensembles and negligible computational overhead. AutoCLIP is suitable whenever prompt-ensemble effects are nontrivial, and it can be implemented as a short wrapper around the standard inference pipeline (Metzen et al., 2023).