
Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters (2001.01275v2)

Published 5 Jan 2020 in cs.CV

Abstract: In this paper, we investigate the empirical impact of orthogonality regularization (OR) in deep learning, either solo or collaboratively. Recent work on OR has reported promising accuracy gains. In our ablation study, however, we do not observe such significant improvement from existing OR techniques compared with conventional training based on weight decay, dropout, and batch normalization. To identify the real gain from OR, inspired by locality sensitive hashing (LSH) in angle estimation, we propose to introduce an implicit self-regularization into OR that pushes the mean and variance of filter angles in a network towards 90° and 0, respectively, to achieve (near) orthogonality among the filters, without using any other explicit regularization. Our regularization can be implemented as an architectural plug-in and integrated with an arbitrary network. We find that OR helps stabilize the training process and leads to faster convergence and better generalization.
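The abstract states the objective only at a high level. As a minimal sketch, the PyTorch snippet below approximates the stated goal (mean pairwise filter angle pushed towards 90°, variance towards 0) as an explicit penalty term. Note the paper itself realizes this implicitly through an LSH-inspired architectural plug-in rather than an explicit loss, so this is an illustrative approximation and not the authors' method; the function name `orthogonality_penalty` and the 0.1 regularization weight are assumptions.

```python
# Illustrative sketch (not the paper's plug-in): an explicit penalty
# that pushes the mean of pairwise filter angles toward 90 degrees
# and their variance toward 0, matching the abstract's stated target.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def orthogonality_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of pairwise filter angles from 90 degrees.

    weight: conv kernel of shape (out_channels, in_channels, kH, kW).
    Returns the scalar (mean_angle - pi/2)^2 + var(angle).
    """
    # Flatten each output filter into a vector and L2-normalize it.
    w = F.normalize(weight.flatten(start_dim=1), dim=1)
    # Cosine similarity between every pair of filters; clamp so that
    # acos stays numerically stable at the boundaries.
    cos = (w @ w.t()).clamp(-1 + 1e-7, 1 - 1e-7)
    n = cos.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=cos.device)
    angles = torch.acos(cos[off_diag])  # pairwise angles in radians
    target = math.pi / 2                # 90 degrees
    return (angles.mean() - target) ** 2 + angles.var()

# Usage: add the penalty for each conv layer to the task loss.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
x = torch.randn(2, 3, 32, 32)
task_loss = model(x).mean()  # stand-in for a real training loss
reg = sum(orthogonality_penalty(m.weight)
          for m in model.modules() if isinstance(m, nn.Conv2d))
loss = task_loss + 0.1 * reg  # 0.1 is an illustrative weight
loss.backward()
```

In the paper's formulation the regularization is built into the architecture itself, so no extra loss term or weighting hyperparameter is needed; the explicit penalty above only mirrors the same angular target for clarity.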

Authors (4)
  1. Ziming Zhang (59 papers)
  2. Wenchi Ma (11 papers)
  3. Yuanwei Wu (21 papers)
  4. Guanghui Wang (179 papers)
Citations (10)
