Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability (2204.10598v3)

Published 22 Apr 2022 in cs.CV and cs.LG

Abstract: Sparsely-gated Mixture of Expert (MoE) layers have recently been applied successfully for scaling large transformers, especially for language modeling tasks. An intriguing side effect of sparse MoE layers is that they convey inherent interpretability to a model via natural expert specialization. In this work, we apply sparse MoE layers to CNNs for computer vision tasks and analyze the resulting effect on model interpretability. To stabilize MoE training, we present both soft and hard constraint-based approaches. With hard constraints, the weights of certain experts are allowed to become zero, while soft constraints balance the contribution of experts with an additional auxiliary loss. As a result, soft constraints handle expert utilization better and support the expert specialization process, while hard constraints maintain more generalized experts and increase overall model performance. Our findings demonstrate that experts can implicitly focus on individual sub-domains of the input space. For example, experts trained for CIFAR-100 image classification specialize in recognizing different domains such as flowers or animals without previous data clustering. Experiments with RetinaNet and the COCO dataset further indicate that object detection experts can also specialize in detecting objects of distinct sizes.
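To make the mechanism described above concrete, here is a minimal sketch of a sparsely-gated MoE layer over convolutional experts with top-k gating and an auxiliary load-balancing loss (the "soft constraint" idea). This is not the authors' implementation; the class name `MoEConvLayer`, the per-image gating via global average pooling, and the specific auxiliary loss (squared coefficient of variation of expert importance) are illustrative assumptions.

```python
# Hedged sketch of a sparsely-gated convolutional MoE layer (assumed design,
# not the paper's code): top-k gating plus a soft load-balancing auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEConvLayer(nn.Module):
    def __init__(self, in_ch, out_ch, num_experts=4, k=1):
        super().__init__()
        self.k = k
        self.num_experts = num_experts
        # Each expert is a small convolutional block.
        self.experts = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
             for _ in range(num_experts)]
        )
        # The gate produces one logit per expert from a pooled image descriptor.
        self.gate = nn.Linear(in_ch, num_experts)

    def forward(self, x):
        # Global-average-pool the feature map to get a per-image gate input.
        pooled = x.mean(dim=(2, 3))                      # (B, in_ch)
        probs = F.softmax(self.gate(pooled), dim=-1)     # (B, E)

        # Sparse gating: keep only the top-k experts per image.
        topk_vals, topk_idx = probs.topk(self.k, dim=-1)
        sparse_gates = torch.zeros_like(probs).scatter(-1, topk_idx, topk_vals)
        sparse_gates = sparse_gates / sparse_gates.sum(dim=-1, keepdim=True)

        # Weighted sum of expert outputs (computed densely here for clarity;
        # an efficient version would dispatch only to the selected experts).
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C, H, W)
        out = (sparse_gates[:, :, None, None, None] * expert_outs).sum(dim=1)

        # Soft constraint: auxiliary loss encouraging balanced expert usage,
        # here the squared coefficient of variation of expert importance.
        importance = probs.sum(dim=0)                    # (E,)
        aux_loss = importance.var(unbiased=False) / (importance.mean() ** 2 + 1e-8)
        return out, aux_loss
```

In training, such an auxiliary loss would typically be added to the task loss with a small weight, e.g. `loss = task_loss + 0.01 * aux_loss`, so that the gate is nudged toward using all experts without overriding the main objective.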

Authors (4)
  1. Svetlana Pavlitska (16 papers)
  2. Christian Hubschneider (5 papers)
  3. Lukas Struppek (21 papers)
  4. J. Marius Zöllner (95 papers)
Citations (10)