
Modular Networks: Learning to Decompose Neural Computation (1811.05249v1)

Published 13 Nov 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Scaling model capacity has been vital in the success of deep learning. For a typical network, necessary compute resources and training time grow dramatically with model size. Conditional computation is a promising way to increase the number of parameters with a relatively small increase in resources. We propose a training algorithm that flexibly chooses neural modules based on the data to be processed. Both the decomposition and modules are learned end-to-end. In contrast to existing approaches, training does not rely on regularization to enforce diversity in module use. We apply modular networks both to image recognition and language modelling tasks, where we achieve superior performance compared to several baselines. Introspection reveals that modules specialize in interpretable contexts.
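The abstract's core idea, routing each input through a data-dependent choice of learned modules, can be made concrete with a small sketch. The code below is a minimal PyTorch-style illustration, not the paper's actual training algorithm: it assumes a hypothetical `ModularLayer` in which a linear controller scores a pool of candidate modules and each example is processed by its highest-scoring module. Note that the hard argmax in this naive version blocks gradients to the controller; learning the decomposition end-to-end without such workarounds or diversity regularizers is precisely the problem the paper's training procedure addresses.

```python
# Illustrative sketch only: conditional computation over a pool of modules.
# All names (ModularLayer, controller, modules_pool) are hypothetical and
# this is not the training algorithm proposed in the paper.
import torch
import torch.nn as nn


class ModularLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_modules: int):
        super().__init__()
        # A pool of candidate modules; each is a small feed-forward network.
        self.modules_pool = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
             for _ in range(num_modules)]
        )
        # A controller that scores every module for each input.
        self.controller = nn.Linear(in_dim, num_modules)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.controller(x)            # (batch, num_modules)
        choice = scores.argmax(dim=-1)         # hard, per-example module choice
        # Run all modules, then keep only the selected output per example.
        outputs = torch.stack([m(x) for m in self.modules_pool], dim=1)
        return outputs[torch.arange(x.size(0)), choice]   # (batch, out_dim)


# Usage: route a batch of 4 inputs, each through one of 3 candidate modules.
layer = ModularLayer(in_dim=16, out_dim=8, num_modules=3)
y = layer(torch.randn(4, 16))
print(y.shape)  # torch.Size([4, 8])
```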

Authors (3)
  1. Louis Kirsch (21 papers)
  2. Julius Kunze (8 papers)
  3. David Barber (54 papers)
Citations (104)
