Sparsely Grouped Input Variables for Neural Networks (1911.13068v1)

Published 29 Nov 2019 in cs.LG and stat.ML

Abstract: In genomic analysis, biomarker discovery, image recognition, and other systems involving machine learning, input variables can often be organized into different groups by their source or semantic category. Eliminating some groups of variables can expedite the process of data acquisition and avoid over-fitting. Researchers have used the group lasso to ensure group sparsity in linear models and have extended it to create compact neural networks in meta-learning. Different from previous studies, we use multi-layer non-linear neural networks to find sparse groups for input variables. We propose a new loss function to regularize parameters for grouped input variables, design a new optimization algorithm for this loss function, and test these methods in three real-world settings. We achieve group sparsity for three datasets, maintaining satisfying results while excluding one nucleotide position from an RNA splicing experiment, excluding 89.9% of stimuli from an eye-tracking experiment, and excluding 60% of image rows from an experiment on the MNIST dataset.
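The abstract's core idea, penalizing first-layer weights group-wise so that entire groups of input variables can be dropped, can be illustrated with a group-lasso penalty. The sketch below is an assumption for illustration only: the function name, the group encoding, and the `sqrt(|g|)` size weighting are common group-lasso conventions, not the paper's exact loss function or optimization algorithm.

```python
import numpy as np

def group_lasso_penalty(W, groups):
    """Group-lasso (2,1-norm) penalty over grouped input variables.

    W: first-layer weight matrix, shape (n_inputs, n_hidden).
    groups: list of index lists, one per input-variable group.
    Returns sum over groups g of sqrt(|g|) * ||W[g, :]||_F.
    Driving a group's block norm to zero removes that whole group
    of inputs from the network (the group sparsity described above).
    """
    return sum(np.sqrt(len(g)) * np.linalg.norm(W[np.asarray(g), :])
               for g in groups)

# Illustrative example: 4 input variables in two groups, 3 hidden units.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
penalty = group_lasso_penalty(W, [[0, 1], [2, 3]])
```

In training, this penalty would be added (scaled by a regularization coefficient) to the task loss; blocks whose norm the optimizer drives to zero correspond to excluded input groups, e.g. the dropped nucleotide position or image rows mentioned in the abstract.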

Authors (7)
  1. Beibin Li (16 papers)
  2. Nicholas Nuechterlein (3 papers)
  3. Erin Barney (2 papers)
  4. Caitlin Hudac (1 paper)
  5. Pamela Ventola (17 papers)
  6. Linda Shapiro (23 papers)
  7. Frederick Shic (7 papers)
Citations (1)