State Polytopes Related to Two Classes of Combinatorial Neural Codes

Published 20 Aug 2018 in math.CO and math.AC (arXiv:1808.06721v2)

Abstract: Combinatorial neural codes are $0/1$ vectors used to model the co-firing patterns of a set of place cells in the brain. One wide-open problem in this area is to determine when a given code can be algorithmically drawn in the plane as a Venn diagram-like figure. A sufficient condition is that the code has the property of being $k$-inductively pierced. Gross, Obatake, and Youngs recently used toric algebra to show that a code on three neurons is $1$-inductively pierced if and only if the toric ideal is trivial or generated by quadratics. No result of the same generality is known for more neurons; part of the difficulty comes from the large number of possible codewords as the number of neurons grows. In this article, we study two infinite classes of combinatorial neural codes in detail. For each code, we explicitly compute its universal Gr\"obner basis. For the first class, this is done by recognizing that the codewords form a Lawrence-type matrix. For the second class, this is done by showing that the matrix is totally unimodular. These computations allow one to compute the state polytopes of the corresponding toric ideals, from which all distinct initial ideals may be computed efficiently. Moreover, we show that the state polytopes are combinatorially equivalent to well-known polytopes: the permutohedron and the stellohedron.
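
The total unimodularity argument mentioned for the second class can be made concrete with a small sketch. The snippet below is a minimal, illustrative Python check, not the authors' code: it brute-forces every square submatrix of a toy $0/1$ codeword matrix and verifies that each determinant lies in $\{-1, 0, 1\}$. The matrix `codeword_matrix` is a hypothetical example for illustration only, not one of the two code families studied in the paper.

```python
from itertools import combinations

def det(a):
    """Integer determinant via cofactor expansion (fine for tiny matrices)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total

def is_totally_unimodular(matrix):
    """Brute-force check: every square submatrix must have determinant
    in {-1, 0, 1}. Exponential time, so only suitable for small examples."""
    m, n = len(matrix), len(matrix[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[matrix[i][j] for j in cols] for i in rows]
                if abs(det(sub)) > 1:
                    return False
    return True

# Hypothetical toy code on 3 neurons with codewords 100, 010, 001, 110,
# written as the columns of a 0/1 matrix (rows = neurons).
codeword_matrix = [
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
print(is_totally_unimodular(codeword_matrix))  # prints True for this toy matrix
```

In the paper, establishing total unimodularity for the second class of codes is what makes the explicit computation of the universal Gr\"obner basis, and hence of the state polytope, tractable.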
