On Multilinear Forms: Bias, Correlation, and Tensor Rank (1804.09124v2)

Published 24 Apr 2018 in cs.CC

Abstract: In this paper, we prove new relations between the bias of multilinear forms, the correlation between multilinear forms and lower degree polynomials, and the rank of tensors over $GF(2) = \{0,1\}$. We show the following results for multilinear forms and tensors.

1. Correlation bounds: We show that a random $d$-linear form has exponentially low correlation with low-degree polynomials. More precisely, for $d \ll 2^{o(k)}$, we show that a random $d$-linear form $f(X_1, X_2, \dots, X_d) : \left(GF(2)^{k}\right)^{d} \rightarrow GF(2)$ has correlation $2^{-k(1-o(1))}$ with any polynomial of degree at most $d/10$. This result is proved by giving near-optimal bounds on the bias of a random $d$-linear form, which is in turn proved by giving near-optimal bounds on the probability that a random rank-$t$ $d$-linear form is identically zero.

2. Tensor-rank vs Bias: We show that if a $d$-dimensional tensor has small rank, then the bias of the associated $d$-linear form is large. More precisely, given any $d$-dimensional tensor $$T : \underbrace{[k] \times \ldots \times [k]}_{\text{$d$ times}} \to GF(2)$$ of rank at most $t$, the bias of the associated $d$-linear form $$f_T(X_1, \ldots, X_d) := \sum_{(i_1, \dots, i_d) \in [k]^d} T(i_1, i_2, \ldots, i_d)\, X_{1,i_1} \cdot X_{2,i_2} \cdots X_{d,i_d}$$ is at least $\left(1 - \frac{1}{2^{d-1}}\right)^{t}$.

The above bias vs tensor-rank connection suggests a natural approach to proving nontrivial tensor-rank lower bounds for $d = 3$. In particular, we use this approach to prove that the finite field multiplication tensor has tensor rank at least $3.52k$, matching the best known lower bound for any explicit tensor in three dimensions over $GF(2)$.
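
The abstract uses the bias of a multilinear form without defining it inline; the standard notion is $\mathrm{bias}(f) = \left| \Pr_x[f(x) = 0] - \Pr_x[f(x) = 1] \right|$ over uniformly random inputs. Below is a minimal, illustrative Python sketch (not from the paper) that brute-forces this bias for the form $f_T$ associated with a small tensor; the example tensor, the choice of $k$ and $d$, and the helper names are hypothetical, and the exhaustive enumeration is only feasible for tiny instances.

```python
# Minimal sketch (not from the paper): brute-force the bias of the d-linear
# form f_T associated with a small tensor T over GF(2).
import itertools
import random


def f_T(T, X, k, d):
    """Evaluate f_T(X_1,...,X_d) = sum over (i_1,...,i_d) of T[i_1,...,i_d] * X_1[i_1] * ... * X_d[i_d] in GF(2)."""
    total = 0
    for idx in itertools.product(range(k), repeat=d):
        term = T[idx]
        for j in range(d):
            term &= X[j][idx[j]]
        total ^= term
    return total


def bias(T, k, d):
    """Exact bias |Pr[f_T = 0] - Pr[f_T = 1]| by enumerating all 2^(k*d) inputs (tiny k, d only)."""
    vectors = list(itertools.product((0, 1), repeat=k))  # all of GF(2)^k
    zeros = 0
    for X in itertools.product(vectors, repeat=d):       # all input tuples (X_1, ..., X_d)
        if f_T(T, X, k, d) == 0:
            zeros += 1
    total = len(vectors) ** d
    return abs(2 * zeros - total) / total


if __name__ == "__main__":
    k, d = 3, 3  # hypothetical small instance
    # Hypothetical example tensor: uniformly random 0/1 entries, indexed by [k]^d.
    T = {idx: random.randint(0, 1) for idx in itertools.product(range(k), repeat=d)}
    print("bias(f_T) =", bias(T, k, d))
```

Per the paper's rank-vs-bias result, replacing the random tensor above with one of low rank should yield a noticeably large bias, whereas a typical random tensor yields bias close to zero.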

Citations (8)
