Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons


We analyze linear independence of rank one tensors produced by tensor powers of randomly perturbed vectors. This enables efficient decomposition of sums of high-order tensors. Our analysis builds upon [BCMV14] but allows for a wider range of perturbation models, including discrete ones. We give an application to recovering assemblies of neurons. Assemblies are large sets of neurons representing specific memories or concepts. The size of the intersection of two assemblies has been shown in experiments to represent the extent to which these memories co-occur or these concepts are related; the phenomenon is called association of assemblies. This suggests that an animal’s memory is a complex web of associations, and poses the problem of recovering this representation from cognitive data. Motivated by this problem, we study the following more general question: Can we reconstruct the Venn diagram of a family of sets, given the sizes of their $\ell$-wise intersections? We show that as long as the family of sets is randomly perturbed, it is enough for the number of measurements to be polynomially larger than the number of nonempty regions of the Venn diagram to fully reconstruct the diagram.
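As a concrete illustration of the measurement model in the abstract (a hypothetical toy family of sets, not from the paper), the forward direction computes the sizes of all ℓ-wise intersections; the paper's reconstruction problem is the inverse: recovering the Venn-diagram region sizes from such measurements of a randomly perturbed family.

```python
from itertools import combinations

# Toy example (assumed data, not from the paper): measure the sizes of
# all ell-wise intersections of a small set family.
family = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {4, 5, 6, 7},
}
ell = 2

# One measurement per ell-subset of the family: |S_1 ∩ ... ∩ S_ell|.
measurements = {
    tuple(names): len(set.intersection(*(family[s] for s in names)))
    for names in combinations(sorted(family), ell)
}
print(measurements)  # {('A', 'B'): 2, ('A', 'C'): 1, ('B', 'C'): 2}
```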


💡 Research Summary

The paper studies the linear independence of rank‑one tensors formed by taking tensor powers of vectors that have been randomly perturbed. Building on the smoothed‑analysis framework of Bhaskara et al. (2014), the authors broaden the class of admissible perturbations from continuous Gaussian noise to a much larger family they call (δ, p)‑nondeterministic distributions. A distribution is (δ, p)‑nondeterministic if, for every coordinate and any interval of width 2δ, the conditional probability that the coordinate falls in that interval (given the other coordinates) is at most p. This definition captures both discrete perturbations such as independent bit‑flips on {0,1}ⁿ and continuous perturbations such as Gaussian noise of arbitrary variance.
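The bit-flip case can be checked empirically. In the sketch below (illustrative parameters, not the paper's code), a coordinate flipped with probability ρ lands in any interval of width 2δ < 1 with probability at most p = max(ρ, 1 − ρ), since such an interval can capture at most one of the two values 0 and 1.

```python
import numpy as np

# Hedged sketch: empirically check the (delta, p)-nondeterminism condition
# for i.i.d. bit flips on {0,1}^n. Parameters below are illustrative.
rng = np.random.default_rng(0)
n, rho, delta = 8, 0.1, 0.25            # dimension, flip prob, half-width
base = rng.integers(0, 2, size=n)        # unperturbed 0/1 vector
flips = rng.random((100_000, n)) < rho   # independent bit flips
samples = np.logical_xor(base, flips).astype(float)

# Mass that coordinate 0 places in the width-2*delta interval around 1.
# The condition requires this to be at most p = max(rho, 1 - rho).
mass = np.mean(np.abs(samples[:, 0] - 1.0) <= delta)
p = max(rho, 1 - rho)
print(mass, p)
```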

The central technical contribution is the introduction of “echelon trees,” a high‑order analogue of Gaussian elimination, which allows the authors to prove that, with high probability, the set of tensors
a(u)=χ^{(1)}(u)⊗⋯⊗χ^{(ℓ)}(u)
is robustly linearly independent when each χ^{(i)}(u) is drawn from a (δ, p)‑nondeterministic distribution. Formally, if |U| ≤ (cn)^ℓ for a constant c < 1, then with high probability the minimum singular value σ_min(A) of the matrix A whose columns are the flattened tensors a(u) is bounded below by an inverse‑polynomial function of the problem parameters; the precise probability bound is stated in the paper.
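A minimal numerical sketch of this claim (illustrative parameters and a Gaussian perturbation, one of the admissible models; not the paper's construction): perturb 0/1 base vectors, flatten each rank-one tensor x ⊗ ⋯ ⊗ x into a column of A, and check that σ_min(A) is bounded away from zero.

```python
import numpy as np

# Hedged sketch: sigma_min of a matrix whose columns are flattened
# tensor powers of randomly perturbed 0/1 vectors.
rng = np.random.default_rng(1)
n, ell, m = 6, 3, 10                      # dimension, tensor order, |U|

cols = []
for _ in range(m):
    base = rng.integers(0, 2, size=n).astype(float)
    x = base + 0.1 * rng.standard_normal(n)   # perturbed vector chi(u)
    t = x
    for _ in range(ell - 1):
        t = np.outer(t, x).ravel()            # flatten x ⊗ ... ⊗ x
    cols.append(t)

A = np.column_stack(cols)                      # n^ell-by-m matrix
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
print(A.shape, sigma_min)                      # sigma_min stays > 0
```

Here m = 10 is far below n^ℓ = 216, so the flattened tensors are generically linearly independent and σ_min remains bounded away from zero, in line with the theorem's regime |U| ≤ (cn)^ℓ.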

