Reasoning about Linguistic Regularities in Word Embeddings using Matrix Manifolds

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Recent work has explored methods for learning continuous vector space word representations reflecting the underlying semantics of words. Simple vector space arithmetic using cosine distances has been shown to capture certain types of analogies, such as reasoning about plurals from singulars, past tense from present tense, etc. In this paper, we introduce a new approach to capturing analogies in continuous word representations, based on modeling not just individual word vectors, but rather the subspaces spanned by groups of words. We exploit the property that the set of subspaces in n-dimensional Euclidean space forms a curved manifold called the Grassmannian, a quotient space of the Lie group of rotations in n dimensions. Based on this mathematical model, we develop a modified cosine distance model based on geodesic kernels that captures relation-specific distances across word categories. Our experiments on analogy tasks show that our approach performs significantly better than previous approaches.


💡 Research Summary

The paper introduces a novel framework for solving word analogy tasks by modeling groups of related words as low-dimensional subspaces rather than as individual vectors. Recognizing that different linguistic relations (e.g., singular-plural, past-present, country-currency) exhibit distinct geometric transformations in the embedding space, the authors propose to represent each relation by two subspaces: one for the “head” words \(A\) and one for the “tail” words \(B\). These subspaces are obtained via dimensionality reduction (PCA) from pre-trained word embeddings (either Skip-Gram with Negative Sampling or PPMI-SVD), yielding orthonormal basis matrices \(P_H\) and \(P_T\) of size \(D \times d\), where \(D\) is the embedding dimension and \(d \ll D\).
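The basis-extraction step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the helper name `subspace_basis` is hypothetical, and the toy data is random rather than real word embeddings.

```python
import numpy as np

def subspace_basis(word_vectors, d):
    """Return a D x d orthonormal basis for the subspace spanned by a
    group of word vectors (rows of `word_vectors`), via truncated SVD.
    This plays the role of P_H or P_T in the summary above."""
    X = np.asarray(word_vectors, dtype=float)
    X = X - X.mean(axis=0)               # centre the vectors, as in PCA
    # Rows of Vt are orthonormal principal directions in R^D.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d].T                      # D x d, orthonormal columns

# Toy usage: 10 "head" words in a 50-dimensional embedding space.
rng = np.random.default_rng(0)
P_H = subspace_basis(rng.standard_normal((10, 50)), d=3)
print(P_H.shape)                         # (50, 3)
```

Because the columns of `P_H` come from an SVD, they are orthonormal by construction, so `P_H.T @ P_H` is the identity up to floating-point error.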

The set of all \(d\)-dimensional subspaces of \(\mathbb{R}^D\) forms the Grassmannian manifold \(\mathcal{G}(d, D)\). The authors exploit this manifold structure by computing the principal angles \(\theta_i\) between the head and tail subspaces via a singular value decomposition. Using these angles, they construct a geodesic flow kernel (GFK) that integrates over the shortest-path geodesic connecting \(P_H\) and \(P_T\) on the manifold. The kernel \(G_R\) is a positive-definite matrix that captures the entire continuum of intermediate subspaces between the two relation endpoints.
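The principal angles in question have a standard computation: the singular values of \(P_H^{\top} P_T\) are the cosines \(\cos\theta_i\). A small sketch (again illustrative, not the paper's code; the full GFK construction is not reproduced here):

```python
import numpy as np

def principal_angles(P_H, P_T):
    """Principal angles theta_i between the subspaces spanned by the
    orthonormal D x d bases P_H and P_T: the singular values of
    P_H^T P_T equal cos(theta_i)."""
    s = np.linalg.svd(P_H.T @ P_T, compute_uv=False)
    # Clip for numerical safety before taking arccos.
    return np.arccos(np.clip(s, -1.0, 1.0))

# Sanity checks: a subspace makes zero angle with itself, and the
# x- and y-axes of R^3 meet at pi/2.
e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
print(principal_angles(e1, e1))   # [0.]
print(principal_angles(e1, e2))   # [1.57079633]
```

These angles are exactly the \(\theta_i\) that parameterize the geodesic between the two subspaces on \(\mathcal{G}(d, D)\).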

With \(G_R\) in hand, a relation-specific similarity measure is defined as a modified cosine distance, replacing the Euclidean inner product with the one induced by the kernel:

\[
\mathrm{sim}_R(x, y) \;=\; \frac{x^{\top} G_R\, y}{\sqrt{x^{\top} G_R\, x}\,\sqrt{y^{\top} G_R\, y}},
\]

where \(x\) and \(y\) are the embedding vectors being compared under relation \(R\).
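A kernel-modified cosine similarity of this kind can be sketched as follows. This is an assumption-laden illustration: the construction of \(G_R\) itself is omitted, and with \(G_R = I\) the measure reduces to ordinary cosine similarity.

```python
import numpy as np

def kernel_cosine(x, y, G_R):
    """Cosine similarity under the inner product induced by a
    positive-definite relation kernel G_R (illustrative sketch)."""
    num = x @ G_R @ y
    return num / (np.sqrt(x @ G_R @ x) * np.sqrt(y @ G_R @ y))

# With G_R = I this is the ordinary cosine similarity.
x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])
print(kernel_cosine(x, y, np.eye(2)))   # ≈ 0.7071
```

Because \(G_R\) is positive definite, the denominator is always strictly positive for nonzero vectors, so the measure is well defined.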

