Beyond Optimization: Intelligence as Metric-Topology Factorization under Geometric Incompleteness
Contemporary ML often equates intelligence with optimization: searching for solutions within a fixed representational geometry. This works in static regimes but breaks under distributional shift, task permutation, and continual learning, where even mild topological changes can invalidate learned solutions and trigger catastrophic forgetting. We propose Metric-Topology Factorization (MTF) as a unifying geometric principle: intelligence is not navigation through a fixed maze, but the ability to reshape representational geometry so desired behaviors become stable attractors. Learning corresponds to metric contraction (a controlled deformation of Riemannian structure), while task identity and environmental variation are encoded topologically and stored separately in memory. We show any fixed metric is geometrically incomplete: for any local metric representation, some topological transformations make it singular or incoherent, implying an unavoidable stability-plasticity tradeoff for weight-based systems. MTF resolves this by factorizing stable topology from plastic metric warps, enabling rapid adaptation via geometric switching rather than re-optimization. Building on this, we introduce the Topological Urysohn Machine (TUM), implementing MTF through memory-amortized metric inference (MAMI): spectral task signatures index amortized metric transformations, letting a single learned geometry be reused across permuted, reflected, or parity-altered environments. This explains robustness to task reordering, resistance to catastrophic forgetting, and generalization across transformations that defeat conventional continual learning methods (e.g., EWC).
💡 Research Summary
The paper challenges the prevailing view that intelligence in modern machine learning is synonymous with optimization within a fixed representational geometry. While this “search‑centric” paradigm works well in static environments, it collapses under distributional shift, task permutation, reflection, or parity transformations, leading to catastrophic forgetting. The authors formalize this limitation as “Geometric Incompleteness”: on any semantically complex manifold (i.e., one with non‑trivial intermediate homology), a globally contracting, saddle‑free loss landscape cannot exist. Using Riemannian geometry and Morse theory, they prove that any smooth energy function must contain at least one intermediate‑index critical point (a saddle), and that arbitrarily small perturbations will introduce such saddles. Consequently, a fixed metric representation is inherently unable to accommodate topological changes without incurring instability or excessive adaptation cost.
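The Morse-theoretic core of this argument is standard and worth stating explicitly. On a smooth compact manifold M, the Morse inequalities bound the number of critical points of any Morse function from below by the Betti numbers (the notation below is the textbook form, not copied from the paper):

```latex
% Morse inequalities: for a Morse function f on a compact manifold M,
% the number c_k(f) of index-k critical points dominates the k-th
% Betti number b_k(M) = rank H_k(M):
c_k(f) \;\ge\; b_k(M), \qquad k = 0, 1, \dots, \dim M.
```

If M has non-trivial intermediate homology, i.e. b_k(M) > 0 for some 0 < k < dim M, then every smooth energy function on M must carry at least one intermediate-index critical point, a saddle. This is precisely the obstruction the summary invokes: no globally contracting, saddle-free loss landscape can exist on such a manifold.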
To overcome this, the authors propose Metric‑Topology Factorization (MTF). MTF decomposes a representation into two complementary components: a topological index that captures task identity and global structure, stored separately in memory, and a metric component that governs local geometry and can be actively contracted or re‑oriented during learning. Learning becomes a controlled deformation of the metric (metric contraction) rather than repeated gradient descent on a static landscape. This separation resolves the classic stability‑plasticity dilemma: the topology remains invariant, while the metric can be switched instantly to suit a new task, eliminating the need for costly weight re‑optimization.
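The factorization described above can be illustrated with a toy sketch. All names here (`FactorizedRepresentation`, `contract`, the task-key dictionary) are illustrative assumptions, not the paper's API: a discrete topological index selects a plastic metric stored separately from it, and learning deforms only the metric, by a congruence that preserves positive-definiteness, while the index never changes.

```python
import numpy as np

class FactorizedRepresentation:
    """Toy sketch of Metric-Topology Factorization (MTF): a discrete
    topological index (task key) selects a plastic metric stored
    separately; learning contracts the metric, the key is invariant."""

    def __init__(self, dim):
        self.dim = dim
        self.metrics = {}  # topological index -> SPD metric matrix

    def contract(self, key, direction, rate=0.5):
        """Metric contraction: shrink the metric along a task-relevant
        direction via the congruence G -> P G P with P = I - rate*vv^T,
        which preserves positive-definiteness for 0 <= rate < 1."""
        G = self.metrics.setdefault(key, np.eye(self.dim))
        v = np.asarray(direction, dtype=float)
        v = v / np.linalg.norm(v)
        P = np.eye(self.dim) - rate * np.outer(v, v)
        self.metrics[key] = P @ G @ P

    def distance(self, key, x, y):
        """Task-conditioned distance under the stored metric
        (a Mahalanobis form as a flat stand-in for Riemannian length)."""
        G = self.metrics.get(key, np.eye(self.dim))
        d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return float(np.sqrt(d @ G @ d))
```

Contracting along one direction shortens distances along it (making states there "closer", i.e. more attractor-like) while leaving orthogonal directions, and every other task's metric, untouched.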
Building on MTF, the paper introduces the Topological Urysohn Machine (TUM), an architecture that implements MTF via Memory‑Amortized Metric Inference (MAMI). TUM extracts spectral task signatures to index topological information and employs a Riemannian Unit (RU) that stores amortized metric transformations. When a new task arrives, the system looks up the appropriate metric warp from memory and applies it directly, turning continual learning into a geometric‑control problem rather than a weight‑correction problem.
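The lookup step can be sketched as follows. This is a minimal illustration under my own assumptions, not the TUM implementation: the signature here is the eigenvalue spectrum of the input covariance (invariant under coordinate permutations and reflections, so transformed variants of a task index the same stored warp), and the "memory" is a nearest-neighbor table of (signature, warp) pairs.

```python
import numpy as np

def spectral_signature(X, k=5):
    """Illustrative task signature: top-k eigenvalues of the input
    covariance. Orthogonal transforms of the coordinates (permutations,
    reflections) leave this spectrum unchanged."""
    cov = np.cov(X, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return eig[:k]

class MetricMemory:
    """Sketch of memory-amortized metric inference (MAMI): store
    (signature, warp) pairs; retrieve the warp whose stored signature
    is nearest to a new task's signature, instead of re-optimizing."""

    def __init__(self):
        self.entries = []  # list of (signature, warp matrix)

    def store(self, sig, warp):
        self.entries.append((np.asarray(sig), np.asarray(warp)))

    def lookup(self, sig):
        sig = np.asarray(sig)
        dists = [np.linalg.norm(sig - s) for s, _ in self.entries]
        return self.entries[int(np.argmin(dists))][1]
```

Because the signature is spectrum-based, a permuted copy of a stored task retrieves the original task's warp directly, which is the "geometric switching" behavior attributed to TUM: adaptation by lookup rather than by weight correction.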
Empirical evaluations on CIFAR‑10/100 variants with permuted, reflected, and parity‑altered inputs, as well as standard continual‑learning benchmarks, demonstrate that TUM virtually eliminates catastrophic forgetting and adapts to new topologies with only a few metric switches. Compared to Elastic Weight Consolidation, MAS, SI, and other regularization‑based methods, TUM requires far fewer training steps and less memory, because the same underlying geometry is reused across many topological configurations.
The contributions are fourfold: (1) a formal proof of geometric incompleteness for fixed‑metric systems; (2) the MTF principle that cleanly separates stable topology from plastic metric control; (3) the TUM architecture and MAMI mechanism that operationalize MTF; and (4) extensive experiments showing superior robustness to topological transformations and resistance to forgetting. By redefining intelligence as the ability to reshape metric structure while preserving topological indices, the work opens a new paradigm for building adaptable, memory‑augmented AI systems that can handle continual change without the brittleness of traditional optimization‑only approaches.