Backpropagation training in adaptive quantum networks
We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or \emph{adaptive quantum networks}. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each of which is formally merged into appropriate linear superposition within a predefined, decoherence-free subspace. Quantum parallelism facilitates simultaneous training and revision of the system within this coherent state space, resulting in accelerated convergence to a stable network attractor under consequent iteration of the implemented backpropagation algorithm. Parallel evolution of linear superposed networks incorporating backpropagation training provides quantitative, numerical indications for optimization of both single-neuron activation functions and optimal reconfiguration of whole-network quantum structure.
Artificial neural networks are routinely applied to unstructured or multivariate machine learning problems such as high-speed pattern recognition, image processing and associative pattern matching. Due to their novelty, however, quantum neural networks remain relatively uncharted in the artificial intelligence and quantum algorithms communities. Several groups [1,2,3,4] have outlined preliminary quantum network architectures; each approach contributes significant insights into application methodology, alternative implementations and the underlying modal interpretation. However, widespread and effective universal implementation of quantum neural networks remains an open research question, as both theoretical and experimental toolsets are still in the incipient stages of development and maturation.
A common underlying thread shared by quantum network proposals to date is that each implements superposition of neural transition functions upon a single, fixed topological foundation. In this letter, we expand upon a framework initially introduced in [5], which diverges from prior network models by fine-tuning not only the neural transition functions, but also by fully reconfiguring the connective physical topology of the quantum network itself. The mathematical formalism employed for this approach descends from the Rota algebraic spatialization procedure for evolving reticular quantum structures, initially developed in [6] to address superposed topological manifolds of spacetime foam as described in quantum gravity. Consequently, not only the neuron weightings and transition functions, but also the linear superposition of the network topology itself, are subject to training and revision within this coherent state space.
We formally incorporate the standard backpropagation training algorithm introduced by Werbos in [7]. Repeated iteration of the training series results in convergence of the sample output to a stable network attractor corresponding to the lowest-energy configuration between the given input and desired output layers. Following convergence to this minimum, the superposed linear network is converted upon measurement into a conventional, classical neural network by application of the Rota algebraic projection formalism.
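For orientation, the classical procedure being incorporated can be sketched as an ordinary gradient-descent loop over a linear network whose weight matrix is masked by a fixed connectivity template; the dimensions, targets and learning rate below are illustrative assumptions rather than values taken from the text.

\begin{verbatim}
import numpy as np

# Illustrative sketch (not the authors' implementation): gradient-descent
# training of a linear map y = W x whose weight matrix W is constrained to
# the support of a fixed template, so zero entries stay zero.
rng = np.random.default_rng(0)

template = np.array([[1., 1., 0.],
                     [0., 1., 1.],
                     [0., 0., 1.]])        # hypothetical connectivity mask
W = rng.normal(size=(3, 3)) * template     # weights 'follow' the template
X = rng.normal(size=(3, 32))               # 32 hypothetical training inputs
Y = rng.normal(size=(3, 3)) @ X            # hypothetical target outputs

lr = 0.05
for step in range(2000):
    E = W @ X - Y                          # output-layer error
    grad = E @ X.T / X.shape[1]            # gradient of the mean squared error
    W -= lr * grad * template              # masked update preserves the topology
print("final mean squared error:", np.mean(E ** 2))
\end{verbatim}

In the scheme outlined above, the analogous update is carried out over a coherent ensemble of admissible topologies held in superposition, rather than over a single fixed template.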
Traditional computers execute algorithms: they follow a specific set of instructions to arrive at a solution to a given problem. Artificial neural networks, by contrast, learn by trial and error, training through example. In a specific class of neural networks, multilayered feedforward networks, signals are allowed to propagate only forward: there is no feedback process in the training phase. These connection patterns are formally classified as partially ordered sets (posets), or equivalently as directed acyclic graphs (dags).
In this letter, we convert the primary structural component of artificial neural networks, the directed acyclic graph, into a set of linear matrices. The training optimization protocol is then recast in terms of unitary matrices and matrix operations, so that the result of training optimization is a matrix rather than a directed acyclic graph. Upon derivation of the appropriate solution, the Rota algebraic spatialization procedure [8] is applied to recover the initial graph structure.
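As a minimal sketch of this conversion, assuming a small hypothetical topology (the edge list and neuron ordering below are illustrative, not taken from the text), the directed acyclic graph is flattened into an adjacency matrix together with its reachability relation:

\begin{verbatim}
import numpy as np

# Hypothetical feedforward topology: a directed acyclic graph given as an
# edge list over neurons 0..3, ordered so that every edge points 'forward'.
edges = [(0, 2), (1, 2), (1, 3), (2, 3)]
n = 4

# Adjacency matrix: entry (j, k) marks a direct connection j -> k.
adj = np.zeros((n, n))
for j, k in edges:
    adj[j, k] = 1.0

# Reachability (transitive closure): entry (j, k) marks any directed path
# from j to k; a support that is closed in this sense is what makes the
# associated set of matrices closed under products.
reach = adj.copy()
for _ in range(n):
    reach = np.clip(reach + reach @ reach, 0.0, 1.0)

print(adj)
print(reach)
\end{verbatim}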
Herein we use the term adaptive neural networks; however, the primary objects subject to revision and training are operators in linear spaces, which can be realized, for instance, as quantum observables rather than as explicitly defined neural networks. We shall use the term ‘linear’ in two separate contexts. The first implies restriction to linear artificial neural networks: we defer nonlinear network optimization problems to a subsequent manuscript, and in this letter focus solely upon the optimization approach and applications of linear neural networks.
The second usage of linearity refers to the standard linear formalism central to quantum mechanics. This is the basis of our mathematical approach, and is outlined in further detail below. The topology of a feedforward artificial neural network, $N$, is described by the template matrix $A$ of the appropriate directed acyclic graph, which is formed as follows:
\[
A_{jk} \;=\;
\begin{cases}
\ast & \text{if the network contains a connection from neuron $j$ to neuron $k$,}\\
0 & \text{otherwise,}
\end{cases}
\]
where $\ast$ stands for a wildcard, that is, any number; the set of such numerical matrices forms an algebra [10], as it is closed under multiplication. The main property of $A$ is that the synaptic weights ‘follow’ it: if $A_{jk} = 0$, then $w_{jk} = 0$. Returning to the picture of signal propagation in $N$, we take various numerical matrices, allowing them only to comply with the template matrix $A$, and form their products. The resulting set of matrices is called the Rota algebra of the template matrix. It can be verified that this set is closed under sums and matrix products, and thus qualifies as a closed algebra. This description is explicated in greater detail
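For concreteness, consider a hypothetical four-neuron chain in which every earlier neuron feeds every later one (an illustrative topology, not taken from the text); its template matrix is
\[
A \;=\;
\begin{pmatrix}
0 & \ast & \ast & \ast\\
0 & 0 & \ast & \ast\\
0 & 0 & 0 & \ast\\
0 & 0 & 0 & 0
\end{pmatrix}.
\]
The closure claim can then be checked numerically: the sketch below draws random matrices supported on this (transitively closed) template and verifies that their sums and products respect the same support.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical template: the transitively closed reachability relation of a
# small directed acyclic graph.  A numerical matrix 'complies' with it when
# the matrix is zero wherever the template is zero.
T = np.array([[0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

def random_compliant(template):
    """A random numerical matrix whose support lies inside the template."""
    return rng.normal(size=template.shape) * template

A = random_compliant(T)
B = random_compliant(T)

# A nonzero entry of A @ B at (j, k) requires an intermediate i with both
# (j, i) and (i, k) in the template; transitivity then forces (j, k) to be
# in the template as well, so products (and sums) stay compliant.
for C in (A + B, A @ B, A @ B @ A):
    assert np.all(C[~T] == 0.0)
print("closure under sums and products holds on this example")
\end{verbatim}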
…(Full text truncated)…