Usually, mathematical objects have highly parallel interpretations. In this paper, we consider them as sequential constructors of other objects. In particular, we prove that every reflexive directed graph can be interpreted as a program that builds another graph and is itself built by another. This leads to some memory-optimal computations, codings similar to modular decompositions, and other strange dynamical phenomena.
In this paper, we deal with matrices and finite directed graphs $G = (V, A)$, where $V$ is a finite list $(x_1, x_2, \ldots, x_n)$ of vertices and $A \subseteq V \times V$ is the set of arcs.
Usually, in classical mathematics, matrices are highly parallel objects. To compute the transformation of a vector $X$ by a linear mapping $X := E(X)$, a mathematician first keeps the initial vector $X$ in mind or in a safe place, and then computes the successive images of each component of $X$ by the projections of $E$. Equivalently, he can compute $Y := E(X)$ and then $X := Y$. In both cases, one seems to have to build a copy of the initial vector $X$ in order to preserve its initial values throughout a computation that modifies them. Hence, the data size of this kind of computation process is twice the size of the input data $X$. A motivation of this paper is to show that one can compute the transformation $X := E(X)$ without any copy. In [1], we proved a similar result for the computation of boolean mappings. Here, we will interpret matrices as sequential programs that perform sequences of assignments on the components of the initial vector. In general, the mapping computed in this way is not the usual linear mapping represented by the matrix. A matrix thus admits a usual parallel interpretation and a sequential interpretation. We are going to investigate this duality.
Definition. (matrices). Let $K$ be a field. Given a square matrix $M$ of $\mathcal{M}_{n,n}(K)$, $M_i$ denotes the $i$-th row vector of $M$ and $M_{i,j}$ the $j$-th component of the row $M_i$. The matrix $M$ is said to be regular if all its diagonal entries are equal to 1, i.e., $M_{i,i} = 1$ for every $i$.
First, we recall the classical interpretation of a matrix in linear algebra.
Definition. (parallel mapping). Let $K$ be a field. Every square matrix $M$ of $\mathcal{M}_{n,n}(K)$ represents a linear mapping $M^{\#}$ from $K^n$ to $K^n$, called the parallel mapping of $M$, defined as follows. For every $X = (x_1, \ldots, x_n) \in K^n$, the parallel image of $X$ by $M$ is the vector
$$M^{\#}(X) = \Big( \sum_{j=1}^{n} M_{1,j}\, x_j,\ \ldots,\ \sum_{j=1}^{n} M_{n,j}\, x_j \Big).$$
This mapping $M^{\#}$ can be computed by the following straight-line program:
Input : $X = (x_1, \ldots, x_n)$
For $i$ from 1 to $n$ do $y_i := \sum_{j=1}^{n} M_{i,j}\, x_j$
Output : $Y = (y_1, \ldots, y_n)$
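As a concrete illustration (not taken from the paper; the function name parallel_map and the use of plain Python lists are our own choices), the following sketch computes the parallel image with an auxiliary vector $Y$, so the input $X$ is preserved during the whole computation.

def parallel_map(M, X):
    """Parallel interpretation M#: the standard matrix-vector product.
    An auxiliary vector Y is used, so no component of X is overwritten
    before all components have been read."""
    n = len(X)
    Y = [sum(M[i][j] * X[j] for j in range(n)) for i in range(n)]
    return Y

# Example over the rationals:
M = [[1, 2],
     [3, 4]]
print(parallel_map(M, [1, 1]))  # [3, 7]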
Sequentializing.
Now, we define a dual interpretation of a matrix in linear algebra such that the image of a vector $X$ can be computed via a sequence of linear transformations of $X$, using only this vector throughout the computation. We obtain an "in situ" straight-line program.
Definition. (sequential mapping). Let $K$ be a field. Every square matrix $M$ of $\mathcal{M}_{n,n}(K)$ represents a linear mapping $M^{\downarrow}$ from $K^n$ to $K^n$, called the sequential mapping of $M$, defined as follows. For every $X = (x_1, \ldots, x_n) \in K^n$, the sequential image of $X$ by $M$ is the resulting vector computed by the following straight-line program, denoted $M_\pi$:
Input : $X = (x_1, \ldots, x_n)$
For $i$ from 1 to $n$ do $x_i := \sum_{j=1}^{n} M_{i,j}\, x_j$
Output : $X = (x_1, \ldots, x_n)$
Note that the assignment of step $i$ uses the current values of the components, including those already modified at steps $1, \ldots, i-1$.
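For comparison with the parallel program, here is a small Python sketch (our own illustration, reusing the parallel_map function from the previous sketch) of the sequential interpretation: the assignments are executed in order, in place, on the single vector $X$, and the result generally differs from the parallel image.

def sequential_map_inplace(M, X):
    """Sequential interpretation M↓: the straight-line program M_pi,
    executed in order on the single vector X, with no auxiliary vector."""
    n = len(X)
    for i in range(n):
        # x_i is overwritten using the current (possibly already updated) values
        X[i] = sum(M[i][j] * X[j] for j in range(n))
    return X

M = [[1, 2],
     [3, 4]]
print(parallel_map(M, [1, 1]))            # [3, 7]
print(sequential_map_inplace(M, [1, 1]))  # [3, 13]: here M↓ differs from M#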
As we said above, one only uses the input vector X for the computation of the sequential image, whereas in the standard parallel interpretation, one uses a second vector Y (in order to preserve the values of the input vector X).
For example, for $K = \mathbb{R}$, the sequential mapping $M^{\downarrow}$ of the matrix
The sequential interpretation of a matrix leads to a natural notion in the context of directed graphs.
Definition. (sequential construction). Let $G = (V, A)$ be a finite directed graph, where $V$ is a finite list $(x_1, x_2, \ldots, x_n)$ of vertices and $A \subseteq V \times V$ is the set of arcs. We say that $G$ sequentially constructs a directed graph $G'$ when their respective adjacency matrices $M, M'$ in $\mathcal{M}_{n,n}(\mathbb{F}_2)$ satisfy:
$$M^{\downarrow} = M'^{\#}.$$
This is a kind of decomposition of a graph: instead of working with a graph $G$, one can consider a sequential constructor of $G$, or the graph that $G$ constructs. We will see some advantages of this in the sequel. We are going to study the relations between these graphs and, first of all, their existence. Given a graph $G$, is there another graph $G'$ which is a sequential constructor of $G$? We will see that the answer is NO in general. However, if one assumes the graph $G$ to be reflexive, the answer is YES.
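To make the definition above concrete, here is a Python sketch (our own; the function names sequential_matrix_gf2 and sequentially_constructs are hypothetical) that decides, over $\mathbb{F}_2$, whether a graph given by its adjacency matrix sequentially constructs another. Since every step of the program $M_\pi$ is linear, $M^{\downarrow}$ is itself a linear mapping, and its matrix can be read off by running the program on the standard basis vectors.

def sequential_matrix_gf2(M):
    """Matrix of the linear mapping M↓ over F_2, obtained by running the
    sequential program M_pi on each standard basis vector."""
    n = len(M)
    cols = []
    for j in range(n):
        X = [1 if k == j else 0 for k in range(n)]  # basis vector e_j
        for i in range(n):
            X[i] = sum(M[i][k] * X[k] for k in range(n)) % 2
        cols.append(X)  # cols[j] is the image of e_j, i.e. column j of M↓
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def sequentially_constructs(M, Mprime):
    """True iff the graph with adjacency matrix M sequentially constructs
    the graph with adjacency matrix Mprime, i.e. M↓ = Mprime#."""
    return sequential_matrix_gf2(M) == Mprime

# A reflexive example: the complete reflexive graph on two vertices
# sequentially constructs the graph with adjacency matrix [[1, 1], [1, 0]].
print(sequentially_constructs([[1, 1], [1, 1]], [[1, 1], [1, 0]]))  # True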
In the formalism of matrices, observe that, by definition, the parallel interpretation of the matrix $M'$ of the constructed graph is equal to the sequential interpretation of $M$. For the other direction, a natural question is:
given a matrix $M$, is there a matrix $P$ whose sequential interpretation is equal to the parallel interpretation of $M$?
Unfortunately, the answer is NO in general. A minimal example for $K = \mathbb{F}_2$ is the matrix
$$M = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},$$
whose parallel mapping sends every vector $(a, b)$ to $(0, a)$. If there were a matrix $P$ such that $P^{\downarrow} = M^{\#}$, then, since $P$ and $M$ necessarily have the same first row (i.e., $P_1 = M_1$), the matrix $P$ would be of the form
$$P = \begin{pmatrix} 0 & 0 \\ \alpha & \beta \end{pmatrix}$$
and the sequential program $P_\pi$:
$$x_1 := 0, \qquad x_2 := \alpha\, x_1 + \beta\, x_2$$
should transform every vector $(a, b)$ into $(0, a)$: this is not possible, since after the first assignment $x_1 = 0$, the second assignment yields $x_2 = \beta\, b$, which cannot equal $a$ for all $(a, b)$.
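This impossibility can also be confirmed by a quick exhaustive search (our own self-contained sketch; seq_image_gf2 and par_image_gf2 are hypothetical helper names): no $2 \times 2$ matrix $P$ over $\mathbb{F}_2$ satisfies $P^{\downarrow} = M^{\#}$.

from itertools import product

def seq_image_gf2(P, X):
    """Run the sequential program P_pi on X over F_2."""
    X = list(X)
    for i in range(len(P)):
        X[i] = sum(P[i][j] * X[j] for j in range(len(P))) % 2
    return X

def par_image_gf2(M, X):
    """Parallel image M#(X) over F_2."""
    n = len(M)
    return [sum(M[i][j] * X[j] for j in range(n)) % 2 for i in range(n)]

M = [[0, 0],
     [1, 0]]  # the minimal counterexample: M#(a, b) = (0, a)

vectors = [[a, b] for a in (0, 1) for b in (0, 1)]
witnesses = []
for e in product([0, 1], repeat=4):  # all sixteen 2x2 matrices over F_2
    P = [list(e[0:2]), list(e[2:4])]
    if all(seq_image_gf2(P, X) == par_image_gf2(M, X) for X in vectors):
        witnesses.append(P)
print(witnesses)  # []: no matrix P satisfies P↓ = M#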
However, the following preliminary result shows that we are not very far from a positive answer.
Theorem 1. Let $K$ be a field. For every linear mapping $E$ from $K^n$ to $K^n$ with $n > 0$, the assignment $X := E(X)$ is computed by a straight-line program $P$ made of at most $2n - 1$ linear assignments of the components of $X$. Moreover, the first $n$ steps of $P$ form a program $M_\pi$ for a matrix $M$.
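The construction behind Theorem 1 is not reproduced in this excerpt, but a classical instance illustrates the bound: the swap $(x_1, x_2) := (x_2, x_1)$, whose matrix has a zero diagonal, cannot be realized by a sequential program with $n = 2$ assignments (by the same argument as above), yet it is computed in place by $2n - 1 = 3$ linear assignments. The Python sketch below is our own illustration.

def swap_in_place(X):
    """Compute (x1, x2) := (x2, x1) with 2n - 1 = 3 linear assignments
    and no auxiliary variable (the classical addition/subtraction trick)."""
    X[0] = X[0] + X[1]  # x1 = a + b
    X[1] = X[0] - X[1]  # x2 = (a + b) - b = a
    X[0] = X[0] - X[1]  # x1 = (a + b) - a = b
    return X

print(swap_in_place([3, 7]))  # [7, 3]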