The entropy functional, the information path functional's essentials and their connections to Kolmogorov's entropy, complexity and physics

Vladimir S. Lerner
13603 Marina Pointe Drive, Suite C-608, Marina Del Rey, CA 90292, USA, vslerner@yahoo.com

Abstract
The paper introduces recent results related to an entropy functional defined on the trajectories of a controlled diffusion process and to the information path functional (IPF), analyzing their connections to Kolmogorov's entropy, complexity, and the Lyapunov characteristics. Considering the IPF's essentials and specifics, the paper studies the singularities of the IPF extremal equations and the created invariant relations, both of which are useful for the solution of important mathematical and applied problems.

Keywords: Additive functional; Entropy; Singularities; Natural Border Problem; Invariant

Introduction
The entropy functional, defined on a Markov diffusion process, plays an important role in the theory of information and statistical physics [1-3], informational macrodynamics, and control systems [4]. However, in the known references we did not find mathematical results relating the entropy functional to Kolmogorov's entropy, complexity, and the Lyapunov characteristics [5]. We analyze these connections through the information path functional (IPF) of a controllable diffusion process, considering the IPF's essentials and specifics, including the singularities of the IPF extremal equations and the created invariant relations. The paper is a part of the information path functional approach [6-8] to solving the problems of information evolutionary dynamics. Searching for a law that governs the complex dynamics of interacting processes leads us first to the process' statistical dynamics, and then to finding a variation principle that, according to R. Feynman (The Character of Physical Law), might describe the regularities of such dynamics, possibly at a macroscopic level.
Sec. 1 introduces the entropy functional expressed via an additive functional of a controllable diffusion process and the parameters of the corresponding stochastic equation, applied to the IPF variation problem.
Sec. 2 presents the essence of the IPF approach, considering both the problem statement and its formalization for a class of random systems modeled by the solutions of controlled Ito stochastic differential equations; defines a probabilistic measure of the functional distance between a current process trajectory and some given process; applies the entropy functional defined on these solution trajectories; and presents the dynamic approximation of both this probabilistic measure and the conditional entropy functional by the corresponding path functionals. We illustrate the specifics of the formulated variation problem's solution using both the Kolmogorov (K) equations for the functional of a Markov process and the Jacobi-Hamilton (J-H) equation for the dynamic approximation of this functional as the IPF. Because the fulfillment of both the J-H and K equations in the same field's region of a space is possible only at some "punched" discretely selected points (DP), the extremal trajectory solving the variation problem (VP) is divided into extremal segments at each DP. Between the extremal segments (at the DP) exists a "window", where the random information affects the dynamic process on the extremals, creating its piece-wise dependency upon the observed data. The solved VP allows finding both a class of dynamic (macro) models (as the equations of the IPF extremals) for the considered class of random systems (at the system's microlevel), and the optimal control functions for this class of dynamic models (by solving the optimal control synthesis problem).
The synthesized optimal controls start at the beginning of each segment, act along the segment, and connect the segments in the macrodynamic optimal process, while the discrete interval of the applied control is associated with the segment's length between the DP. These specifics allow the identification of the model's dynamic operator at each DP for each extremal segment in real time, under the optimal control action and during the object's current motion. Because the proofs of the formulated theorems have been published, we illustrate here the theorems' results.
Sec. 3 applies the obtained results to a joint solution of the optimal control and identification problems, providing also the basic theorems for the creation of cooperative information dynamics.
In Sec. 4 we consider the IPF model's family of interacting trajectories, forming the complex system's state consolidation and aggregation in a cooperative hierarchical information network (IN). The IN's structure is based on the identified model's invariants following from the VP. The IN's formation can proceed concurrently during the system's optimal motion, combined with the optimal control and the operator's identification.
Sec. 5 studies the IPF macromodel's singular points and singular trajectories, and the invariants following from their connections. The singularities arise at the DP-windows, with a shortening of the initial model's dimension and potential bifurcations of chaotic dynamics.
Sec. 6 analyzes the solutions of a natural border problem for the IPF under the control actions. It is shown that both the model extremals and the model's singular trajectories belong to these solutions if the segment's controls are bound by the found relations. We also establish the invariant conditions, as the model's field functions, being analogies of the information conservation laws.
Finally, in Sec. 7 we study the connections between the entropy (information) path functional and the Kolmogorov entropy of a dynamic system, between the Kolmogorov and the macrodynamic complexities, and the relations to physics.

1. The entropy functional
Let us have the $n$-dimensional controlled stochastic Ito differential equation [9]:
$d\tilde{x}_t = a(t,\tilde{x}_t,u)\,dt + \sigma(t,\tilde{x}_t)\,d\xi_t$, $\tilde{x}_s = \eta$, $t \in [s,T] = \Delta$, $s \in [0,T] \subset R_+^1$, (1.1)
with the standard limitations [9,10] on the shift function $a(t,\tilde{x}_t,u)$, the diffusion $\sigma(t,\tilde{x}_t)$, and the Wiener process $\xi_t = \xi(t,\omega)$, which are defined on a probability space of the elementary random events $\omega \in \Omega$ with the variables located in $R^n$; $\tilde{x}_t = \tilde{x}(t)$ is a diffusion process with transition probabilities $P(s,x,t,B)$, and $\Psi(s,t)$ is a $\sigma$-algebra created by the events $\{\tilde{x}(\tau) \in B\}$, $s \le \tau \le t$; $P_{s,x} = P_{s,x}(A)$ are the corresponding conditional probability distributions on an extended $\Psi(s,\infty)$.
Let us consider the transformation of the initial process $\tilde{x}_t$, with transition probabilities $P(s,x,t,B)$, to some diffusion process $\varsigma_t$ with transition probabilities
$P_\varsigma(s,x,t,B) = \int_B \exp\{-\varphi_s^t(\omega)\}\,P(s,x,t,d\omega)$, (1.2)
where $\varphi_s^t = \varphi_s^t(\omega)$ is an additive functional of the process $\tilde{x}_t = \tilde{x}(t)$ [11,12], measured with respect to $\Psi(s,t)$ at any $s \le \tau \le t$ with probability 1 and satisfying $\varphi_s^t = \varphi_s^\tau + \varphi_\tau^t$.
At this transformation, the transition probability functions (1.2) determine the corresponding extensive distributions $\tilde{P}_{s,x} = \tilde{P}_{s,x}(A)$ on $\Psi(s,\infty)$ with the density measure
$p(\omega) = \dfrac{d\tilde{P}_{s,x}}{dP_{s,x}}(\omega) = \exp\{-\varphi_s^t(\omega)\}$. (1.3)
Using the definition of the conditional entropy [1] of process $\tilde{x}_t$ regarding process $\varsigma_t$:
$S(\tilde{x}_t/\varsigma_t) = -E_{s,x}\{\ln[p(\omega)]\}$, (1.4)
where $E_{s,x}$ is a conditional mathematical expectation, we get
$S(\tilde{x}_t/\varsigma_t) = E_{s,x}\{\varphi_s^t(\omega)\}$.
(1.5)
Let the transformed process be $\varsigma_t = \int_s^t \sigma(\nu,\varsigma_\nu)\,d\xi_\nu$, having the same diffusion matrix as the initial process, but a zero drift. Then the above additive functional, at its fixed upper limit $T$, acquires the form [11,12]:
$\varphi_s^T = \dfrac{1}{2}\int_s^T a^u(t,\tilde{x}_t)^T (2b(t,\tilde{x}_t))^{-1} a^u(t,\tilde{x}_t)\,dt + \int_s^T (\sigma(t,\tilde{x}_t)^{-1} a^u(t,\tilde{x}_t))^T\,d\xi_t$, (1.6)
where $2b(t,\tilde{x}) = \sigma(t,\tilde{x})\sigma(t,\tilde{x})^T > 0$ and
$E_{s,x}\{\int_s^T (\sigma(t,\tilde{x}_t)^{-1} a^u(t,\tilde{x}_t))^T\,d\xi_t\} = 0$. (1.6a)
Finally we get the information entropy functional expressed via the parameters of the initial controllable stochastic equation (1.1):
$S(\tilde{x}_t/\varsigma_t) = \dfrac{1}{2} E_{s,x}\{\int_s^T a^u(t,\tilde{x}_t)^T (2b(t,\tilde{x}_t))^{-1} a^u(t,\tilde{x}_t)\,dt\}$, (1.7)
where, for the variation problem in [4,6], the relation
$E_{s,x}[a^u(t,\tilde{x}_t)^T (2b(t,\tilde{x}_t))^{-1} a^u(t,\tilde{x}_t)] = E_{s,x}[L(t,\tilde{x}_t)] = \hat{L}$, $\hat{L} = E_{s,x}[L]$,
plays the role of a Lagrangian. For a positive quadratic form in (1.7), the above information entropy is positive.
Example. Let us have the single-dimensional eq. (1.1) with the shift function $a^u = u(t)x(t)$ at a given control function $u_t = u(t)$ and the diffusion $\sigma_t = \sigma(t)$. Then the entropy functional takes the form
$S(\tilde{x}_t/\varsigma_t) = \dfrac{1}{2} E_{s,x}[\int_s^T u(t)^2 x(t)^2 \sigma(t)^{-2}\,dt]$, (1.8a)
from which, at nonrandom $u(t)$, $\sigma(t)$, we get
$S(\tilde{x}_t/\varsigma_t) = \dfrac{1}{2}\int_s^T u_t^2\,\sigma(t)^{-2} E_{s,x}[x(t)^2]\,dt = \dfrac{1}{2}\int_s^T u_t^2\,r_t\,\dot{r}_t^{-1}\,dt$, (1.8b)
where for the diffusion process the relations $2b(t) = \sigma(t)^2 = dr/dt = \dot{r}_t$ and $E_{s,x}[x(t)^2] = r_t$ hold true, and the functional (1.8b) is expressed via the process' covariation functions $r_t$, $\dot{r}_t$ and the known $u_t$. This allows us to identify the entropy functional on an observed controlled process $\tilde{x}_t = \tilde{x}(t)$ by measuring the above covariation (correlation) functions. The $n$-dimensional form of functional (1.8b) follows directly from using the related $n$-dimensional covariations and the control.
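The identification remark above can be illustrated numerically: for the scalar model of (1.8a), the pathwise Monte Carlo estimate of the functional and its computation from the measured covariation function $r_t$ alone agree. A minimal sketch; the constant control $u$, the diffusion $\sigma$, and all numeric values are hypothetical choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar case of (1.8a,b): dx = u*x dt + sigma dW with constant (nonrandom)
# control u and diffusion sigma -- hypothetical illustration values.
u, sigma = -0.5, 0.4
T, n_steps, n_paths = 1.0, 400, 20000
dt = T / n_steps

x = np.full(n_paths, 1.0)      # x_s = 1
S_acc = np.zeros(n_paths)      # pathwise accumulation of 1/2*(u*x)^2/sigma^2 dt
r = np.empty(n_steps)          # empirical covariation function r_t = E[x_t^2]
for k in range(n_steps):
    r[k] = np.mean(x**2)
    S_acc += 0.5 * (u * x)**2 / sigma**2 * dt
    x += u * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

S_monte = np.mean(S_acc)                           # direct estimate of (1.8a)
S_covar = 0.5 * u**2 / sigma**2 * np.sum(r) * dt   # the same functional via r_t, as in (1.8b)
print(S_monte, S_covar)                            # the two estimates agree
```

The agreement reflects the point of (1.8b): once the covariation function is measured, the entropy functional is determined without access to the individual trajectories.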
2. An essence of the information path functional approach
The initial problem. Let us have two random processes: one of them, $\tilde{x}_t = \tilde{x}_t(u)$, is a controlled process, being a solution of eq. (1.1); the other one, $x_t^1$, is given as a programmed process, expressing a task for the controlled process (particularly, conveying a performance criterion); these could also be any two random processes.
The problem formalization. The control task can be formalized by considering these processes' $\delta$-closeness in terms of a probability measure $P[\rho_\Delta(\tilde{x}_t, x_t^1) < \delta]$, where $\rho_\Delta(\tilde{x}_t, x_t^1)$ is a metric distance in a Banach space, and requiring
$\sup P[\rho_\Delta(\tilde{x}_t, x_t^1) < \delta] \to \sup P[\rho_\Delta(x_t^*, O) < \delta]$, (2.1)
i.e., the closeness of the difference $x_t^*(u) = x_t^1 - \tilde{x}_t(u)$ to the null-vector $O$.
This problem we solve via the approximation of the random difference $x_t^*$ by a dynamic process $x_t$, which we call a macroprocess, defined in the space $KC^1$ (of piece-wise differentiable, continuous nonrandom functions), while considering $\tilde{x}_t$, $x_t^1$, $x_t^*$ as the corresponding microlevel processes. Such a bi-level micro- and macro-description we apply to a complex system with the above random processes at the microlevel and dynamic disturbed processes at the macrolevel, with a disturbance process $\zeta_t$ for $x_t$. The process description we concretize by modeling $x_t^*(u)$ by the solutions of a controlled Ito stochastic differential equation (analogous to (1.1)):
$dx_t^* = a^u(t, x_t^*, u)\,dt + \sigma(t, x_t^*)\,d\xi_t$, $x_s^* = \tilde{x}_s$, (2.2)
whose shift function $a^u = a^u(t,\bullet,\bullet)$ is given, and whose diffusion component, which models the disturbance $\zeta_t$ for $x_t$, has the same diffusion function $\sigma = \sigma(t,\bullet,\bullet)$ as in (1.1):
$\zeta_t = \int_s^t \sigma(\upsilon, \zeta_\upsilon)\,d\xi_\upsilon$ at $E[\zeta_t] = O$.
(2.2a)
The control (in both (1.1) and (2.2)) is formed as a function of time and the dynamic variables $x_t$, defined by a feedback equation:
$u_t =_{def} u(t, x_t)$, (2.2b)
where, at a fixed $x \in R^n$, $u(\bullet, x)$ is a piece-wise continuous function of $t \in \Delta$, and at a fixed $t \in \Delta$, $u(t, \bullet)$ is a continuously differentiable function having limited second derivatives by $x \in R^n$:
$\forall x \in R^n$: $u(\bullet, x) \in KC(\Delta, U)$, $u(\bullet, x) \in C^1(\Delta_o, U) \cap KC(\Delta, U)$; $\forall t \in \Delta$: $u(t, \bullet) \in C^1(R^n, U)$, (2.2c)
being, accordingly, a piece-wise continuous function on $\Delta$:
$u_t \in KC(\Delta, U)$, $u(\tau_k^+) =_{def} \lim_{t \to \tau_k + o} u(t, x)$, $u(\tau_k^-) =_{def} \lim_{t \to \tau_k - o} u(t, x)$, $\Delta_o = \Delta \setminus \{\tau_k\}_{k=1}^m$, $k = 0, \ldots, m$, $\tau_k \in \Delta$, $\tau_o = 0$, $\tau_m = T$. (2.2d)
(We assume that the processes $\tilde{x}_t$, $x_t^1$ can also be modeled by the solutions of corresponding Ito stochastic equations, with the details in [13].)
It was shown [14] that a Markov diffusion process is a convenient mathematical model for the representation of a wide class of many-dimensional random processes. Such processes and their nonstationary models in the class of stochastic differential equations are widely used in statistical physics, irreversible thermodynamics, information theory, and the theory of controllable random processes [15,16,2,1,4, others]. The connection between the probabilities of the above processes is expressed in the following form:
$P_\Delta[\rho(x_t, \zeta_t) < \delta] \ge P_\Delta[\rho(x_t, x_t^*) < \delta]\; P_\Delta[\rho(x_t^*, \zeta_t) < \delta]$, (2.3)
at
$P_\Delta[\rho(x_t^*, \zeta_t) < \delta] \ge P_\Delta[\rho(x_t^*, x_t) < \delta]\; P_\Delta[\rho(x_t, \zeta_t) < \delta]$. (2.3a)
The proof, following from a triangle inequality and the Markovian properties, is given in [13].
From that, the initial problem (2.1) can be reduced to the following requirements:
$\sup_{x_t} P[\rho_\Delta(x_t, x_t^*) < \delta]$ (2.4a); $\sup_{x_t^*} P[\rho_\Delta(x_t^*, \zeta_t) < \delta]$ (2.4b); $\sup_{x_t} P[\rho_\Delta(x_t, \zeta_t) < \delta]$ (2.4c); at $\sup P[\cdot] \le 1$,
where the last two conditions can be joined in the form
$\sup_{x_t^*} P[\rho_\Delta(x_t^*, \zeta_t) < \delta] \Rightarrow \sup_{x_t} P[\rho_\Delta(x_t, \zeta_t) < \delta]$. (2.4d)
Relation (2.4a) represents a probability condition of the identification of $x_t^*$ via $x_t$, while the pair in (2.4d) minimizes the deviation of $x_t^*$ from $\zeta_t$ through a minimal deviation of $x_t$ (as a macromodel of $x_t^*$) from $\zeta_t$ and connects them. We assume here that the process $\zeta_t$ also models an irremovable disturbance for $x_t$. This is why, for $x_t$ to be a dynamic analog of $x_t^*$, we also require the fulfillment of (2.4c) and the connection of both probabilistic closenesses by a mutual ability to approximate $\zeta_t$ (considered as a standard process by (2.4d)). According to this condition, the control, moving the difference $x_t^*$ close to $\zeta_t$, also approximates $x_t$ with the accuracy of $\zeta_t$ and, therefore, leads to the approximation of $x_t^*$ by $x_t$, which redefines the initial control problem. For the evaluation of the above probability conditions we use the Freidlin-Wentzell results [17], applying them to the considered control system in the form
$\lim_{\varepsilon \downarrow 0} \varepsilon \log \sup_{x_t} P[\rho_\Delta(x_t, x_t^*) < \delta] \le -\inf S_1(x_t)$, (2.5)
where
$S_1(x_t) = \dfrac{1}{2}\int_s^T (\dot{x}_t - a^u(t,x_t))^T (2b(t,x_t))^{-1} (\dot{x}_t - a^u(t,x_t))\,dt$ (2.5a)
is a path functional along the trajectory $x_t$, which approximates the difference $x_t^1 - \tilde{x}_t = x_t^*$ with a maximal probability measure (2.4a); and
$\lim_{\varepsilon \downarrow 0} \varepsilon \log \sup_{x_t} P[\rho_\Delta(x_t, \zeta_t) < \delta] \le -\inf S_2(x_t)$, (2.5b)
where
$S_2(x_t) = \dfrac{1}{2}\int_s^T \dot{x}_t^T (2b(t,x_t))^{-1}\,\dot{x}_t\,dt$ (2.5c)
is a path functional which evaluates the deviation of the trajectory $x_t$ from $\zeta_t$.
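The two path functionals can be evaluated on a discretized trajectory. A minimal sketch for a scalar model; the drift $a^u(x) = -x$, the constant value of $2b$, and the trial trajectory are hypothetical choices for illustration only:

```python
import numpy as np

# Discretized path functionals S1 (2.5a) and S2 (2.5c) for a scalar model with
# drift a_u(x) = -x and constant diffusion 2b = sigma2 (hypothetical values).
sigma2 = 0.5                       # plays the role of 2b
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

x = np.exp(-t)                     # trial trajectory satisfying xdot = a_u(x)
xdot = np.gradient(x, dt)          # numerical time derivative of the trajectory
a_u = -x

S1 = 0.5 * np.sum((xdot - a_u)**2 / sigma2) * dt  # action relative to the drift a_u
S2 = 0.5 * np.sum(xdot**2 / sigma2) * dt          # action relative to the zero drift
print(S1, S2)   # S1 ~ 0 on this trajectory, S2 > 0
```

Since the trial trajectory follows the drift exactly, $S_1$ vanishes (up to discretization error) while $S_2$, which measures the deviation from the drift-free process, stays strictly positive.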
We also get
$\lim_{\varepsilon \downarrow 0} \varepsilon \ln \sup_{x_t^*} P[\rho_\Delta(x_t^*, \zeta_t) < \delta] \le -\inf S_3(x_t^*)$, (2.6)
where the logarithm of the probability of the transformation of $x_t^*$ to $\zeta_t$ can be written through the Radon-Nikodym density measure $\dfrac{d\mu_{x^*}}{d\mu_\zeta}$ of the above processes on a set $B_\delta$ [12,1]:
$\ln P[\rho_\Delta(x_t^*, \zeta_t) < \delta] = \ln \int_{B_\delta} \dfrac{d\mu_{x^*}}{d\mu_\zeta}\,\mu_\zeta(dx^*) = E[\ln \dfrac{d\mu_{x^*}}{d\mu_\zeta}]$, $B_\delta = \{\rho_\Delta(x_t^*, \zeta_t) < \delta\}$, (2.7)
and
$E[\ln \dfrac{d\mu_\zeta}{d\mu_{x^*}}] = -E[\ln \dfrac{d\mu_{x^*}}{d\mu_\zeta}] = -S(x_t^*/\zeta_t)$ (2.7a)
defines the conditional entropy $S(x_t^*/\zeta_t)$ of the process $x_t^*$ regarding $\zeta_t$ (which we connect below to the functional $S_3(x_t^*)$). For a Markov diffusion process, the density measure is expressed through an additive functional $\varphi_s^T$ of the considered diffusion processes ([11,12], Sec. 1):
$\dfrac{d\mu_\zeta}{d\mu_{x^*}} = \exp(-\varphi_s^T) = \exp[-(S_3 + \int_s^T (\sigma(t,x_t^*)^{-1} a^u(t,x_t^*))^T\,d\zeta_t)]$, $S_3 = S_3(x_t^*)$, (2.8a)
$S_3 = \dfrac{1}{2}\int_s^T a^u(t,x_t^*)^T (2b(t,x_t^*))^{-1} a^u(t,x_t^*)\,dt$, $2b = \sigma\sigma^T$; (2.8b)
and the entropy (2.7a) is defined via the additive functional (2.8b) in the form [1]:
$S(x_t^*/\zeta_t) = E[S_3]$, (2.9)
at
$E[\int_s^T (\sigma(t,x_t^*)^{-1} a^u(t,x_t^*))^T\,d\zeta_t] = 0$. (2.9a)
Thus, relation (2.7) is defined by the conditional entropy of the process $x_t^*$ regarding $\zeta_t$:
$\ln \sup P[\rho_\Delta(x_t^*, \zeta_t) < \delta] = \sup(-S(x_t^*/\zeta_t)) = -\inf S(x_t^*/\zeta_t) = -\inf E[S_3]$, (2.10)
and, according to (2.6), the lower entropy level is limited by
$\inf S(x_t^*/\zeta_t) \le \inf S_3(x_t^*)$, $S_3(x_t^*) = \dfrac{1}{2}\int_s^T a^u(t,x_t^*)^T (2b(t,x_t^*))^{-1} a^u(t,x_t^*)\,dt$. (2.10a)
Finally we come to the variation conditions
$\inf_{x_t} S_1(x_t)$ (2.11a); $\inf_{x_t} S(x_t^*/\zeta_t) \Rightarrow \inf_{x_t} S_2(x_t)$, (2.11b)
whose fulfillment solves jointly the above problems of optimal control and identification and determines the macroprocess as an extremal of the variation problem.
Relation (2.11b) also connects the path functional approach to information theory and allows a dynamic approximation of the entropy functional of a diffusion process by the information path functional (IPF). The specifics of the solution of the above variation problem (VP) we illustrate using condition (2.11b) in the form
$\min S_2 = \min \tilde{S}_3$, $\tilde{S}_3 = E[S_3]$, (2.12)
$S_2 = \int_s^T L_2(t, x, \dot{x})\,dt$, $L_2(t, x, \dot{x}) = \dfrac{1}{2}\dot{x}^T (2b)^{-1}\dot{x}$, (2.13)
$S_3 = \int_s^T L_3(t, x, a^u)\,dt$, $L_3(t, x, a^u) = \dfrac{1}{2}(a^u)^T (2b)^{-1} a^u$. (2.13a)
The Jacobi-Hamilton (J-H) equation [18] for the extremals $x_t = x(t)$ of the functional (2.13) is
$-\dfrac{\partial S_2}{\partial t} = H$, $H = \dot{x}^T X - L_2(t, x, \dot{x})$, (2.13b)
where $X$ is the conjugate vector for $x$. Using the Kolmogorov (K) equation [5,12,19, others] for the functional (2.13a) in a field of the Markov diffusion process (from (2.2)), we get
$-\dfrac{\partial S_3}{\partial t} = \dfrac{1}{2}(a^u)^T(2b)^{-1}a^u + (a^u)^T\dfrac{\partial S_3}{\partial x} + b\,\dfrac{\partial^2 S_3}{\partial x^2}$. (2.13c)
According to condition (2.12) we require
$-\dfrac{\partial S_3}{\partial t} = -\dfrac{\partial S_2}{\partial t}$, $\dfrac{\partial S_3}{\partial x} = \dfrac{\partial S_2}{\partial x} = X$, (2.14)
which leads to
$-\dfrac{\partial S}{\partial t} = X^T a^u + \dfrac{\partial X}{\partial x}\,b + \dfrac{1}{2}(a^u)^T(2b)^{-1}a^u$. (2.14a)
The fulfillment of both the J-H and K equations in the same field's region of a space is possible at some "punched" discretely selected points (DP) of the space $R^n$:
$\Gamma_\varphi = \bigcup_{i=1}^m \tau_i$, $i = 1, \ldots, m$, (2.14b)
where the field of the functional (2.13) can coincide with the field of the entropy functional (2.13a), defined on the microlevel's diffusion process. Applying to (2.14a) the Hamiltonian in eq. (2.13b), we have
$\dot{x} = \dfrac{\partial H}{\partial X} = \dfrac{\partial(-\partial S_2/\partial t)}{\partial X}$, $\dfrac{\partial(-\partial S_3/\partial t)}{\partial X} = a^u$, $\dot{x} = a^u$. (2.15)
The equation for the conjugate vector we get using the Lagrange equation for $L_2 = \dfrac{1}{2}\dot{x}^T(2b)^{-1}\dot{x}$:
$X = \dfrac{\partial L_2}{\partial \dot{x}} = (2b)^{-1}\dot{x}$.
(2.16)
The substitution of (2.16) into (2.13b) brings the Hamiltonian (2.13b) to the form
$H = \dfrac{1}{2}\dot{x}^T X = \dfrac{1}{2}\dot{x}^T(2b)^{-1}\dot{x}$, (2.17)
and, according to (2.14), the equalization of relations (2.17) and (2.14a):
$-\dfrac{\partial S}{\partial t} = X^T a^u + \dfrac{\partial X}{\partial x}\,b + \dfrac{1}{2}(a^u)^T(2b)^{-1}a^u = \dfrac{1}{2}\dot{x}^T(2b)^{-1}\dot{x}$, (2.18)
where on the extremals $\dot{x} = a^u$ and the quadratic terms cancel, determines the equation of a constraint imposed by the microlevel's stochastics (according to (2.13a)):
$X(\tau)^T a^u(\tau) + \dfrac{\partial X}{\partial x}(\tau)\,b(\tau) = 0$, (2.18a)
at which the J-H and K equations coincide at the DP (2.14b). After substituting $a^u = 2bX$ (following from (2.15), (2.16)) at $b \ne 0$, the constraint acquires the forms
$\dfrac{\partial X}{\partial x}(\tau) = -2X(\tau)X(\tau)^T$ (2.18b); or $a^u(\tau) = \dot{\sigma}(\tau)\sigma(\tau)$. (2.18c)
From (2.14b), (2.18) it follows that the constraint equations (2.18b,c), which establish a connection between the microlevel's diffusion and the macrolevel's dynamics, can be relevant only at these discrete points (DP), while the macroequation $\dot{x} = a^u$ acts along each extremal except at the punched points. This constraint allocates a set of the discrete states $x_\tau = x(\tau)$ for which the IPF coincides with the entropy functional. The constraint corresponds to the operator equation
$L_\tau[S] = 0$, $L_\tau = a^u\dfrac{\partial}{\partial x} + b\dfrac{\partial^2}{\partial x^2}$, at $\Delta S(x_\tau) = S(x(\tau_k)) - S(x(\tau_{k+1})) = inv$, (2.18d)
whose solutions allow classifying the punched points, considered to be the bordered points of a diffusion process $\lim_{t \to \tau} x(t) = x(\tau)$ [12]. A bordered point $x_\tau = x(\tau)$ is attracting only if the function
$R(x) = \exp\{-\int_{x_o}^x a^u(y)\,b(y)^{-1}\,dy\}$, (2.18e)
defining the general solutions of (2.18d), is integrable at a locality of $x = x_\tau$, satisfying the condition
$|\int_{x_o}^{x_\tau} R(x)\,dx| < \infty$. (2.18f)
Using (2.15), (2.16), we may write (2.18e) in the form
$R(x) = \exp(-2\int_{x_o}^x X(y)\,dy)$. (2.18g)
A bordered point is repelling if eq. (2.18d) does not have limited solutions at this locality, meaning that the above function is not integrable.
Eqs. (2.18e,f) are the necessary and sufficient conditions for the existence of the solutions (2.18a-c), which define the set of states $x_\tau = x(\tau)$ where the macrodynamics arise from the stochastics, and determine some boundary conditions limiting the above set. The necessary condition for the punched points to be attractive, $b(y) \ne 0$, $a^u(y) \ne 0$, corresponds to the existence of a regular diffusion process [12] and determines a potential creation of the dynamics; at $b(y) = 0$ both the entropy functional and the IPF are degenerate: $S_3 \to \infty$, $S_2 \to \infty$; at $a^u(y) = 0$ the process' dynamics vanish. Therefore, the fulfillment of (2.18e,f) guarantees that eq. (2.18d) is integrable and that the punched points exist and are attractive, where the dynamics can start. This brings a quantum character of generation for both the macrostates and the macrodynamic's information at the VP fulfillment. The total information originated by the macrodynamics is equal to
$S(\Gamma_\tau) = \dfrac{1}{2}\int_{\Gamma_\tau} a^u(\tau_i)^T\,(2b(\tau_i))^{-1}\,a^u(\tau_i)\,d\tau$, $\Gamma_\tau = \bigcup_{i=1}^m \tau_i$, $i = 1, \ldots, m$, (2.19a)
where the operator's shift and the diffusion matrix are limited by eqs. (2.18a,b), and $\Gamma_\tau$ is a union of the total number of the time instants $\tau_i$ for the $n$-dimensional model (1.1). Eq. (2.19a) also satisfies a stationary condition at a $\tau_i$-locality. The identified DP divide the macrotrajectory into a sequence of extremal segments limited by the punched localities, where the model's randomness and regularities are connected, and therefore the model's identification is possible. At these points, the constraint (2.18b) is applicable for the identification in the form
$E[2X(\tau)X(\tau)^T + \dfrac{\partial X}{\partial x}(\tau)] = 0$. (2.19)
Writing the equation of the extremals $\dot{x} = a^u$ in the traditional form
$\dot{x} = Ax + u$, $u = Av$, $\dot{x} = A(x + v)$, (2.20)
where $v$ is a control reduced to the state vector $x$, we will identify the matrix $A$ and find the control that solves the initial problem.
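The attraction test (2.18e,f) can be probed numerically. A sketch under a hypothetical scalar choice of the drift-to-diffusion ratio $a^u(y)/b(y) = c/y$ near the bordered point $x_\tau = 0$, for which (2.18e) gives $R(x) = (x_0/x)^c$ on $x > 0$; the locality integral (2.18f) then converges iff $c < 1$ (attracting point) and diverges for $c \ge 1$ (repelling):

```python
import numpy as np

def locality_mass(c, eps=1.0, x0=1.0, n=10**5):
    """Midpoint-rule approximation of the integral of R(x) = (x0/x)^c over (0, eps]."""
    xs = (np.arange(n) + 0.5) * (eps / n)   # midpoints of n cells approaching x_tau = 0
    return np.sum((x0 / xs) ** c) * (eps / n)

m_attract = locality_mass(0.5)               # c < 1: converges (to 2 for these values)
m_rep_coarse = locality_mass(1.5, n=10**4)   # c > 1: a truncated divergent integral...
m_rep_fine = locality_mass(1.5, n=10**6)     # ...which keeps growing under refinement
print(m_attract, m_rep_coarse, m_rep_fine)
```

A bounded limit under grid refinement signals an attracting (integrable) locality; unbounded growth signals a repelling one.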
Substituting
$X = (2b)^{-1}A(x + v)$, $X^T = (x + v)^T A^T (2b)^{-1}$, $\dfrac{\partial X}{\partial x} = (2b)^{-1}A$ (2.21)
into (2.19), we get
$(2b)^{-1}A = -2E\{(2b)^{-1}A(x + v)(x + v)^T A^T (2b)^{-1}\}$, (2.22)
from which, at a nonrandom $A$ and $E[b] = b$, we obtain the equations for the identification of $A$:
$A(\tau) = -b(\tau)\,r_v(\tau)^{-1}$, $r_v = E[(x + v)(x + v)^T]$, $b = \dfrac{1}{2}\dot{r}$, $r = E[x x^T]$, (2.23)
via the above correlation functions.
The results, which formalize the object's dynamic macromodel and the synthesized optimal controls, follow from Theorems (T1, T2) below (proved in [6]).
Theorem 1 (T1). The equations for both functional fields (defined by K and J-H) of the VP are satisfied jointly on a limited set $\Delta_o$, where the following equations for the macromodel and the controls hold true:
$\dot{x} = a^u$, $a^u = A(t, x)x + u$, $u = A(t, x)v$, $a^u = A(t, x)(x + v)$, $A(t, x) \in KC^1(\Delta, L(R^n)) \cap C(\Delta, L(R^n))$, $(t, x) \in (\Delta \times R^n)$, (2.24)
$\Delta_o = \Delta \setminus \Gamma_\varphi$, $\Gamma_\varphi = \bigcup_{k=1}^m \tau_k$, $v \in KC^1(\Delta, V) \cap C(\Delta, V)$, $V \subset R^n$, (2.25)
where $v = A^{-1}u$ is the control vector $u$ reduced to the state vector $x$, with rank[v] = rank[x] = n; $A(t, x)$ is a nonsingular macromodel's matrix; $\Gamma_\varphi$ is the "punched" set of discrete points (DP) $\tau_k \in \bigcup_{k=1}^m \tau_k$ in $\Delta$; $C^1$ and $KC^1$, accordingly, are the spaces of the continuously differentiable and the piece-wise differentiable $n$-dimensional vector-functions on $\Delta$.
This means that the DP divide the macrotrajectory into a sequence of extremal segments, defined by the solutions of the macromodel (2.24), while the controls (2.25) are applied at the beginning of each segment. These extremals provide a piece-wise approximation of the initial entropy functional with the aid of the controls.
Theorem 2 (T2).
The VP is solved under: (1) the class of the piece-wise controls (2.25), being fixed at each segment; (2) the controls which are switched at the DP $\tau_k \in \bigcup_{k=1}^m \tau_k$, defined by the condition of equalization of the dynamic model's relative phase speeds:
$|\dfrac{dx_i}{dt}(\tau_k - o)\,x_i^{-1}(\tau_k)| = |\dfrac{dx_j}{dt}(\tau_k - o)\,x_j^{-1}(\tau_k)|$, $x_i(\tau_k) \ne 0$, $x_j(\tau_k) \ne 0$, $i, j = 1, \ldots, n$; (2.26)
(3) the controls which, at the moments (2.26), change the model's matrix from $A^- = A(\tau_k - o)$ to its renovated form $A^+ = A(\tau_k)$ (at a subsequent extremal segment), while both matrices are identifiable by the following relations for the conditional covariance (correlation) functions:
$A^- = \dfrac{1}{2}\dot{r}^-(r^-)^{-1}$, $r^- = r(\tau_k - o) = E[\tilde{x}\tilde{x}^T](\tau_k - o)$, $\dot{r}^- = \dot{r}(\tau_k - o) = 2b(\tau_k - o)$, (2.27)
$A^+ = \pm A^-(1 + \mu_1^v) = \pm(1 + \mu_1^v)A^-$, $\mu_1^v \in R^1$, $\mu_1^v \ne -1$, or (2.27a)
$A^+ = \pm\dfrac{1}{2}\dot{r}^-(r^-)^{-1}(1 + \mu_2^v) = \pm(1 + \mu_2^v)\,\dfrac{1}{2}\dot{r}^-(r^-)^{-1}$, $\mu_2^v \in R^1$, $\mu_2^v \ne -1$; (2.27b)
(4) the control function
$v^- = \mu_1^v\,x(\tau_k - o)$, $x(\tau_k - o) \ne 0$, (2.28)
which changes the matrix $A^-$ to $A^+$ (according to (2.27a)), and the control function
$v^+ = \mu_2^v\,x(\tau_k)$, $x(\tau_k) \ne 0$, (2.28a)
which changes the matrix $A^+$ (according to (2.27b)). Applying the feedback control with the coefficients $\mu_v = (0, -2)$, which fulfills $A^+ = -A^-$, brings the control functions (2.28, 2.28a) to the forms
$v^- = -2x(\tau_k - o)$, $v^+ = -2x(\tau_k)$, (2.29)
$\delta v = v^+ - v^- = -2[x(\tau_k) - x(\tau_k - o)]$, $\delta v(\tau_k^o) = \delta v_t$, $\tau_k^o = (\tau_k - o, \tau_k)$. (2.30)
Comments. The last equation determines the control jump (a "needle" control's $\delta v_t$ action), which connects the subsequent extremal segments.
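The action of the segment control (2.29) admits a simple closed-form check: for a scalar mode $\dot{x} = \lambda(x + v)$ with $v = -2x(\tau_k - o)$ held fixed along the segment, the solution is $x(t) = x_0(2 - e^{\lambda t})$, which reaches $x = 0$ at $t = \ln 2/\lambda$ (the terminal-interval relation that reappears in Sec. 3). A sketch with hypothetical values of $\lambda$ and $x_0$:

```python
import numpy as np

# One extremal segment under the optimal control (2.29): dx/dt = lam*(x + v)
# with v = -2*x0 fixed at the segment's start; the state crosses zero at
# t = ln(2)/lam. The values of lam and x0 are hypothetical.
lam, x0 = 1.3, 0.7
v = -2.0 * x0
dt, t, x = 1e-5, 0.0, x0
while x > 0.0:                 # Euler integration until the zero crossing
    x += lam * (x + v) * dt
    t += dt

t_closed = np.log(2.0) / lam   # closed-form crossing time
print(t, t_closed)
```

The numerically integrated crossing time matches the closed-form $\ln 2/\lambda$ up to the Euler step error.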
Controls (2.29, 2.30), solving the VP, we call the optimal controls, which start at the beginning of each segment, act along the segment, and connect the segments in the macrodynamic optimal process. The needle $\delta$-control, acting between the moments $(\tau_k - o, \tau_k)$, also performs a decoupling (a "decorrelation") of the pair correlations at these moments. The reduced control presents a projection of the control on each of the state macrocoordinates, which is consistent with the object's controllability and identifiability [20, other]. This control specifies the structure of the controllable drift vector $a^u = A(t, x)(x + v)$ and the model's dynamic operator, which is identifiable using the identification equations (2.27, 2.27a) for the correlation functions, or the equation identifying the operator directly:
$A(\tau) = b(\tau)\,(2\int_o^\tau b(t)\,dt)^{-1} > 0$ (2.31)
by the dispersion matrix $b$ from (2.2a, 2.8b). The control also provides the fulfillment of equality (2.26), which identifies each following DP. The reduced controls, built by the macrostates that are memorized at $(\tau_k - o, \tau_k)$, according to (2.19), (2.20), are an important part of the macrosystem's structure, providing a mechanism of a self-control synthesis. These controls are also applied for a direct programming and the process' prognosis.
Let us illustrate the theorems' results, considering, alongside the model (2.20), the model of a closed system (with a negative feedback) in the form $\dot{x}(t) = -A^v(t)x(t)$, where the matrix $A^v(t)$ is a subject of both its definition and identification. Using the relations $X = -(2b)^{-1}A^v x$ and the constraint (2.19), we get
$A^v(\tau) = b(\tau)\,r_x(\tau)^{-1}$, $r_x = E[x x^T]$.
(2.32)
Both forms for
$a^u(\tau) = A(\tau)(x(\tau) + v(\tau)) = -A^v(\tau)x(\tau)$ (2.32a)
and the identification eqs. (2.23), (2.32) for $A(\tau)$ and $A^v(\tau)$ coincide at $v(\tau) = -2x(\tau)$, while it is also fulfilled that
$r_x = r = E[x x^T]$, $b = \dfrac{1}{2}\dot{r}$, (2.33)
where $r = r(\tau)$ is a covariation matrix determined at the $o(\varepsilon) \sim \delta(\tau)$-locality (connecting the micro- and macrostates). Using the equivalent equations $\dot{x} = 2bX$, $\dot{x} = -b\,r^{-1}x$, we get the expressions
$X = -\dfrac{1}{2}h x$, $h = r^{-1}$, (2.33a)
and
$X(\tau) = -\dfrac{1}{2}(\int_0^\tau \sigma\sigma^T\,dt)^{-1}\,x(\tau)$. (2.33b)
A potential, corresponding to the conjugate vector, which satisfies eq. (2.16) at the DP, loses its deterministic dependency on the shift vector (2.2), becoming a function of the diffusion and a state vector at the DP vicinity (2.33b). The gradient in (2.33b) depends only on the diffusion:
$grad\,X(\tau) = \dfrac{\partial X}{\partial x}(\tau) = -\dfrac{1}{2}(\int_0^\tau \sigma\sigma^T\,dt)^{-1} = -2X(\tau)X(\tau)^T$, (2.33c)
and at the vicinity's border, where $\int_0^\tau \sigma\sigma^T\,dt \to 0$, it acquires the form of a $\delta$-function. Outside of the DP, the gradient (2.33c) does not exist, nor does the potential function in the form (2.33b). The kinetic form for the conjugate vector still satisfies (2.18), where the kinetic operator is determined by its macroscopic value in (2.18c). Thus, the equalities (2.18a,b,c), (2.26), and (2.33a,b,c) (following from (2.12)) define a set of states $X(\tau)$, $x(\tau)$ on the extremal trajectory, which are used for an access to the random process, specifically by forming the control functions (2.29, 2.30) (where the controls are a part of the shift vector in (2.2)) and the operator identification (2.31, 2.32). (For example, from (2.18c), (2.33) it follows that
$a^u(\tau) = -\dfrac{1}{2}\dot{r}(\tau)\,r(\tau)^{-1}x(\tau)$.) (2.33d)
From other considerations, using (2.33), (2.18b), and (2.15), (2.16), we get a direct connection between the shift vector and the diffusion in the form
$a^u(\tau)\,(a^u(\tau))^T = b(\tau)\,(2\int_0^\tau b(t)\,dt)^{-1}\,b(\tau)$.
(2.33e)
(The last relation coincides with (2.32), (2.32a) at $x(\tau)x(\tau)^T = r(\tau)$.) For $a^u(\tau) = -A^v(\tau)x(\tau) = -b(\tau)r(\tau)^{-1}x(\tau)$, the function
$R(x) = \exp\{\int_{x_o}^x r(\tau)^{-1}y\,dy\}$, $r(\tau) < \infty$, $y \ne 0$,
is limited if $b(y) \ne 0$, which is satisfied for a regular diffusion process. Substituting into (2.33c), (2.19) the relations $b(\tau) = \dfrac{1}{2}\sigma\sigma^T(\tau)$, $\dot{r}(\tau) = 2b(\tau)$, we get
$A(\tau) = -2E[\dot{x}(\tau)X(\tau)]$, where
$E[\dot{x}_i(\tau)X_i(\tau)] = 2E[H_i] = -2E[\dfrac{\partial S_i}{\partial t}(\tau)]$, (2.33f)
following from
$E[x_i(\tau)X_k(\tau)] = E[x_i(\tau)(-\dfrac{1}{2})h_k(\tau)x_k(\tau)] = -\dfrac{1}{2}\lambda_{ik}(\tau)$, at $E[x_i(\tau)x_k(\tau)h_k(\tau)] = 1$;
$E[x_i(\tau)X_i(\tau)] = E[x_i(\tau)(-\dfrac{1}{2})h_i(\tau)x_i(\tau)] = -\dfrac{1}{2}\lambda_i(\tau)$, at $E[x_i(\tau)x_i(\tau)h_i(\tau)] = 1$.
We come to
$\lambda_i(\tau) = 4E[\dfrac{\partial S_i}{\partial t}(\tau)]$, $\dfrac{\partial S_i}{\partial t} = \dfrac{\partial S_2}{\partial t} = \dfrac{\partial S_3}{\partial t}$, (2.33g)
which establishes the eigenvalue's connection to the above local differential entropy, taken at $t = \tau$ along the IPF for each $i$-th model's dimension ($i = 1, \ldots, n$). The above mathematical expectation brings an average differential entropy for each dimension. If each $i$-th dimension contains $k$ extremal segments, then $\lambda_{ij}$ indicates the $i$-th eigenvalue of the $j$-th segment, and $\lambda_{ij}(\tau_j) = 4E[\dfrac{\partial S_{ij}}{\partial t}(\tau_j)]$ presents the mathematical expectation for each such segment at its $t = \tau_j$. The differential entropy's sum for all $k$ segments,
$-\sum_{j=1}^k E[\lambda_{ij}(\tau_j)] = Tr\,A(\tau)$,
equals $E[\dfrac{\partial S_3}{\partial t}(\tau)]$. Applying the optimal controls (2.29, 2.30) to the invariant relation (2.18d) (right) brings the following invariants:
$\lambda_i(\tau_k)\,\Delta t_k = inv$, $\lambda_i(\tau_k)\,\tau_k = inv$, $\lambda_i(\tau_k)\,\tau_{k+1} = inv$, (2.34)
where $\Delta t_k = \tau_{k+1} - \tau_k$ is the time interval between the DP $(\tau_1, \tau_2, \ldots, \tau_{k-1}, \tau_k, \ldots)$, and $\lambda_i(\tau_k)$ is the eigenvalue taken at the moment $\tau_k$. Because the constraint (2.19) acts at each DP moment
$\tau=(\tau_{1},\tau_{2},\dots,\tau_{k-1},\tau_{k},\dots)$ for a sequence of the extremal segments, the controls at the nearest moments are $v(\tau_{k-1})=-2x(\tau_{k-1})$, $v(\tau_{k})=-2x(\tau_{k})$, where $\tau_{k}=\tau_{k-1}+\Delta t_{k}$. And after applying the last of these controls to eq. (2.23) and substituting its solution into (2.32a), we get the connection of the above matrices at any $\Delta t_{k}$, $\Delta t_{k-1}$:
$A^{v}(\tau_{k-1}+\Delta t_{k})=A(\tau_{k-1})\exp[A(\tau_{k-1})\Delta t_{k}]\{2-\exp[A(\tau_{k-1})\Delta t_{k}]\}^{-1}$, (2.35)
where $A(\tau_{k-1})$, being identified at the moment $\tau_{k-1}$, also determines $\Delta t_{k}$ from (2.34, 2.31). The identification at the moment $\tau_{k}$ brings the new or renovated $A(\tau_{k})$, $A^{v}(\tau_{k})$, and so on. In the procedure of the matrix identification [19], the interval $\Delta t_{k}$ is used for the matrix's computation from the data obtained at each DP $\tau_{k-1}$. The above discrete control, applied at the beginning of $\Delta t_{k}$, proceeds during this interval, while at a moment $\delta\tau_{k}$ between the segments the needle control $\delta v$ (2.30) is applied, which connects the extremal's segments. From the variation eqs. (2.18), (2.17) it follows that on the extremals the condition $\min S\to\min E[H(\tau)]=\min(-\partial S/\partial t(\tau))$ holds true. Applying the last one and substituting into it (2.20, 2.21, 2.23, 2.33e) at the above optimal control, we come to the condition
$\min E[H(\tau)]=\max E[-Tr\,A(\tau-o)]$, $Sign\,A(\tau-o)=Sign\,A(\tau+o)$, (2.36)
which connects the identified matrix's elements to the initial variation conditions (2.12) and the above eqs.
3. The IPF results for a joint solution of the optimal control, identification, and consolidation problems
The solution of these problems we consider for a system observed discretely at the moments of applying the optimal control, $\tau\in\{\tau_{k}\}$, $k=1,\dots,m$, and transformed by this control to the terminal state $x(T)=0$.
Let us apply a transformation $G$ to the model (2.24), transforming it to a diagonal form:
$dz/dt=\bar A(z+\bar v)$, $\bar A=G^{-1}AG$, $G=(g_{ij})\in L(R^{n})$, $\det G\neq 0$, $\forall t$, $x=Gz$, $v=G\bar v$, (3.1)
$x(T)=O=(o_{ij})_{i,j=1}^{n}\Leftrightarrow z(T)=\bar O=(\bar o_{ij})_{i,j=1}^{n}$, $\bar v(\tau)=-2z(\tau)$, $\bar A=\{\lambda_{i}(t)\}_{i=1}^{n}$, (3.2)
where the piece-wise matrices $A$, $\bar A$ are fixed within the intervals of the control discretization $t_{k}$, $k=1,\dots,m-1$, and are identifiable at each of these intervals, while the matrices' eigenvalues (3.2) are connected according to relations (2.26); $I$ is the identity matrix.
Theorem 3.1 (T3.1). Transferring the system (3.1) to the origin of its coordinate system by the optimal controls, applied at the time intervals $t_{k}$, $k=1,\dots,m$, requires the existence of a minimum of two of the matrix $A^{v}=\{\lambda_{i}^{v}\}_{i=1}^{n}$ eigenvalues, which at each of these moments satisfy the condition of connecting these intervals in the form
$|\lambda_{i}^{k}|=|\lambda_{j}^{k}|$, $k=1,\dots,m-1$, $i,j=1,\dots,n$, (3.3)
with the number of the control discrete intervals equal to $n$.
Proof. By applying (2.26) to (3.1), using the matrix function (2.35) under the control $\bar v=-2z(\tau_{k-1})$, we come to the recurrent relations connecting the nearest $\lambda_{i}^{k}$, $\lambda_{i}^{k-1}$:
$\lambda_{i}^{k}=-\lambda_{i}^{k-1}\exp(\lambda_{i}^{k-1}t_{k-1})(2-\exp(\lambda_{i}^{k-1}t_{k-1}))^{-1}$. (3.4)
Then the solutions of (3.2) acquire the form
$z_{i}(t)=(2-\exp(\lambda_{i}^{k-1}(t-t_{k-1})))z_{i}(t_{k-1})$. (3.5)
By writing the solution on the last control's discrete interval $t_{m}=T$:
$z_{i}(T)=(2-\exp(\lambda_{i}^{m-1}(T-t_{m-1})))z_{i}(t_{m-1})=0$, $z_{i}(t_{m-1})\neq 0$, $i=1,\dots,n$, (3.6)
we get the relation defining $T$ through a preceding eigenvalue, which satisfies all the previous equalizations:
$T=t_{m-1}+\ln 2/|\lambda_{i}^{m-1}|$, $\lambda_{1}^{m-1}>0$, $\lambda_{1}^{m-1}=\lambda_{2}^{m-1}=\dots=\lambda_{n}^{m-1}>0$.
(3.7) The positivity of the above eigenvalues can be reached by applying the needle controls in addition to the above step-wise controls. If these controls are not added, the more general conditions below are used. The equalization of the eigenvalues at the other discrete intervals leads, for $n\geq m$, to the chain of equalities
$|\lambda_{1}^{m-1}|=|\lambda_{2}^{m-1}|=\dots=|\lambda_{n}^{m-1}|$, (3.8)
$|\lambda_{1}^{m-2}|=|\lambda_{2}^{m-2}|=\dots=|\lambda_{n-1}^{m-2}|$, ..., $|\lambda_{1}^{m-i-1}|=|\lambda_{2}^{m-i-1}|=\dots=|\lambda_{n-i}^{m-i-1}|$, ..., $|\lambda_{1}^{1}|=|\lambda_{2}^{1}|=\dots=|\lambda_{n-m+2}^{1}|$, (3.8a)
and for $m\geq n$ leads to the following chain of equalities:
$|\lambda_{1}^{m-1}|=|\lambda_{2}^{m-1}|=\dots=|\lambda_{n}^{m-1}|$, (3.9)
$|\lambda_{1}^{m-2}|=|\lambda_{2}^{m-2}|=\dots=|\lambda_{n-1}^{m-2}|$, ..., $|\lambda_{1}^{m-i-1}|=|\lambda_{2}^{m-i-1}|=\dots=|\lambda_{n-i}^{m-i-1}|$, ..., $|\lambda_{1}^{m-n+1}|=|\lambda_{2}^{m-n+1}|$. (3.9a)
The system of equations (3.8), (3.9) defines the sought ($m-1$) moments of the controls' discretization. In particular, from equation (3.8) the relation (3.8a) follows, which is inconsistent with the condition of a pair-wise equalization of the eigenvalues (3.3) at $n>m$. The system (3.9) is well defined; it agrees with (3.1), (3.2) and coincides with (3.8) if the number of its equations equals the number of the equations' state variables. Thus, equations (3.7), (3.8), (3.9) have a sense only when $n=m$. The $n$-dimensional process requires $n$ discrete controls applied at ($n-1$) intervals, defined by (3.8), (3.3) at the given starting conditions for equations (3.2). •
Remark. In the case of the matrix's renovation, each following solution (3.5) begins with a renovated eigenvalue, forming the chain (3.8), (3.9).
Theorem 3.2 (T3.2).
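The scalar mechanics behind (3.4)-(3.7) can be checked numerically. The sketch below (Python, with illustrative values for the eigenvalue and the starting state) assumes the one-dimensional form of the segment solution (3.5) under the step-wise control $v=-2z(t_{k-1})$:

```python
import math

def z(t, z0, lam):
    # closed-form segment solution (3.5) with control v = -2*z0
    return (2.0 - math.exp(lam * t)) * z0

lam, z0 = 0.7, 1.3   # illustrative values

# the solution satisfies dz/dt = lam*(z + v), v = -2*z0 (checked numerically)
h = 1e-6
for t in (0.1, 0.5):
    dzdt = (z(t + h, z0, lam) - z(t - h, z0, lam)) / (2 * h)
    assert abs(dzdt - lam * (z(t, z0, lam) - 2 * z0)) < 1e-4

# terminal condition (3.6)-(3.7): z vanishes exactly at dt = ln(2)/lam
T = math.log(2.0) / lam
assert abs(z(T, z0, lam)) < 1e-12
```

The last assertion is the content of (3.7): the controlled segment reaches the origin only when $\exp(\lambda\,\Delta t)=2$, i.e. after the interval $\ln 2/\lambda$, which requires $\lambda>0$.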
The fulfillment of conditions (3.3) leads to an indistinctness in time of the corresponding transformed state variables:
$\hat z_{i}=\hat z_{j}$, $\begin{pmatrix}\hat z_{i}\\ \hat z_{j}\end{pmatrix}=\hat G_{ij}\begin{pmatrix}z_{i}\\ z_{j}\end{pmatrix}$, $\hat G_{ij}=\begin{pmatrix}\cos\varphi_{ij}&\sin\varphi_{ij}\\ -\sin\varphi_{ij}&\cos\varphi_{ij}\end{pmatrix}$, $\varphi_{ij}=\mathrm{arctg}\Big(\dfrac{z_{j}(\tau_{k})-z_{i}(\tau_{k})}{z_{j}(\tau_{k})+z_{i}(\tau_{k})}\Big)\pm\pi N$, $N=0,1,2,\dots$, (3.10)
in some coordinate system built on the states $(0,z_{1},\dots,z_{n})$ and rotated on the angle $\varphi_{ij}$ in (3.10).
To prove it, we consider the geometrical meaning of the condition of equalizing the eigenvalues as a result of the solutions of the equations (3.1), (3.2). Applying relations (3.3) to the solutions of (3.8) for the nearest $i$, $j$, $i\neq j$, we get
$\dfrac{dz_{i}}{z_{i}dt}=\dfrac{dz_{j}}{z_{j}dt}$; $z_{j}(t)=\dfrac{z_{j}(\tau_{k})}{z_{i}(\tau_{k})}z_{i}(t)$, $i,j=1,\dots,n$, $k=1,\dots,(n-1)$, (3.10a)
where the last equality defines a hyperplane parallel to the axis $z_{i}=0$, $z_{j}=0$ in the coordinate system $(0,z_{1},\dots,z_{n})$. By rotating this coordinate system with respect to that axis, a coordinate system is found where the equations (3.10a) are transformed into the equalities for the state variables in the form (3.10). The corresponding angle of rotation of the coordinate plane $(0,z_{i},z_{j})$ is determined by relation (3.10). Due to the arbitrariness of $k=1,\dots,(n-1)$, $i,j=1,\dots,n$, the foregoing holds true also for any two components of the state vector and for each interval of discretization. By carrying out the sequence of such ($n-1$) rotations, we come to the system $(0,\hat z_{1},\dots,\hat z_{n})$, where all the state variables are indistinguishable in time. •
Comments. If a set of the discrete moments ($\tau_{k}^{1}$, $\tau_{k}^{i}$, $\tau_{k}^{N_{k}}$) exists (for each optimal control $v_{k}$), then a unique solution of the optimization problem is reached by choosing a minimal interval $\tau_{k}^{i}$ for each $v_{k}$, which accomplishes the transformation of the above system to the origin of the coordinate system in a minimal time.
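A minimal numeric illustration of Theorem 3.2, assuming the convention $(\hat z_{i},\hat z_{j})^{T}=\hat G_{ij}(z_{i},z_{j})^{T}$ and the angle formula of (3.10); the state values are arbitrary:

```python
import math

def equalizing_angle(zi, zj):
    # angle (3.10): phi = arctan((zj - zi)/(zj + zi)), assuming zi + zj != 0
    return math.atan2(zj - zi, zj + zi)

zi, zj = 2.0, 0.5          # arbitrary states at a moment tau_k
phi = equalizing_angle(zi, zj)

# rotation with matrix [[cos, sin], [-sin, cos]] applied to (zi, zj)
zi_hat = math.cos(phi) * zi + math.sin(phi) * zj
zj_hat = -math.sin(phi) * zi + math.cos(phi) * zj

# the rotated variables are equalized (indistinct), per (3.10)
assert abs(zi_hat - zj_hat) < 1e-12
```

The sign convention of the angle depends on which direction the rotation matrix is applied; with the transposed matrix the angle changes sign, which is absorbed by the $\pm\pi N$ term in (3.10).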
The macrovariables are derived as a result of memorizing the states $z_{i}(\tau_{k})$, $i,k=1,\dots,n$, being an attribute of the applied control in (3.2), which are fixed along the extremal segments. The transformation $(\hat G\times G)$ transfers the $\{x_{i}\}$ to new macrovariables $\{\hat z_{i}\}$, whose pair-wise indistinctness at the successive moments $\{\tau_{k}\}$ agrees with the reduction of the number of independent macrocoordinates. This reduction has been referred to as the states' consolidation, accompanied by the memorization of the states $z(\tau_{k}^{i})$ at the successive equalization of the relative phase speeds in (3.10a). The model dynamics determines an essence of the mechanism of the states' ordering [4,8]. Therefore, the problem of forming a sequentially consolidated macromodel is solved in a real-time process of the optimal motion, combined with the identification of the renovated operator, whereas both the equalization and the cooperation follow from the solution of the optimal problem for the path functional [4]. The macromodel is reversible within the discrete intervals and is irreversible out of them. Thus, a general structure of the initial object (1.1) (used also in physics) allows modeling a wide class of complex objects with superimposing processes, described by the equations of irreversible thermodynamics ([2], sec.7). According to the extremal properties of the information entropy, the segments of the extremals approximate the stochastic process with a maximal probability, i.e., without losing information about it.
This also allows us to get the optimal and nonlinear filtration of the stochastic process within the discrete intervals. The process is initiated by applying a starting step-wise control in the form
$v(t_{o})=-2E[x(s_{o})]$ (3.11)
at $\tau_{o}=s_{o}+o$, where $s_{o}$ is the moment of the control's start, and $x(s)$ are the object's initial conditions, which also include the given correlations $r(s)=E[x_{t}(s)x_{t}(s)^{T}]$ and/or $b(s)=1/2\,\dot r(s)$. These initial conditions also determine a starting external control
$u(\tau_{o})=b(\tau_{o})r^{-1}(\tau_{o})v(\tau_{o})$, (3.12)
where $v(\tau_{o})=-2x(\tau_{o})$, and a nonrandom state can be defined via
$x(\tau_{o})\cong|r(\tau_{o})|^{1/2}$. (3.13)
This control imposes the constraint (2.18a) in the form (2.18d), which allows starting the dynamic process. The above initial conditions identify
$A(\tau_{o})=b(\tau_{o})r^{-1}(\tau_{o})$, $A(\tau_{o})=(\lambda_{i}(\tau_{o}))$, (3.14)
which is used to find a first time interval $t_{1}=\tau_{1}-\tau_{o}$ of the movement between the punched localities, where the next matrix's elements are identified, and so on. A specific of the considered optimal process consists of the computation of each following time interval (where the identification of the object's operator will take place and the next optimal control is applied) during the optimal movement under the current optimal control, formed by a simple function of the dynamic states. In this optimal dual strategy, the IPF optimum predicts each extremal segment's movement not only in terms of a total functional path goal, but also by setting at each following segment the renovated values of this functional's controllable shift and diffusion, identified during the optimal movement, which currently correct this goal.
4. The consolidation of the model's processes in a cooperative information network (IN).
The IN code
Conventional information science, considering generally an information process, traditionally uses the probability measure for the random states and the corresponding Shannon's entropy measure as the uncertainty function of the states [21,16,15, other]. The entropy functional defines the conditional quantity of information for the compared stochastic processes $x_{t}$, $x_{t}^{1}$, and the IPF allows building a dynamic information network for the corresponding macroprocesses. The fulfillment of condition (2.26) connects the extremal segments of a multi-dimensional process, leading to the segments' cooperation, while the realization of condition (3.10) reduces the number of the model's independent states, carrying a states' consolidation. Both these specifics allow grouping the cooperative macroparameters and aggregating their equivalent dimensions in an ordered hierarchical information network (IN), built on a multi-dimensional spectrum of the system's operator, which is identified during the optimal motion. The IN organization includes the following steps: arranging the extremal segments in an ordered sequence; finding an optimal mechanism of connecting the arranged segments into a sequence of their consolidating states, whose joint dimensions would be sequentially deducted from the initial model's dimension; and forming a hierarchy of the adjoining cooperating dimensions. Below we consider the formal relations and the procedure implementing these steps, which are based on the variation and invariant conditions following from the initial VP (sec.2). We illustrate these relations using the $n$-dimensional spectrum of the complex eigenvalues $\lambda_{io}=\alpha_{io}\pm j\beta_{io}$ for the model's starting matrix $A(t_{o})$, which we assume all different, with the ratios $\gamma_{io}=\beta_{io}/\alpha_{io}$, $\alpha_{io}\neq 0$, $i=1,\dots,n$.
The segments' cooperation produces a chain of the matrix's eigenvalues $\lambda_{it}^{k}=\alpha_{it}^{k}\pm j\beta_{it}^{k}$ at each segment's end and $\lambda_{it}^{k}(t^{k}+o)=\alpha_{it}^{k}(t^{k}+o)\pm j\beta_{it}^{k}(t^{k}+o)$ at the beginning of the following segment accordingly, $i=1,\dots,n$; $k=1,\dots,N$ is the number of the DP ($t^{k}+o$) where the cooperation of $\lambda_{it}^{k}$ and $\lambda_{it}^{k}(t^{k}+o)$ takes place. A feasible IN joins the multiple nodes, while each of its nodes collects a group of the equal eigenvalues gained in the cooperative process. The optimal condition (2.12, 2.18, 2.36) for such groups of the eigenvalues, considered at a moment of cooperation $t^{k}+o$, acquires the form
$\min Tr[\lambda_{it}(t^{k}+o)]=\min\Big[\sum_{r=1}^{m}g_{r}\lambda_{g_{r}}\Big]$, (4.1)
where $g_{r}$ is an $r$-th group with its joint eigenvalue $\lambda_{g_{r}}$, and $m$ is the total number of the groups (the IN's nodes). The cooperation of the corresponding states' variables is carried out by applying the transformation (2.10) to the related $\lambda_{g_{r}}$. For the realization of (4.1) we apply the invariant (2.34), which is concretized in the form
$\lambda_{io}t_{io}-\lambda_{it}t_{io}=inv$, (4.2)
where $\lambda_{io}$ is fixed at each moment $t_{io}$ of the segment's beginning, leading to
$\lambda_{io}t_{io}=inv$ and $\lambda_{it}t_{io}=inv$. (4.2a)
By the interval's end $\Delta t_{io}$, $t_{i}=t_{it}$, the eigenvalue $\lambda_{i}(t_{i})=\lambda_{it}$ satisfies the following solution at the applied control:
$\lambda_{i}(t_{i})=\lambda_{io}\exp(\lambda_{io}\Delta t_{io})(2-\exp(\lambda_{io}\Delta t_{io}))^{-1}$, $i=1,\dots,k,\dots,n$. (4.3)
Substituting the invariants (4.2), (4.2a) in (4.3), we get
$\lambda_{it}(t_{i})=inv\cdot\exp(inv)(2-\exp(inv))^{-1}=inv$.
(4.3a) Applying (4.2), (4.2a) to the initial model's complex eigenvalues, we come to the local invariants
$\alpha_{io}t_{i}=inv=\mathbf{a}_{o}$, $\beta_{io}t_{i}=inv=\mathbf{b}_{o}$, $\alpha_{i}t_{i}=inv=\mathbf{a}$, $\beta_{i}t_{i}=inv=\mathbf{b}$, $t_{i}=t_{it}$, $i=1,\dots,n$, (4.4)
with $\alpha_{io}$, $\alpha_{i}$ and $\beta_{io}$, $\beta_{i}$ representing the real and imaginary information speeds (according to relations (2.14, 2.14a)), while the invariants $\mathbf{a}_{o}$, $\mathbf{b}_{o}$ measure the quantity of real and imaginary information produced during the interval $t_{i}$ by its end; the invariants $\mathbf{a}$, $\mathbf{b}$ measure the quantity of real and imaginary information produced at the ending moment of the interval, prior to the segment's cooperation. Both invariants are used constructively in building the cooperating chain of the eigenvalues satisfying the condition (2.26) in the form
$|\lambda_{i}(t^{k})|=|\lambda_{j}(t^{k}+o)|$ (4.5)
for each pair of cooperating segments, whose eigenvalues satisfy the solutions (4.3). A successive application of (4.3), (4.5) brings a cooperation of the extremals' eigenvalue segments at each DP. Solving jointly (4.3), (4.5) with the invariant relations (4.4), we find the equation for the invariants at the DP:
$2(\sin(\gamma\mathbf{a}_{o})+\gamma\cos(\gamma\mathbf{a}_{o}))-\gamma\exp(\mathbf{a}_{o})=0$, $\gamma=\beta_{io}/\alpha_{io}$, $\alpha_{io}\neq 0$, $\beta_{io}\neq 0$, at $\mathrm{Im}\,\lambda_{j}(t^{k})=0$. (4.6)
Using the relation $\mathbf{a}=|\lambda_{i}(t_{i})|t_{i}$ and the representation of (4.3) via the invariants, we get these invariants' connection:
$\mathbf{a}=\mathbf{a}_{o}\exp(-\mathbf{a}_{o})(1-\gamma^{2})^{1/2}(4-4\exp(-\mathbf{a}_{o})\cos(\gamma\mathbf{a}_{o})+\exp(-2\mathbf{a}_{o}))^{-1/2}$. (4.7)
This allows us to evaluate both invariants. From the solution of (4.6) at $\gamma\to 1$ we get a minimal $\mathbf{a}_{o}(\gamma\to 1)=0$, which also brings the minimal $\mathbf{a}(\gamma=1)=0$ from (4.7). The first one at $\gamma\to 0$ limits a maximal quantity of real information produced at each segment; the second one at $\gamma\to 1$ restricts to a minimum the information contribution necessary for the cooperation and, therefore, puts a limit on the information cooperation.
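Relation (4.7) can be evaluated directly. The sketch below uses an illustrative value $\mathbf{a}_{o}=0.7$ (since $\mathbf{a}_{o}$ itself comes from the transcendental equation (4.6)) and checks the limiting behavior stated above: $\mathbf{a}$ vanishes at $\gamma=1$:

```python
import math

def a_inv(a_o, gamma):
    # relation (4.7): a = a_o*exp(-a_o)*(1 - g^2)^(1/2) *
    #   (4 - 4*exp(-a_o)*cos(g*a_o) + exp(-2*a_o))^(-1/2)
    num = a_o * math.exp(-a_o) * math.sqrt(max(0.0, 1.0 - gamma ** 2))
    den = math.sqrt(4.0 - 4.0 * math.exp(-a_o) * math.cos(gamma * a_o)
                    + math.exp(-2.0 * a_o))
    return num / den

# at gamma -> 1 the invariant a vanishes (the limit on cooperation)
assert a_inv(0.7, 1.0) == 0.0
# a decreases toward that limit and stays positive before it
assert a_inv(0.7, 0.5) > a_inv(0.7, 0.9) > 0.0
```

The factor $(1-\gamma^{2})^{1/2}$ makes the vanishing at $\gamma=1$ exact regardless of the value taken for $\mathbf{a}_{o}$.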
It is also seen that relation (4.6), as a function of $\gamma$, reaches its extreme at $\gamma=0$, which brings $\mathbf{a}_{o}(\gamma=0)\cong 0.768$, $\mathbf{a}(\gamma=0)\cong 0.23193$. Actually, a feasible admissible diapason of $\gamma_{io}=\gamma$, following from the model simulation [33], is $0.00718\leq\gamma_{io}\leq 0.8$, with the condition of a model equilibrium at $\gamma=0.5$, $\mathbf{a}_{o}(\gamma=0.5)\cong\ln 2$, $\mathbf{a}(\gamma=0.5)\cong 0.25$. The cooperation of the real eigenvalues, according to (4.7b), reduces the condition (4.1) to the form
$\min\Big[\sum_{r=1}^{R}g_{r}\lambda_{g_{r}}\Big]=\min\Big[\sum_{r=1}^{R}g_{r}\alpha_{g_{r}}\Big]$, (4.8)
where $\alpha_{g_{r}}$ is a joint real eigenvalue for each group, satisfying the requirement of a positive eigenvalue $\alpha_{g_{r}}>0$ at applying the optimal control (2.29, 2.30). The number of the joint eigenvalues in a group we find starting with a doublet as a minimal cooperative unit (fig.1). The minimal $\alpha_{g_{r}}$ for the doublet with two starting real eigenvalues at $|\alpha_{io}|<|\alpha_{ko}|$ can be reached if, by the moment $t_{i}$ when $\alpha_{it}=\mathbf{a}/t_{i}$, the initial eigenvalue $\alpha_{ko}$ brings eq. (4.4) to the form
$\alpha_{k}(t_{i})=\alpha_{ko}\exp(-\alpha_{ko}(t_{i}-t_{ko}))[2-\exp(-\alpha_{ko}(t_{i}-t_{ko}))]^{-1}$,
whose solution $\alpha_{k}(t_{i})$ will coincide with $\alpha_{it}$ by the end of the duration $t_{i}$, and $\alpha_{g_{r}}=2\alpha_{it}$. The fulfillment of $|\alpha_{it}|/|\alpha_{io}|=|\mathbf{a}/\mathbf{a}_{o}|$ at $\gamma=(0.0-0.8)$, $|\mathbf{a}|<|\mathbf{a}_{o}|$, guarantees the decreasing of both $\alpha_{it}$ and $\alpha_{k}(t_{i})$, fulfilling the inequalities
$|\alpha_{it}|<|\alpha_{io}|$, $|\alpha_{k}(t_{i})|<|\alpha_{ko}|$. (4.9)
Let us consider also a triplet as an elementary group of three cooperating segments with the initial eigenvalues $|\alpha_{jo}|<|\alpha_{io}|<|\alpha_{ko}|$, where the minimal eigenvalue $\alpha_{jo}$ of a third segment we add to the previous doublet (for convenience of the comparison) (fig.1).
Then the minimal $\alpha_{g_{r}}$ can be reached (at other equal conditions) if the moment of joining the first two eigenvalues (with the initials $|\alpha_{io}|<|\alpha_{ko}|$) coincides with the moment $t_{j}$ of forming the minimal $\alpha_{jt}=|\mathbf{a}_{o}|/t_{j}$ for the third eigenvalue. Then an additional discrete interval is not required. Compared with the related doublet, we have $|\alpha_{jt}|<|\alpha_{it}|$, where each minimal eigenvalue is limited by a given ranged initial spectrum. Therefore, a minimal optimal cooperative group is a triplet with $\alpha_{g_{r}}=3\alpha_{jt}$. For a space-distributed macromodel [8], the number of cooperating segments is three, one for every space dimension. This also limits the maximal number of the cooperating segments by three in each dimension for every elementary cooperative unit. The selection of the triplets' sequence and their arrangement into the IN is possible after ranging the initial eigenvalues $(\alpha_{io})_{i=1}^{n}$ in their decreasing values:
$|\alpha_{1o}|>|\alpha_{2o}|>\dots>|\alpha_{io}|>|\alpha_{i+1,o}|>\dots>|\alpha_{no}|$. (4.9a)
Applying the needle controls at the moment of cooperation (for example, at $t_{i}+o$ for the doublet) takes place when, in addition to the execution of (4.4) in the form $|\alpha_{i}(t_{i}+o)|=|\alpha_{i}(t_{i})+\alpha_{k}(t_{i})|$, the cooperated sum reaches a minimum among the sums of the eigenvalues prior to the cooperation:
$|\alpha_{i}(t_{i}+o)|+|\alpha_{k}(t_{i}+o)|=2|\alpha_{i}(t_{i})|=|\alpha_{g_{i}}|=\min(|\alpha_{i}(t_{i})|+|\alpha_{k}(t_{i})|)$, (4.10)
and the cooperated sum also satisfies a maximum condition regarding any sum of the two following eigenvalues:
$2|\alpha_{g_{i}}|=|\alpha_{k}(t_{i}+o)|+|\alpha_{i}(t_{i}+o)|=\max[|\alpha_{i+1}(t_{i+1})|+|\alpha_{i+2}(t_{i+2})|]$. (4.11)
Because for the ranged spectrum $(\alpha_{io})_{i=1}^{n}$ the conditions (4.10), (4.11) are satisfied, there are also fulfilled the relations $(|\alpha_{i-1,o}|+|\alpha_{io}|)=\max[|\alpha_{io}|+|\alpha_{i+1,o}|]$, as well as $|\alpha_{io}|=\max[|\alpha_{i+1,o}|]$.
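The ranging (4.9a) and the grouping of the ranged spectrum by threes can be sketched as follows (the spectrum values are illustrative, and `arrange_triplets` is a hypothetical helper name, not a function from the paper):

```python
# Sketch of the triplet arrangement: rank |alpha_io| in decreasing order,
# per (4.9a), and cut the ranged spectrum into consecutive triples.
def arrange_triplets(alphas):
    ranged = sorted(alphas, key=abs, reverse=True)        # ranging (4.9a)
    return [ranged[i:i + 3] for i in range(0, len(ranged) - 2, 3)]

spectrum = [0.21, 1.9, 0.55, 3.1, 0.9, 1.2]               # n = 6 -> 2 triplets
triplets = arrange_triplets(spectrum)
assert triplets == [[3.1, 1.9, 1.2], [0.9, 0.55, 0.21]]

# within each triplet the third (minimal) eigenvalue joins the doublet
for t in triplets:
    assert abs(t[0]) > abs(t[1]) > abs(t[2])
```

This only reproduces the ordering step; the cooperation moments themselves follow from the invariants (4.4) and the conditions (4.10), (4.11).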
The formalization of this procedure leads to a minimax representation of the eigenvalues by the Courant-Fischer variation theorem [22], which brings the condition of a sequential ranging for the macromodel's eigenvalue spectrum. The result follows from a successive application of the maximum condition to the minimal condition for the Rayleigh ratio $q(x)=(x,Ax)/(x,x)$ for a macromodel's matrix $A>0$, which leads to $\lambda_{i}\geq\dfrac{(x,Ax)}{(x,x)}\geq\lambda_{i+1}$, or in our case to $|\alpha_{i-1}^{g}|>|\alpha_{i}^{g}|>|\alpha_{i+1}^{g}|$. The geometrical meaning is illustrated by an ellipsoid whose axes represent the model's eigenvalues. The method starts with the maximal eigenvalue $|\alpha_{1}(t_{1})|=\max_{x}q(x)$, taken from a maximal axis of the ellipsoid's orthogonal cross-section, which is rotating up to reaching a minimal axis of the ellipsoid's cross-section, which should coincide with the following lesser eigenvalue $|\alpha_{2}(t_{2})|<|\alpha_{1}(t_{1})|$, and so on. Because the model works with ranging the current $|\alpha_{i}(t_{i})|$, the procedure also brings a monotonous sequence of the starting eigenvalues in (4.9a). There are two options in forming the IN: (1) identify the IN by collecting the current number $g_{r}$ of equal $\alpha_{g_{r}}$ for each cooperative group (as an IN node), and then arranging these nodes into a whole IN; (2) building an optimal IN by collecting the triplets' $g_{r}=3$, $\alpha_{g_{r}}=3\alpha_{r}$, and using the invariant relations (4.4, 4.6a, 4.7). In both cases, the current eigenvalues are identified under the action of the applied control by the relations $\lambda_{i}=1/2\,\dot r_{i}r_{i}^{-1}$, where $r_{i}(t)=E[\tilde x_{i}(t)^{2}]$, $i=1,\dots,n$, are the covariation functions, and $\tilde x_{i}(t)$ is the observed microlevel's process. These eigenvalues also allow us to determine the invariants (4.4), calculate $\alpha_{r3}$, and the triplets' number for the optimal IN with $n$ initial eigenvalues.
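The identification relation $\lambda_{i}=1/2\,\dot r_{i}r_{i}^{-1}$ can be tested on a process whose covariation grows as $r(t)=r_{0}\exp(2\alpha t)$, for which the estimate must return $\alpha$ (the numbers are illustrative):

```python
import math

alpha, r0 = 0.37, 2.0   # illustrative "true" eigenvalue and starting covariation

def r(t):
    # covariation function of a test process: r(t) = r0 * exp(2*alpha*t)
    return r0 * math.exp(2.0 * alpha * t)

# identification: lambda = (1/2) * rdot / r, with rdot taken numerically
h, t = 1e-6, 1.0
rdot = (r(t + h) - r(t - h)) / (2.0 * h)
lam_est = 0.5 * rdot / r(t)
assert abs(lam_est - alpha) < 1e-6
```

In practice $r_{i}(t)$ would be estimated from samples of the observed microlevel process $\tilde x_{i}(t)$ rather than given in closed form.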
The sequential cooperation of the ranged eigenvalues by threes leads to the repeating of the initial triplet's cooperative process for each following triplet, with the preservation of the two basic eigenvalue ratios
$\gamma_{1}^{\alpha}=\dfrac{\alpha_{1o}}{\alpha_{2o}}\to\dfrac{\alpha_{io}}{\alpha_{i+1,o}}$, $\gamma_{2}^{\alpha}=\dfrac{\alpha_{2o}}{\alpha_{3o}}\to\dfrac{\alpha_{i+1,o}}{\alpha_{i+2,o}}$,
satisfying the equations (4.12), a pair of transcendental relations connecting $\gamma_{1}^{\alpha}$, $\gamma_{2}^{\alpha}$ with the invariants $\mathbf{a}(\gamma)$, $\mathbf{a}(\gamma_{1}^{\alpha}\gamma)$, where the parameter $\gamma$ is found from relation (4.7) via the known invariants (4.4), which are common for the optimal model, as the $\gamma$ is. The system of equations (4.3)-(4.12) allows the restoration of the model's macrodynamics $x_{i}(t)=F(x_{io},t,\tau_{i},\lambda_{io}(t))$ by knowing the initial $\lambda_{io}$, $x_{io}$, and finding $\tau_{i}$ via the invariants, which also determine $\gamma_{1}^{\alpha}$, $\gamma$ and, as a result, the structure of the optimal IN. The implementation of the above equations leads to the creation of successively integrated information macrostructures that accompany the increasing of the intervals' sequence $\tau_{i}$, $i=1,\dots,N$, and the decreasing of the consolidated real eigenvalue $\alpha_{r3}=\mathbf{a}/t_{r}$, $r=3,5,7,\dots,n$. The sequence of the cooperating ordered eigenvalues $\alpha_{r3}$, $r=3,5,7,\dots,m$, moves to its minimal $\alpha_{m3}$ with the IN minimal dimension $n\to m_{o}=1$ for a final node. The optimal IN's triplet structure includes the doublet as a primary element, with adding a third eigenvalue to the first doublet, and then adding to each consolidated $\alpha_{r3}$ the following doublet. The considered sequence of the triplets' optimal processes transfers the consolidating $\alpha_{r3}$ onto the switching control line $\mathbf{a}=\alpha_{m3}t_{m}=inv$, at which the minimal $|\alpha_{m3}|$ for each $m$-node will be achieved at the node's cooperative moment $t_{m}$. This strategy is executed for the spectrum of the initial eigenvalues, defined by the multiplicators (4.12), with the maximal $\gamma_{1}^{\alpha}=2.21$, $\gamma_{2}^{\alpha}=3.89$ at $\gamma=0.5$.
(4.13) Within the distance between the eigenvalues of the spectrum, determined by the above $\gamma_{1}^{\alpha}$, $\gamma_{2}^{\alpha}$, the model represents an optimal-minimal filter. The values $\alpha_{io}$ that are different from the optimal set
$\alpha_{i+1,o}=\alpha_{1o}(0.2567)(0.4514)^{i-1}$, $\alpha_{1o}=\alpha_{\max}$, $\gamma=0.5$, (4.14)
are filtered out and do not affect the IN peculiarities in the practical implementation. At a known $\alpha_{1o}$ and a given ($n$, $\gamma$), the spectrum of the initial eigenvalues, including $\alpha_{no}$, the invariants, and $\gamma_{1}^{\alpha}$, $\gamma_{2}^{\alpha}$, can be found, and the IN's structure is built for the optimal macromodel without using the microlevel [8].
The triplet's genetic code. The model possesses two scales of time: a reversible time that equals the sum of the time intervals on the extremals, $T_{r}=\sum_{i=1}^{n}t_{i}$, and an irreversible life-time that is counted by the sum of the irreversible time intervals $\delta(t_{i})$ between the extremals' windows and the reversible time intervals.
Proposition 4.1. The ratio of the above elementary time intervals is evaluated by the formula
$\delta t_{i}/t_{i}=\Delta S_{\delta i}\,\mathbf{a}_{o}^{-2}-1$, (4.15)
where $\Delta S_{\delta i}$ is an information contribution delivered during the reversible time interval $t_{i}$, and $\mathbf{a}_{o}$ is the invariant.
Proof. Because the needle control connects the extremal segments by transferring information between the segments' windows $\delta(t_{i})$, i.e. from the $i$-segment's information $\Delta S_{\delta i}/t_{i}$ to the ($i+o$)-segment's information $\Delta S_{\delta i}/(t_{i}+\delta t_{i})$, the information contribution from the needle control $\delta\alpha_{i}^{o}$, delivered during $\delta(t_{i})$, is
$\dfrac{\Delta S_{\delta i}}{t_{i}}-\dfrac{\Delta S_{\delta i}}{t_{i}+\delta t_{i}}=\dfrac{\Delta S_{\delta i}\,\delta t_{i}}{t_{i}(t_{i}+\delta t_{i})}=\delta\alpha_{i}^{o}$.
(4.16) From other considerations, $\delta\alpha_{i}^{o}$ is evaluated by an increment of the information production at $\delta(t_{i})$:
$\delta\alpha_{i}^{o}=\dfrac{\partial^{2}\Delta S_{\delta io}}{\partial t^{2}}\,\delta t_{i}$, $\dfrac{\partial^{2}\Delta S_{\delta io}}{\partial t^{2}}=\lim_{\delta t_{i}\to 0}\dfrac{\partial\Delta S_{\delta i}}{\partial t}=-\dfrac{\partial H_{io}}{\partial t}$, $H_{io}=1/2\,\alpha_{i}^{t}$, (4.17)
where, at the applied control, $\alpha_{i}^{t}=\alpha_{i}^{t-1}\exp(\alpha_{i}^{t-1}t_{i})(2-\exp(\alpha_{i}^{t-1}t_{i}))^{-1}$, $\lim_{t_{i}\to 0}\alpha_{i}^{t}=-2\alpha_{i}^{t-1}$, and
$\delta\alpha_{i}^{o}=(\alpha_{i}^{t-1})^{2}\delta t_{i}$. (4.17a)
By substituting (4.17) into (4.16) at $\mathbf{a}_{o}(\gamma)=\alpha_{i}^{t-1}t_{i}$, we get (4.15). •
Proposition 4.2. (1) Each extremal segment's interval $t_{i}$ retains $\mathbf{a}_{o}(\gamma)$ units of the information entropy; (2) the regular control brings the negentropy $\mathbf{a}(\gamma)=\alpha_{i}^{t}t_{i}$ for the interval $t_{i}$, where the control is memorized at each of the segment's localities. The proofs of (1), (2) follow from the invariant relations (4.4) and the essence of the control's actions. •
Corollary 4.1. By evaluating the information contribution on the $t_{i}$-extremal by both the segment entropy's invariant $\mathbf{a}_{o}$ and the regular control's negentropy invariant $\mathbf{a}$, we come to $\Delta S_{\delta i}=\mathbf{a}_{o}-\mathbf{a}$, and
$\dfrac{\delta t_{i}}{t_{i}}=\dfrac{\mathbf{a}_{o}-\mathbf{a}-\mathbf{a}_{o}^{2}}{\mathbf{a}_{o}^{2}}=\delta^{*}(\gamma)$. (4.17b)
Corollary 4.2. The model's life-time ratio is evaluated by the invariant ratio $T_{ir}^{*}=T_{ir}/T_{r}=\delta^{*}(\gamma)$ at the $t_{i}$-extremals:
$T_{ir}^{*}=\dfrac{\mathbf{a}_{o}-\mathbf{a}-\mathbf{a}_{o}^{2}}{\mathbf{a}_{o}^{2}}=\delta^{*}(\gamma)$. (4.18)
Indeed, using $\delta t_{i}=\dfrac{\mathbf{a}_{o}-\mathbf{a}-\mathbf{a}_{o}^{2}}{\mathbf{a}_{o}^{2}}t_{i}$, $T_{ir}=\sum_{i=1}^{n}\delta t_{i}$ and $T_{r}=\sum_{i=1}^{n}t_{i}$, we come to $T_{ir}^{*}=\dfrac{\mathbf{a}_{o}-\mathbf{a}-\mathbf{a}_{o}^{2}}{\mathbf{a}_{o}^{2}}$. •
Comments 4.1. Let us count $\delta^{*}(\gamma)$ at $\gamma\in(0,1)$ and $T_{r}=\sum_{i=1}^{n}t_{i}$. Then the $\delta^{*}(\gamma)$-function takes the values from 0.0908, at $\gamma=0.1$, to 0.848, at $\gamma=1$, with a minimal value 0.089179639, at $\gamma=0.5$. At $n=22$, $t_{n-1}=2642$, we have $T_{ir}=471.225$ at $\gamma=0.5$, with a maximal $T_{ir}=4480.832$. •
Corollary 4.3. A minimal $\delta t_{i}/t_{i}\to 0$ leads to the equality
$\mathbf{a}_{o}(\gamma)-\mathbf{a}(\gamma)-\mathbf{a}_{o}^{2}(\gamma)\to 0$, (4.19)
which is approximated with an accuracy $\delta^{*}\mathbf{a}_{o}^{2}=0.044465455$ at $\gamma=0.5$. • (4.19a)
Corollary 4.4.
Because each extremal segment's interval $t_{i}$ retains $\mathbf{a}_{o}(\gamma)$ units of the information entropy, and the regular control brings the negentropy $\mathbf{a}(\gamma)=\alpha_{i}^{t}t_{i}$ for the interval $t_{i}$, while a needle control is also applied on this interval, the fulfillment of eq. (4.19) means that the information contribution delivered by the needle control for the interval $t_{i}$ is evaluated by the invariant $\mathbf{a}_{o}^{2}(\gamma)$. Therefore (4.19) expresses the balance of information at each $t_{i}$-extremal at the condition $\delta t_{i}/t_{i}\to 0$, which at $\gamma=0.5$ is approximated with the accuracy (4.19a). •
Comments 4.2. Since the needle control joins the extremal segments by delivering the information $\mathbf{a}_{o}^{2}(\gamma)$, we might assume that $\delta^{*}\mathbf{a}_{o}^{2}$ represents a defect of the $\mathbf{a}_{o}^{2}$ information, which is sealed after the cooperation. Taking this into account leads to a precise fulfillment of the balance equation in the form
$\mathbf{a}_{o}(\gamma)-\mathbf{a}(\gamma)-\mathbf{a}_{o}^{2}(\gamma)+\delta^{*}\mathbf{a}_{o}^{2}(\gamma)=0$, (4.19b)
while $\delta^{*}\mathbf{a}_{o}^{2}$ encloses the information spent on the segments' cooperation. •
Proposition 4.3. The information structure of a triplet. A triplet, formed by the three-segment cooperative dynamics during a minimal time, encloses the information $4\mathbf{a}_{o}^{2}+3\mathbf{a}\cong 4$ bits at $\gamma=0.5$, while each of the IN's triplet nodes conceals the information $\mathbf{a}_{o}^{2}+\mathbf{a}\cong 1$ bit.
Proof. The triplet's dynamics include two extremals, joining first into a doublet, which then cooperates with a third extremal segment (fig.1a). Forming the triplet during a minimal time requires building the doublet during the time interval of the third extremal segment, while all three dynamic processes start simultaneously with applying the three starting controls. Each of the first two extremals consists of two discrete intervals ($t_{11}^{i}$, $t_{12}^{i}$; $t_{21}^{i}$, $t_{22}^{i}$), where $i$ is the triplet's number: $t_{11}^{i}$, $t_{12}^{i}$ are the first and second discrete intervals of the first dynamic process, $t_{21}^{i}$, $t_{22}^{i}$ are the first and second discrete intervals of the second dynamic process, and $t_{31}^{i}$ is a single discrete interval of the third dynamic process.
The above requirement for a triplet with a minimal process time implements the following equations on the first discrete interval for the first and second dynamics:
$-\alpha_{1o}^{i}t_{11}^{i}+\alpha_{12}^{i}t_{12}^{i}=-\mathbf{a}_{o}+\mathbf{a}+\mathbf{a}_{o}^{2}-\delta^{*}\mathbf{a}_{o}^{2}$,
$-\alpha_{2o}^{i}t_{21}^{i}+\alpha_{21}^{i}t_{21}^{i}=-\mathbf{a}_{o}+\mathbf{a}+\mathbf{a}_{o}^{2}-\delta^{*}\mathbf{a}_{o}^{2}$, (4.20a)
where $\alpha_{1o}^{i}t_{11}^{i}=\mathbf{a}_{o}$, $\alpha_{2o}^{i}t_{21}^{i}=\mathbf{a}_{o}$, $\alpha_{21}^{i}t_{21}^{i}=\mathbf{a}$, $\alpha_{12}^{i}t_{12}^{i}=\mathbf{a}$. This means that at each of these discrete intervals the information balance is fulfilled with the accuracy $\delta^{*}\mathbf{a}_{o}^{2}$. The first and second dynamics, at the second time interval, convey the summary contribution $\alpha_{13}^{i}t_{13}^{i}+\alpha_{23}^{i}t_{23}^{i}+2\delta^{*}\mathbf{a}_{o}^{2}$, followed by applying the needle control, which joins both dynamics into the doublet. This brings a balance condition in the form
$\alpha_{13}^{i}t_{13}^{i}+\alpha_{23}^{i}t_{23}^{i}+2\delta^{*}\mathbf{a}_{o}^{2}=\mathbf{a}_{o}^{2}$.
(We count here the information contribution from a defect $\delta^{*}\mathbf{a}_{o}^{2}=\delta^{*}(\gamma)\mathbf{a}_{o}^{2}$ at both intervals.) Joining the third segment's discrete interval $t_{31}^{i}$ with the doublet at the IN node requires applying another needle control, acting at the end of the third interval (fig.1a). This leads to the balance equation for the third discrete interval in the form
$-\alpha_{3o}^{i}t_{31}^{i}+\alpha_{31}^{i}t_{31}^{i}+\mathbf{a}_{o}^{2}-\delta^{*}\mathbf{a}_{o}^{2}\cong 0$, at $\gamma=0.5$, where $\alpha_{3o}^{i}t_{31}^{i}=\mathbf{a}_{o}$, $\alpha_{31}^{i}t_{31}^{i}=\mathbf{a}$. (4.20b)
It is seen that the total information delivered to the triplet equals $4\mathbf{a}_{o}^{2}+3\mathbf{a}$, which compensates for the information being spent on the triplet's cooperative dynamics:
$3\mathbf{a}_{o}+\alpha_{12}^{i}t_{12}^{i}+\alpha_{22}^{i}t_{22}^{i}+\alpha_{31}^{i}t_{31}^{i}+\alpha_{13}^{i}t_{13}^{i}+\alpha_{23}^{i}t_{23}^{i}+2\delta^{*}\mathbf{a}_{o}^{2}$.
(4.20c) Let us verify this result by a direct computation of the contributions $\alpha_{13}^{i}t_{13}^{i}$ and $\alpha_{23}^{i}t_{23}^{i}$, using the following formulas for each of them:
$\alpha_{13}^{i}t_{13}^{i}=\alpha_{11}^{i}t_{11}^{i}\exp(\alpha_{11}^{i}(t_{123}^{i}-t_{11}^{i}))[2-\exp(\alpha_{11}^{i}(t_{123}^{i}-t_{11}^{i}))]^{-1}$,
$\alpha_{23}^{i}t_{23}^{i}=\alpha_{21}^{i}t_{21}^{i}\exp(\alpha_{21}^{i}(t_{123}^{i}-t_{21}^{i}))[2-\exp(\alpha_{21}^{i}(t_{123}^{i}-t_{21}^{i}))]^{-1}$,
where
$\alpha_{11}^{i}(t_{123}^{i}-t_{11}^{i})=\mathbf{a}(t_{123}^{i}/t_{11}^{i}-1)=\mathbf{a}(\gamma_{13}-1)$, $\alpha_{21}^{i}(t_{123}^{i}-t_{21}^{i})=\mathbf{a}(t_{123}^{i}/t_{21}^{i}-1)=\mathbf{a}(\gamma_{23}-1)$, (4.20d)
whose parameters at $\gamma=0.5$ take the values $\gamma_{13}\cong 3.9$, $\gamma_{23}\cong 2.215$, $\mathbf{a}\cong 0.252$. The computation shows that the regular controls, acting at $t_{13}$ and $t_{23}$, deliver the information $\mathbf{a}(\gamma_{13}-1)=0.7708$ and $\mathbf{a}(\gamma_{23}-1)=0.306$ accordingly, while the macrodynamic process at these intervals consumes $\alpha_{13}^{i}t_{13}^{i}\cong 0.232$ and $\alpha_{23}^{i}t_{23}^{i}\cong 0.1797$. Including the defect $\delta^{*}(\gamma)$, we get the information difference $0.50088\cong\mathbf{a}_{o}^{2}$ (at $\gamma=0.5$). This means that both regular controls, acting on the second doublet's intervals, provide the necessary information to produce the needle control, and therefore the doublet satisfies the balance equation without needing additional external information for the cooperation. The doublet's cooperation with the third extremal segment forms the triplet's IN node, which encloses the information contribution from both the doublet's and the third segment's needle controls (4.20b), providing the defect $\delta^{*}\mathbf{a}_{o}^{2}$ that satisfies the balance in (4.20b). The triplet's information $4\mathbf{a}_{o}^{2}+3\mathbf{a}\cong 2.75$ (at $\gamma=0.5$) is measured in Nats (according to the basic formula for entropy [1]), or by $3.96\cong 4$ bits in terms of the $\log_{2}$ measure. Because the IN's triplet node consists of the doublet, which seals the information $2\delta^{*}\mathbf{a}_{o}^{2}$, and the third segment, which transfers the information $\mathbf{a}+\mathbf{a}_{o}^{2}-\delta^{*}\mathbf{a}_{o}^{2}\cong 0.70535$ Nats to the node, the total node's information is $\mathbf{a}_{o}^{2}(\gamma)+\mathbf{a}(\gamma)\cong 1.0157\cong 1$ bit. •
Comments 4.3.
The triplet's regular and needle controls together produce four switches (fig. 1a), which carry information of 4 bits. Since each switch can encode a one-bit symbol, or a single letter, it follows that a triplet is a carrier of a four-letter information code. This is a triplet's genetic code, initiated at the triplet's formation. Therefore, the generation of an external code with the same information switches, applied to the given initial eigenvalue spectrum, would be able to restore the triplet's information structure. This means that such a code might reproduce the triplet's dynamics, which the genetic code had encoded. •
Comments 4.4. The triplet's information structure could serve as an information model of a DNA triplet, which is the carrier of the DNA four-letter code. •
Finally, a time-space sequence of the applied controls, generating the doublets and triplets, represents a discrete control logic, which creates the IN's code as a virtual communication language and an algorithm of a minimal program. The code is formed by a time-space sequence of the applied inner controls (both regular and needle), which automatically fixate the ranged discrete moments (DP) of the successive equalization of the model's phase speeds (eigenvalues) in the process of generation of the macromodel's spectrum. The equalized speeds and the corresponding macrostate variables are memorized at the DPs by the optimal control actions. A selected sequence of the minimal nonrepeating DPs, produced by a minimal ranged spectrum of eigenvalues and the corresponding initial macrostates, generates the optimal IN's code, which initiates the ranged sequence of the IN's doublets, cooperating sequentially into triplets. The optimal code consists of a sequence of double states, memorized at the DPs {t_i}_{i=1}^{n−1}, and each n-dimensional model is encoded by n intervals of DP.
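The unit arithmetic behind these counts can be checked directly. In the sketch below, the nats-to-bits conversion uses only the ratio ln 2, while the switch-to-letter mapping is a hypothetical illustration (the two-symbol alphabet and the function names are assumptions made here, not constructions from the paper):

```python
import math

def nats_to_bits(nats: float) -> float:
    """Convert an information amount from nats (base e) to bits (base 2)."""
    return nats / math.log(2)

# The triplet's information 4*a_o + 3*a ~ 2.75 Nats (value quoted in the text):
print(round(nats_to_bits(2.75), 2))  # ~3.97, matching the quoted "3.96 ~ 4 bits"

# Hypothetical reading of the four control switches as a 4-letter word,
# one 1-bit symbol per switch (illustrative alphabet, not from the paper):
ALPHABET = {0: "A", 1: "B"}

def triplet_word(switches):
    """Map four binary switch states to a 4-letter code word."""
    assert len(switches) == 4, "a triplet produces exactly four switches"
    return "".join(ALPHABET[s] for s in switches)

print(triplet_word([1, 0, 1, 1]))  # -> BABB
```
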
Each triplet joins and memorizes three such discrete intervals, generating the triplet's digital code. The above results reveal the procedure of the transformation of a dynamic information into a triplet's code. •
Cooperative information processes (macro associations) produce information geometrical structures [6], distributed in space (like various forms of pictures, images, etc.), which express their information contents independently of the physical medium that materializes this information. This information geometry generally takes shapes of Riemann geometry, where an information process proceeds along a geodesic line on a particular surface, and a vector-speed depends on the surface's curvature [7]. An information attraction arises between these information geometrical structures, whose intensity is defined by the Riemann curvatures of the interacting information structures' geometries. Thus, the space curvature is both a result of cooperative macrodynamics and a measure of the information attraction, which, potentially, is physically materialized in gravitation. Moreover, such an information surface consists of a cellular geometry, where each cell encloses a code symbol, and the whole surface's structure enfolds its genetic code, which could be transmitted to other information structures during a mutual attraction and communication. (Such a cell is an elementary information model of a graviton) [7]. Any naturally made curvature conceals genetic information, which is a source of the curvature's formation.

5. The IPF macromodel's singular points and the singular trajectories

For the considered concentrated and distributed macromodels, the analysis of the existence and uniqueness of singular points and/or singular trajectories is of principal and sufficient interest.
The pairwise equalization of the relative phase speeds at each discrete point for the concentrated model (2.26) leads to singularities of the dynamic operator, which follow from the consideration below. We will show that the dynamics and geometry at each of the singular points of the spatial macromodel [8] are bound, and that the singularities are associated with the model's cooperative phenomena. We provide such an analysis for the model in partial derivatives of the first order:

∂x/∂t = A_1 (∂x/∂l) + B_1 v_1;  ∂y/∂t = A_2 (∂y/∂l) + B_2 v_2,  (5.1)

with variables (x, y), spatial coordinate l, time t, controls v_1(t, l, x), v_2(t, l, y), and coefficients A_1 = A_1(t, l, x), B_1 = B_1(t, l, x), A_2 = A_2(t, l, y), B_2 = B_2(t, l, y). The initial conditions are given by the following distributions:

x|_{t=s} = ϕ_1(l) = x(s, l_o, l), or x|_{l=l_o} = ϕ_1(t);  y|_{t=s} = ϕ_2(l) = y(s, l_o, l), or y|_{l=l_o} = ϕ_2(t).  (5.1a)

The equations (5.1), (5.1a) characterize a reflection of some region of the plane (t, l) on a region of the space (x, y, ΔS), where the peculiarities and class of the surface ΔS = ΔS(x, y) are completely defined by a specific equation of the reflection. At the known solution of problem (5.1), (5.1a), this surface's equation can be defined in a parametrical form:

x = x(t, l), y = y(t, l), ΔS = ΔS[x(t, l), y(t, l)].  (5.2)

For the given system, a singular point of the second order of the considered surface is determined by the condition of decreasing the rank of the following matrix:

rank [[∂x/∂t, ∂y/∂t, ∂ΔS/∂t], [∂x/∂l, ∂y/∂l, ∂ΔS/∂l]] ≠ 2,  (5.3)

which corresponds to the identical turning to zero of all second-order minors of the above matrix. By introducing the radius-vector r = x e_1 + y e_2 + ΔS e_3 with the orths of the basic vectors {e_i}, and the derivatives r_t = ∂r/∂t, r_l = ∂r/∂l, we write (5.3) in the form

[r_t × r_l] = 0.
(5.3a)
The equation of a normal N to the surface (5.2) has the view

N = det [[e_1, e_2, e_3], [∂x/∂t, ∂y/∂t, ∂ΔS/∂t], [∂x/∂l, ∂y/∂l, ∂ΔS/∂l]],  with the orth of the normal n = N/|N|.  (5.4)

Because r_t, r_l are the tangent vectors to the coordinate lines, the fulfillment of (5.2) or (5.3) is equivalent to the condition of nonexistence of the normal (5.4) at the given singular point. Since a normal to a surface is determined independently of a method of the surface's parameterization, we come to the following conditions of the existence of the singular point:

N = M_1 e_1 + M_2 e_2 + M_3 e_3 = 0,  (5.5)

or

M_1 = det [[x_t, y_t], [x_l, y_l]] = 0, where x_t = ∂x/∂t, x_l = ∂x/∂l, y_t = ∂y/∂t, y_l = ∂y/∂l;  (5.5a)
M_2 = det [[x_t, ΔS_t], [x_l, ΔS_l]] = 0, where ΔS_t = ∂ΔS/∂t, ΔS_l = ∂ΔS/∂l;  (5.5b)
M_3 = det [[y_t, ΔS_t], [y_l, ΔS_l]] = 0.  (5.5c)

According to (5.2) we have

∂ΔS/∂t = (∂ΔS/∂x)(∂x/∂t) + (∂ΔS/∂y)(∂y/∂t);  ∂ΔS/∂l = (∂ΔS/∂x)(∂x/∂l) + (∂ΔS/∂y)(∂y/∂l).  (5.6)

That is why relations (5.5a-c) are fulfilled automatically if (5.5) holds true. Indeed, using (5.6) for (5.5b), we get

(∂x/∂t)[(∂ΔS/∂x)(∂x/∂l) + (∂ΔS/∂y)(∂y/∂l)] − (∂x/∂l)[(∂ΔS/∂x)(∂x/∂t) + (∂ΔS/∂y)(∂y/∂t)] = (∂ΔS/∂y)[(∂x/∂t)(∂y/∂l) − (∂x/∂l)(∂y/∂t)] = (∂ΔS/∂y) J,  (5.7)

where the Jacobian for this system is

J = D(x, y)/D(t, l) = det [[x_t, y_t], [x_l, y_l]] = M_1 = 0.  (5.7a)

This establishes the strong connection of the system's geometrical coordinates (l, t) with the dynamics of (x, y). Therefore, at a chosen representation (5.2), the singular points correspond also to the degeneracy of the Jacobian J, or to the fulfillment of the condition

(∂x/∂t)(∂y/∂l) = (∂x/∂l)(∂y/∂t),  (5.7b)

which for the distributed model is an analog of the equalization of the relative phase speeds (2.26).
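A numeric sketch of the rank test (5.3), (5.5a-c): for a parametrized surface x(t, l), y(t, l), ΔS(t, l), the three second-order minors can be estimated by central finite differences, and a vanishing M_1 signals the Jacobian degeneracy (5.7a,b). The sample map below is an arbitrary example chosen only for the illustration, not a model from the paper:

```python
def minors(x, y, S, t, l, h=1e-6):
    """Return the three 2x2 minors M1, M2, M3 of the matrix
    [[x_t, y_t, S_t], [x_l, y_l, S_l]] via central differences."""
    def d(f, wrt):
        if wrt == "t":
            return (f(t + h, l) - f(t - h, l)) / (2 * h)
        return (f(t, l + h) - f(t, l - h)) / (2 * h)

    xt, xl = d(x, "t"), d(x, "l")
    yt, yl = d(y, "t"), d(y, "l")
    St, Sl = d(S, "t"), d(S, "l")
    M1 = xt * yl - xl * yt          # Jacobian J = D(x,y)/D(t,l), eq. (5.7a)
    M2 = xt * Sl - xl * St
    M3 = yt * Sl - yl * St
    return M1, M2, M3

# Example map (arbitrary): the rows for x and y are proportional, so the
# Jacobian minor M1 vanishes and condition (5.7b) holds at every point.
x = lambda t, l: t * l
y = lambda t, l: t * l
S = lambda t, l: t + l
M1, M2, M3 = minors(x, y, S, 1.0, 2.0)
print(abs(M1) < 1e-6)  # True: J degenerates, i.e. x_t*y_l = x_l*y_t
```
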
Indeed, the analog of relation (5.3) for the related matrix of the concentrated system is

rank [[dx/dt, dΔS/dt], [dy/dt, dΔS/dt]] ≠ 2,

which leads to

det [[dx/dt, dΔS/dt], [dy/dt, dΔS/dt]] = 0, or (dΔS/dt)(dx/dt − dy/dt) = 0, dΔS/dt ≠ 0, dx/dt = dy/dt.  (5.7c)

The last relation, at x(τ) x(τ + o) ≠ 0, coincides with (2.26) at t = τ. Let us apply (5.7b) to the system (5.1), written in the diagonal form:

(λ_{1t})^{-1} ∂x/∂t = (λ_{1l})^{-1} ∂x/∂l + v_1(t, l);  (λ_{2t})^{-1} ∂y/∂t = (λ_{2l})^{-1} ∂y/∂l + v_2(t, l),  (5.8)

where λ_{1t}, λ_{1l}, λ_{2t}, λ_{2l} are the corresponding eigenvalues. For the diagonalized equations it is possible to build the system of regular differential equations in a symmetric form, generally

dt/λ_{it}^{-1} = −dl/λ_{il}^{-1} = dx_i/v_i, i = 1, …, n,  (5.9)

with its common integrals

Φ_i = Φ_i(φ_{1i}, φ_{2i}) = 0,  (5.10)

where the first integrals

φ_{1i} = ∫λ_{il}(l) dl + ∫λ_{it}(t) dt,  φ_{2i} = x_i + ∫λ_{il} v_i dl  (5.10a)

are the solutions of (5.9). The concrete form of the common integral is defined by (5.1a) and (5.8):

Φ = φ_2 − ∫ λ_l[f(φ)] v[f(φ)] (∂f(φ)/∂φ) dφ + ϕ[f(φ)],  (5.11)

where

φ = φ_1 − ∫λ(τ) dτ = ∫λ_l(l) dl + ∫λ_t(t) dt − ∫λ(τ) dτ, ∫λ_t(t) dt|_{t=τ} = ∫λ(τ) dτ;
λ_t = (λ_{1t}, λ_{2t}), λ_l = (λ_{1l}, λ_{2l}), x = (x_1, x_2), Φ = (Φ_1, Φ_2), φ_1 = (φ_{11}, φ_{12}), φ_2 = (φ_{21}, φ_{22}),  (5.11a)

and f is the root of the equation f = l(φ_1(τ), τ), solved for l at a fixed t = τ:

∫λ_l(l) dl = φ_1 − ∫λ(τ) dτ, φ_1 = φ_1|_{t=τ}.  (5.12)

A partial solution of (5.9), (5.11)-(5.12) acquires the form:

x = −∫λ_l(l) v(l) dl + ∫λ_l[f(φ)] v[f(φ)] (∂f(φ)/∂φ) dφ + ϕ[f(φ)].  (5.13)

Then the corresponding partial derivatives have the view:

∂x/∂t = −(∂/∂t)∫λ_{1l} v_1 dl + λ_{1t} Φ_1,  ∂y/∂t = −(∂/∂t)∫λ_{2l} v_2 dl + λ_{2t} Φ_2,  (5.14)
∂x/∂l = −λ_{1l} v_1 + λ_{1l} Φ_1,  ∂y/∂l = −λ_{2l} v_2 + λ_{2l} Φ_2.
(5.14a)

Φ_1 = [λ_{1l}(f(φ_1)) v_1(f(φ_1)) + ∂ϕ_1/∂f] (∂f/∂φ_1).  (5.15)

By imposing the condition (5.7b) on the systems (5.14)-(5.15), we come to the equation

λ_{1l}(v_1 − Φ_1) [(∂/∂t)∫λ_{2l} v_2 dl − λ_{2t} Φ_2] = λ_{2l}(v_2 − Φ_2) [(∂/∂t)∫λ_{1l} v_1 dl − λ_{1t} Φ_1],  (5.16)

which is fulfilled in the following cases:

λ_{1l} = 0, or λ_{2l} = 0,  (5.16a)

and at the different combinations of the following pairs of relations:

v_1 = Φ_1, or v_2 = Φ_2;  (5.16b)
(∂/∂t)∫λ_{1l} v_1 dl − λ_{1t} Φ_1 = 0, or (∂/∂t)∫λ_{2l} v_2 dl − λ_{2t} Φ_2 = 0;  (5.16c)

where the last two are correct if

λ_{1t} ≠ 0, λ_{1l} ≠ 0,  (5.16d)

or, in particular, at the fulfillment of any of these relations:

λ_{1t} = 0, Φ_1 = 0, ∂v_1/∂t = 0,  (5.16e)
λ_{2l} = 0, Φ_2 = 0, ∂v_2/∂t = 0.  (5.16f)

Finally, we come to the condition

[(∂/∂t)∫λ_{1l} v_1 dl − λ_{1t} Φ_1] / [λ_{1l}(v_1 − Φ_1)] = [(∂/∂t)∫λ_{2l} v_2 dl − λ_{2t} Φ_2] / [λ_{2l}(v_2 − Φ_2)] = Ι.  (5.17)

It means that for the n-dimensional PDE model (5.8) there could exist an invariant condition (5.17) on the solutions of (5.14), (5.15), which does not depend on the indexes in (5.17), or Ι could take a constant value for some pair of the indexes. If we omit the trivial conditions (5.16a)-(5.16f) and the invariant (5.17), then (5.16) leads to the following relations:

∂x/∂t = ∂x/∂l = 0 and ∂y/∂l = ∂y/∂t = 0, or ∂x/∂l = ∂y/∂t = 0 and (5.7a).

The conditions (5.16) define the different equations of the singular points, or the singular trajectories, created by any of the separated processes x(t, l) or y(t, l), while (5.17) defines the singular trajectory created by the processes' interactions. In such singularities, the rank of the extended matrix (5.3) decreases, which declines the number of independent equations in the system; and a normal to the surface ΔS at a singular point does not exist. Because of the connection of eqs. (2.16) and (5.7b), these conditions of singularity also apply to the concentrated models considered in secs. 2, 3.
Therefore, the singular points defined by the conditions (5.16) and (5.17) do exist, and they are not single. The geometrical locations of the singular points could be the isolated states of the system (5.1), as well as the singular trajectories. The invariant Ι corresponds to the equalization of the local subsystems' relative speeds (at the phase trajectories) at transferring via the singular curve, being an analog of the condition (2.16) for the concentrated model. At these points, relation (5.16b) takes the form

Φ_1 = [λ_{1l}(f(φ_1)) v_1(f(φ_1)) + ∂ϕ_1/∂f] (∂f/∂φ_1) = v_1,  (5.18)

and it is fulfilled along the singular trajectory, in particular at

λ_{1l}, λ_{1t} = const, φ_1 = λ_{1l} l + λ_{1t}(t − τ), f = l(φ_1) = λ_{1l}^{-1} φ_1 (at t = τ), ∂f/∂φ_1 = λ_{1l}^{-1},  (5.18a)

which is satisfied at

∂ϕ_1/∂l|_{l=f} = ∂ϕ_1/∂f = λ_{1l}(v_1(t, l) − v_1(t, f)).  (5.18b)

This condition binds the automatic fulfillment of (5.16b) (at the macrotrajectories' singular points) with the initial distribution (5.1a) (depending on the model's microlevel). That is why relations (5.16b-f), (5.17) might be considered as limitations imposed on the class of the model's random processes, for example, applicable for Markov fields. At a given random field that does not satisfy these limitations, the conditions (5.16b,c) could be fulfilled by choosing the corresponding controls. At λ_{1l} = var, in particular at v_1 = v_1(λ_{1l}), a possibility of the Jacobian's degeneracy, as it follows from (5.18), is also covered by relations (5.18b). From that it follows that the model's singular phenomena could be implemented by the controls. Therefore, the singular points and trajectories carry additional information about the connection of the micro- and macroprocesses, the model's geometry, dynamics, and control.
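The first integrals (5.10a) of the symmetric system (5.9) can be checked numerically; in the sketch below the coefficients λ_t, λ_l and the control v are held constant, an assumption made only for this illustration:

```python
# Characteristics of the symmetric system (5.9): lambda_t*dt = -lambda_l*dl,
# dx = v*lambda_t*dt. For constant coefficients the first integrals (5.10a),
# phi1 = lambda_l*l + lambda_t*t and phi2 = x + lambda_l*v*l, stay constant
# along a characteristic curve.
lam_t, lam_l, v = 2.0, 1.5, 0.7      # arbitrary illustration values
t, l, x = 0.0, 1.0, 0.3
dt = 1e-4
phi1_0 = lam_l * l + lam_t * t
phi2_0 = x + lam_l * v * l
for _ in range(10000):               # integrate along the characteristic
    dl = -(lam_t / lam_l) * dt       # from lambda_t*dt + lambda_l*dl = 0
    x += v * lam_t * dt              # dx/dt = v*lambda_t along the curve
    l += dl
    t += dt
print(abs(lam_l * l + lam_t * t - phi1_0) < 1e-6)   # True: phi1 conserved
print(abs(x + lam_l * v * l - phi2_0) < 1e-6)       # True: phi2 conserved
```
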
Because relations (2.16), (5.7a) are the conditions connecting the extremal's segments at the o-window, the singularities are related also to the model's cooperative phenomena. The states' consolidation at the singular points is possible. The detailed analysis of the singular points is provided in [4] for a two-dimensional concentrated model, where it is shown that before the consolidation the model has a saddle singular point, and after the consolidation its singular point becomes an attractor. More generally, the equalization of the subsystems' eigenfrequencies (connected to the eigenvalues) (in (2.16), (5.7)) is an indicator of arising oscillations, which, at the superposition of the diffusion at the o-window, are faded into an attractor. Actually, applying just a regular control (as a first part of the needle control) at the model's o-window transfers the dynamic trajectories at the macrolevel to the random trajectories at the microlevel, while both of them are unstable. Applying a second regular control (being a second part of the needle control) brings stability to both of them. Generally, the model undergoes a global bifurcation at the o-window between the segments, under the control actions, by transferring from kinetics to a diffusion and then from the diffusion back to kinetics. Indeed, at the extremal's ending moment we have

−a^u(τ − o) = −b(τ − o) r^{-1}(τ − o) x(τ − o),  (5.19)

where b(τ − o) r^{-1}(τ − o) = −D_x(τ − o) is the diffusion component of the stochastic equation, which is compensated by the kinetic part, delivered with the regular control. The needle control, applied between the segments at the moments (τ − o, τ + o), brings the increment

δa^u = −a^u(τ − o) + a^u(τ + o) = −λ(τ − o) x(τ − o) + λ(τ + o) x(τ + o), λ(τ − o) < 0, sign λ(τ + o) = −sign λ(τ − o),  (5.20)

which at x(τ + o) ≈ x(τ − o) ≈ x(τ), |λ(τ + o)| ≈ |λ(τ − o)| = |λ(τ)| determines δa^u = −2λ(τ) x(τ).
Thus the needle control decreases the initial diffusion part D_x(τ − o) = −λ(τ − o) according to the relation

b(τ + o) r^{-1}(τ + o) = D_x(τ + o) ≈ −2λ(τ) ≈ −D_x(τ − o),

transferring the diffusion into kinetics. This means that applying the needle controls to a sequence of the extremal segments increases the influence of the kinetics on the model, decreasing the diffusion components.

6. The natural variation problem, singular trajectories, and the field's invariants for the IPF

The information path functional (IPF) of the distributed model in the form

ΔS = ∫∫_G L dl dt,  (6.1)  L = X_t (∂x/∂t) + X_l (∂x/∂l),  (6.1a)

is defined on the controlled processes x = {x_i}, which are determined by the solutions of the Euler-Ostrogradsky equations for this functional and the natural border conditions, connected with the initial distributions (5.1a). Using eqs. (2.33a), we use in the Lagrangian (6.1a) the expression

X = 1/2 h x, x = x(s, l), h = r^{-1}, r = E[x x^T].  (6.1b)

The problem consists of the synthesis of a control law v = v(t, l, x_t, x_l) that carries out the fulfillment of the extremal principle for the functional ΔS at the natural border conditions. This problem, called the natural variation problem, we solve for the equations having the structure (5.1) with the Lagrangian L in the form (6.1a). This problem is aimed at its formal connection to the appearance of a singular curve (sec. 5).
Writing the functional's variation at a variant control's definition domain G, according to [23], we have

δΔS = δΔS_1 + δΔS_2 = ∫∫_G Σ_{i=1}^{2} [∂L/∂x_i − (∂/∂t)(∂L/∂x_{it}) − (∂/∂l)(∂L/∂x_{il})] δx_i dl dt + ∫∫_G Σ_{i=1}^{2} {(∂/∂t)[(∂L/∂x_{it}) δx_i + L δt] + (∂/∂l)[(∂L/∂x_{il}) δx_i + L δl]} dl dt = 0;  δl = Σ_{j=1}^{2} (∂l/∂l_j) δl_j, l_1 = l, l_2 = t.  (6.2)

The condition δΔS_1 = 0 is fulfilled by the execution of the Euler-Ostrogradsky equation [18]

∂L/∂x_i − (∂/∂t)(∂L/∂x_{it}) − (∂/∂l)(∂L/∂x_{il}) = 0,  (6.3)

which for (5.1), (6.1a,b), (5.2) acquires the forms

(∂h_{ii}/∂x_i) x_i (∂x_i/∂t + ∂x_i/∂l) + h_{ii} (∂x_i/∂t + ∂x_i/∂l) − (∂h_{ii}/∂t + ∂h_{ii}/∂l) x_i = 0;
h_{ii} (∂x_i/∂t) + h_{ii} (∂x_i/∂l) = (∂h_{ii}/∂t + ∂h_{ii}/∂l) x_i, ∀ x_i ≠ 0, ∀ (l, t) ∈ G.  (6.4)

We get the equation of extremals

∂h_{ii}/∂t + ∂h_{ii}/∂l = 0;  ∂r_{ii}/∂t + ∂r_{ii}/∂l = 0.  (6.5)

At the solutions of this equation holds true the relation

E_i[L_i] = E_i[h_{ii}(x_{it} + x_{il}) x_i] ≡ 0.  (6.6)

The condition ∂²L/∂x_i² ≠ 0 at an extremal determines the regular, or non-singular, extremal's points. For the form (6.1a), linear with regard to (x, x_t, x_l), it is fulfilled ∂²L/∂x_i² = 0 ∀ (l, t) ∈ G, and the obtained extremals are nonregular. At these extremals, the differential equations (6.3) turn into the parametrical equations for the functions h_{ii} (6.4, 6.5, 6.6), determined via x(t, l) = {x_i(t, l)} in (6.1a). Applying the differential equation with the control processes {x_i = x_i(t, l)}, the piece-wise controls {v_i}, and random initial conditions, let us find the control in (5.1) for which the solutions of equations (5.1) satisfy (6.5). Using (6.5) as the initial condition for the control synthesis, we get

M_i[L_i] = 0 at L_i = h_{ii} x_i (λ_{it}(x_i + u_i^t) + λ_{il}(x_i + u_i^l)), and u_i^t = v_i^t = −[1 + λ_{il} λ_{it}^{-1}] r_{iit} E[x_i x_i]^{-1}.  (6.7)

In the same way we find

u_i^l = v_i^l = −[1 + λ_{it} λ_{il}^{-1}] r_{iil} E[x_i x_i]^{-1}.
(6.8)
From these relations also follows the representation of the control function v_i = v_i(t, l), i = 1, 2, which corresponds to the control's form in the initial equations (5.1). Let us specialize the above control, acting both within the controls' definition domain G and at its border ∂G, for example, a square. At these controls in G there might exist a geometrical set of points where the partial derivatives of eq. (5.1) get discontinuities of the first kind. For simplicity, let us consider a monotonous smooth curve γ_5 (fig. 2) as such a set. Generally, such a curve does not cross the above border, and we can prolong this curve by two auxiliary curves γ_2, γ_4 up to ∂G in such a way that the obtained γ_2 ∪ γ_4 ∪ γ_5 will be a monotonous curve (leaving the method of continuation arbitrary). As a result of these formations, the initial twice-bound domain splits into two single subdomains G_1, G_2 with the borders ∂G_1, ∂G_2 accordingly (fig. 2). Because the curve γ_5 is a priori unknown, the above subdomains are variable. The following relations formalize the considered domain's and subdomains' descriptions:

G̅ = G ∪ ∂G ∪ γ_5;  G = G_1 ∪ G_2 ∪ γ_2 ∪ γ_4;  ∂G = γ_1 ∪ γ_3 ∪ γ_6 ∪ γ_7;
G̅_1 = ∂G_1 ∪ G_1, ∂G_1 = γ_1 ∪ γ_2 ∪ γ_5 ∪ γ_4 ∪ γ_6;  G̅_2 = ∂G_2 ∪ G_2;  (6.9)

γ_1: {t = s = const, l_o ≤ l ≤ l_k};
γ_3 = γ_31 ∪ γ_32, γ_31: {l = l_o = const, s ≤ t ≤ t_3}, where at F_2(l, t_3) = 0: l = l_o (the equations F_m(l, t) = 0, m = 2, 4, describe the curves γ_2, γ_4 considered below); γ_32: {l = l_o = const, t_3 ≤ t ≤ t_k};
γ_6 = γ_61 ∪ γ_62, γ_61: {l = l_k = const, s ≤ t ≤ t_6}, where at F_4(l, t_6) = 0: l = l_k; γ_62: {l = l_k = const, t_6 ≤ t ≤ t_k};  (6.10a)
γ_7: {t = t_k = const, l_o ≤ l ≤ l_k}.
The border domain has the form

Γ = ∂G_1 ∪ ∂G_2 = ∂G ∪ Γ_int, Γ_int = (γ_2^+ ∪ γ_4^+ ∪ γ_5^+) ∪ (γ_2^− ∪ γ_4^− ∪ γ_5^−),  (6.10b)

where Γ_int is the internal part of the domain Γ, and + and − mean the particular curve's movement along the above domains accordingly. Let us implement the border condition δΔS_2 = 0 using Green's formula [23] and the above relations:

∫∫_G (∂P_1/∂l_1 + ∂P_2/∂l_2) dl_1 dl_2 = ∮_Γ (−P_2 dl_1 + P_1 dl_2),  (6.11)

P_1 = Σ_{i=1}^{2} (∂L/∂x_{il_1}) [δx_i − Σ_{j=1}^{2} (∂x_i/∂l_j) δl_j] + L δl_1;  P_2 = Σ_{i=1}^{2} (∂L/∂x_{il_2}) [δx_i − Σ_{j=1}^{2} (∂x_i/∂l_j) δl_j] + L δl_2.

Applying relations (6.11) to the functional (6.1), (6.1a), we come to

δΔS_2 = ∮_Γ (−P_2 dl_1 + P_1 dl_2).

Because of the arbitrariness of δx_i, δl_i we get

∮_{∂G} (−P_2 dl_1 + P_1 dl_2) = 0,  (6.12a)
∮_{Γ_int^+ ∪ Γ_int^−} (−P_2 dl_1 + P_1 dl_2) = 0.  (6.12b)

The first of them, (6.12a), leads to the natural border conditions at the external border of G, for example, in the following forms:

(∂L/∂x_{it})|_{t=s} = 0 ⇒ h_{ii}(s, l) = 0, x_i ≠ 0;  (∂L/∂x_{it})|_{t=t_k} = 0 ⇒ h_{ii}(t_k, l) = 0, x_i ≠ 0;
(∂L/∂x_{il})|_{l=l_o} = 0 ⇒ h_{ii}(l_o, t) = 0, x_i ≠ 0;  (∂L/∂x_{il})|_{l=l_k} = 0 ⇒ h_{ii}(l_k, t) = 0, x_i ≠ 0.  (6.13)

The second relation, (6.12b), leads to an analogy of the Erdmann-Weierstrass conditions [24] at the curve γ_5. Indeed, because γ_2, γ_4 are arbitrary (virtual) curves, at crossing them the partial derivatives are continuous, and the integral, taken along the opposite directions, is equal to zero. From that, for (6.12b) it is fulfilled

∮_{Γ_int^+ ∪ Γ_int^−} (−P_2 dl_1 + P_1 dl_2) = ∫_{γ_5^+ ∪ γ_5^−} (−P_2 dl_1 + P_1 dl_2).  (6.14)

Suppose the curve γ_5 can be defined by the equation l = l*(t).
Then integral (6.14), written in a single (arbitrary) direction, acquires the forms:

∫_{γ_5} (−P_2 dl_1 + P_1 dl_2) = ∫_{τ_1}^{τ_2} (−P_2 + P_1 (dl*/dt)) dt = 0.  (6.15)

Expanding P_1, P_2 by (6.11) with L from (6.1a), writing the integral in the opposite directions, and using the arbitrariness of δx_i, δl_i, we get the system of four equations along γ_5:

h_{ii}^− x_{il}^− (dl*/dt) + h_{ii}^− x_{it}^− = h_{ii}^+ x_{il}^+ (dl*/dt) + h_{ii}^+ x_{it}^+;  h_{ii}^− = h_{ii}^+, dl*/dt = 1, i = 1, 2;  (6.15a)

L^− − h_{11}^− x^− (x_l^− (dl*/dt) + x_t^−)(∂x^−/∂l) − h_{22}^− y^− (y_l^− (dl*/dt) + y_t^−)(∂y^−/∂l) = L^+ − h_{11}^+ x^+ (x_l^+ (dl*/dt) + x_t^+)(∂x^+/∂l) − h_{22}^+ y^+ (y_l^+ (dl*/dt) + y_t^+)(∂y^+/∂l),  (6.15b)

where the indexes + and − indicate the functions' values from the domains G_1 and G_2 accordingly. Substituting (6.15a) into (6.15b) we come to the system of equalities, which determine a jump of the Lagrangian on γ_5:

L^− − L^+ = h_{11}^− x^− (dl*/dt)(∂x^−/∂l − ∂x^+/∂l) + h_{22}^− y^− (dl*/dt)(∂y^−/∂l − ∂y^+/∂l),
(L^− − L^+)(dl*/dt) = h_{11}^− x^− (∂x^−/∂t − ∂x^+/∂t) + h_{22}^− y^− (∂y^−/∂t − ∂y^+/∂t).  (6.16)

The obtained relations are equivalent along the curves to

x_t^+/x_t^− = y_t^+/y_t^− = dl*/dt = x_l^+/x_l^− = y_l^+/y_l^−;  (6.16a)
D(x, y)/D(t, l)|^+ = D(x, y)/D(t, l)|^− = 0,  (6.16b)

which stitch the solutions of (5.1) at the singular surface ΔS (5.2). According to (6.16a), the controls at the singular curve become bound:

u_1^t / u_2^t = [1 + λ_{1t} λ_{1l}^{-1}] r_{11} E[x_2 x_2] / ([1 + λ_{2t} λ_{2l}^{-1}] r_{22} E[x_1 x_1]),
v_1 / v_2 = (dl*/dt) λ_{2t} λ_{2l} (λ_{1t} + λ_{1l}) E[x_2 x_2] / (λ_{1t} λ_{1l} (λ_{2t} + λ_{2l}) E[x_1 x_1]).  (6.17)

Let us assume that along the singular curve the conditional probability density is defined by a δ-distribution.
Then, according to the features of the δ-function, we get the equivalent relations x_{it}^± = −(dl*/dt) x_{il}^± and r_{11}^{t±} = (dl*/dt) r_{22}^{l±}, and the relation for the Lagrangian's jump we write in the form

ΔL = (1 + dl*/dt) x_l^− h_{11} (x^− − x^+) + (1 + dl*/dt) y_l^− h_{22} (y^− − y^+)
  = −(1 + x_t^−/x_l^−) x_l^− h_{11} (x^− − x^+) − (1 + y_t^−/y_l^−) y_l^− h_{22} (y^− − y^+),  (6.17a)

at h_{ii}^− = h_{ii}^+, i = 1, 2. Because on γ_5 holds true x_i = M[x_i], x_{it}^− = M[x_{it}^−], x_{il} = M[x_{il}], the following equality is correct:

ΔL = [1 + r_{11t}^− (r_{11l}^−)^{-1}](r_{11l}^− − r_{11l}^+) + [1 + r_{22t}^− (r_{22l}^−)^{-1}](r_{22l}^− − r_{22l}^+).  (6.18)

According to (6.16a) and (6.18) we get

r_{11t}^± r_{22l}^± = r_{11l}^± r_{22t}^±,  (6.18a)
ΔL = [1 + r_{11t}^± (r_{11l}^±)^{-1}] [(r_{11l}^− − r_{11l}^+) + (r_{22l}^− − r_{22l}^+)].  (6.18b)

Now we can determine the functional's value at the extremals of equation (5.1):

ΔS = ∫∫_G L dl dt = ∫∫_G (x_t X_t + x_l X_l + y_t Y_t + y_l Y_l) dl dt − ∮_{∂G ∪ γ_5} [(x h_{11} x + y h_{22} y) dl − (x h_{11} x + y h_{22} y) dt] = ΔS_int + ΔS_Γ,  (6.19)

where X_t, X_l, Y_t, Y_l are the corresponding covariant functions, ΔS_int is the internal functional's increment, ΔS_Γ is the border's increment, and the Lagrangian is represented by the sum

L = L_1 + L_2 = x_t h_{11} x + x_l h_{11} x + y_t h_{22} y + y_l h_{22} y = x_t X_t + x_l X_l + y_t Y_t + y_l Y_l
  = (∂/∂t)(x X_t + y Y_t) + (∂/∂l)(x X_l + y Y_l) − x (∂X_t/∂t + ∂X_l/∂l) − y (∂Y_t/∂t + ∂Y_l/∂l).
According to (6.6) we have

ΔS_int = ∫∫_G Σ_{i=1}^{2} [x_i h_{ii}(x_{it} + x_{il})] dl dt and E[ΔS_int] = ∫∫_G Σ_{i=1}^{2} {r_{ii}^{-1}(r_{iit} + r_{iil})} dl dt ≡ 0,  (6.19a)

where E = {E_1, E_2} is a symbol of mathematical expectation, acting additively on the Lagrangian;

ΔS_Γ = M[ΔŜ_Γ] = −2 ∫∫_{∂G} dl dt + 2 ∫∫_{γ_5} dl dt = −2 ∫_{τ_1}^{τ_2} (dl*/dτ − 1) dτ = −2 [l*(τ_2) − l*(τ_1) − (τ_2 − τ_1)].  (6.19b)

At dl*/dt = 1 we get ΔS_Γ = 0, which brings a total entropy increment in the optimal process equal to zero; to dl*/dt = 1 corresponds, in particular, the fulfillment of x_t = x_l, y_t = y_l, i.e., the appearance (according to (5.7b,c)) of a singular curve by the equalization of the above phase speeds. At dl*/dt < 1 the entropy's increment is positive; at dl*/dt > 1 the increment is negative. Let us build, at an ε-locality of the singular curve, a domain G^ε = G_1^ε ∪ G_2^ε:

G^ε: {0 < |l − l*(t)| < ε, t_3 − Δ(o) ≤ t ≤ t_6 + Δ(o)}.

Then the relations (6.7), (6.8) hold true in G_1 and G_2, specifically in the forms:

G_1: u_{i1}^t = −[1 + λ_{il} λ_{it}^{-1}] r_{ii1}^t E[x_i x_i]^{-1}, u_{i1}^l = −[1 + λ_{it} λ_{il}^{-1}] r_{ii1}^l E[x_i x_i]^{-1};  (6.20a)
G_2: u_{i2}^t = −[1 + λ_{il} λ_{it}^{-1}] r_{ii2}^t E[x_i x_i]^{-1}, u_{i2}^l = −[1 + λ_{it} λ_{il}^{-1}] r_{ii2}^l E[x_i x_i]^{-1},  (6.20b)

where the lower indexes 1, 2 at r_{iit}, r_{iil}, u_i^t, u_i^l indicate that these functions belong to G_1 and G_2 accordingly. Using these relations, we find the controls' jumps:

Δu_i^t = u_{i2}^t − u_{i1}^t = v_{i2}^t − v_{i1}^t = −[1 + λ_{il} λ_{it}^{-1}] (r_{ii2}^t − r_{ii1}^t) E[x_i x_i]^{-1},  (6.21a)
Δu_i^l = u_{i2}^l − u_{i1}^l = v_{i2}^l − v_{i1}^l = −[1 + λ_{it} λ_{il}^{-1}] (r_{ii2}^l − r_{ii1}^l) E[x_i x_i]^{-1}.  (6.21b)

Therefore, in a general case, there exist the jumps for both the controls and the Lagrangian (according to (6.18b)) at crossing the singular curve. These jumps can be found if the derivatives of the corresponding correlation functions are known.
The conditions, in particular for the concentrated systems (at r_{iit}^− = r_{iit}^+, r_{iil}^− = r_{iil}^+, r_{iil} = 0), acquire the forms

r_{iit}^− = M[ẋ(τ) x(τ + o)^T], r_{iit}^+ = M[ẋ(τ_1) x(τ_1 + o)^T], τ_1 = τ + o;  (6.22)
r_{iit}^− = r_{iit}^+: M[ẋ_i(τ) x_i(τ + o)] + M[ẋ_i(τ_1) x_i(τ_1 + o)] = 0, x(τ) = x^−, x(τ + o) = x^+, x^− = x^+ = x.  (6.22a)

From that we have x(τ) = x(τ + o) and ẋ(τ) = −ẋ(τ + o). Thus, at crossing the singular curve, or a singular point, ẋ(τ) changes its sign. If ẋ(τ) = λ(τ)(x(τ) + v(τ)), ẋ(τ + o) = λ(τ)(x(τ) + v_1(τ)), then the control, at crossing the singularity, is found from the relation λ(τ)(x(τ) + v_1(τ)) = −λ(τ)(x(τ) + v(τ)), or v(τ) = −2x(τ) and v_1(τ) = −2^{-1} x(τ), which determines the needle control (sec. 2): δv(τ) = v(τ) − v_1(τ). The control strategy that solves the natural border problem consists of:
- the movement along an extremal (6.5) by applying the controls (6.7), (6.8), being the functions of the initial distribution (5.1a), up to the moment of time when the conditions (6.16a) are fulfilled and the controls become bound by (6.17);
- the movement along a singular curve (at the control's jump) until the condition (6.16a) is violated;
- the movement's continuation along the above extremals with the controls (6.7), (6.8).

The following proposition summarizes the results.
Proposition. The natural border problem's solutions for the path functional with the model (5.1), (5.1a) are both the extremals (6.5) and the singular curve of this equation, for which (6.16a) holds true and the controls are bound according to (6.20a,b). Along the singular curve (and/or at the singular points) the initial model's dimension shortens and the states' cooperation takes place. •
All these results follow from the solution of the variation problem for the information path functional (sec. 2). According to the initial VP, the IPF's extremals hold the principle of stationary action.
This allows us to find the invariant conditions, as the model field's functions, being the analogies of the information form of conservation laws. Following the Noether theorem [24] and the results of [8], we come to

Σ_{m=1}^{4} (∂/∂l_m) [ −Σ_{i=1}^{n} (∂L/∂(∂x_i/∂l_m)) y_i + L y_m ] = 0, y_m = ∂l_m/∂t, l_4 = t.  (6.23)

Let us have a four-dimensional volume Ω limited by a surface Σ^4:

Σ^4 = Σ_1 ∪ Σ_2 ∪ Σ̃;  Σ̃ = Σ^3 ∩ (l_4 > a) ∩ (l_4 < b);
Σ_1 = (F(l_1, l_2, l_3) ≤ 0) ∩ (l_4 = a);  Σ_2 = (F(l_1, l_2, l_3) ≤ 0) ∩ (l_4 = b),  (6.24)

where a, b are the auxiliary fixed moments of time; Σ^3 is a non-self-crossing surface defined by the equation F(l_1, l_2, l_3) = 0; Σ^4 is a four-dimensional cylindrical surface limited by the two parallel planes l_4 = a, l_4 = b, where the cylinder's vertical is parallel to the time axis and the basis is the geometrical space of points Σ^3. After integrating (6.23) over Ω, applying the Ostrogradsky-Gauss theorem [23], we get

∫_Ω div Q dv^4 = ∫_{Σ^4} (Q, n^+) dσ^4 = 0,  (6.25)

where n^+ is a positively oriented external normal to the surface Σ^4, and dσ^4 is an infinitesimal element of Σ^4.
Integral (6.25) is represented by the sum of the following integrals, taken over the two cylinder's bottom parts Σ_1, Σ_2 and its sidelong part Σ̃ of Σ^4:

∫_{Σ^4} (Q, n^+) dσ^4 = ∫_{Σ_1} (Q, n_1^−) dσ_1 + ∫_{Σ_2} (Q, n_1^+) dσ_2 + ∫_{Σ̃} (Q, n_2^+) dσ̃ = ∫_{G^3} [(Q, n_1^+)|_{l_4=b} − (Q, n_1^+)|_{l_4=a}] dv^3 + ∫_{Σ̃} (Q, n_2^+) dσ̃,  (6.26)

where n_1^+ = (0, 0, 0, 1) is the positively oriented external normal to the bottom part Σ_2 of the surface Σ^4 (l_4 = t); n_1^− = (0, 0, 0, −1) = −n_1^+ is the negative (internal) normal to the bottom part Σ_1 of Σ^4; n_2^+ = (n_{21}^+, n_{22}^+, n_{23}^+, 0) is the positive external normal to Σ̃; G^3 = (F(l_1, l_2, l_3) ≤ 0) ∩ (l_4 = 0) is a part of the space, being a projection of Σ_1, Σ_2 on the hyperplane l_4 = 0; dv^3 is an infinitesimal element of the volume G^3; dσ̃, dσ_1, dσ_2 are the infinitesimal elements of the surfaces Σ̃, Σ_1, Σ_2 accordingly. Let us implement (6.26) at the usual physical assumptions, supposing that both the function F(l_1, l_2, l_3) = l_1^2 + l_2^2 + l_3^2 − R_o^2 and the field are decreasing fast at infinity. This means that at R_o → ∞ and dσ̃ ~ R_o^2, the integral over Σ̃ in (6.26) can be excluded. Then (6.26), according to (6.25), acquires the form

∫ (Q, n_1^+)|_{l_4=b} dv^3 = ∫ (Q, n_1^+)|_{l_4=a} dv^3,  (6.27)

where the integral is taken over an infinite domain. Because a and b are auxiliary (arbitrary), the above equality means the preservation in time of the values

∫ (Q, n_1^+) dv^3 = ∫ [ −Σ_{i=1}^{n} (∂L/∂(∂x_i/∂l_m)) y_i + L y_m ]_{m=4} dv^3, y_m = ∂l_m/∂t, l_4 = t.
Applying the Lagrange-Hamilton equations, we get the invariant

$$\int_{G^3}\Big[\sum_{i=1}^{n}\frac{\partial L}{\partial(\partial x_i/\partial t)}\frac{\partial x_i}{\partial t}+L\Big]\,dv_3=\int_{G^3}\Big(\sum_{i=1}^{n}X_i\frac{\partial x_i}{\partial t}+L\Big)\,dv_3=\int_{G^3}(2H+L)\,dv_3=\mathrm{inv},\qquad dv_3=dl_1\,dl_2\,dl_3, \quad (6.29)$$

which at $H-L=\dot x^{T}X-L=\tfrac{1}{2}\dot x^{T}X$ (sec.2) leads to the invariant

$$\int_{G^3}H\,dv_3=\mathrm{inv},\qquad H=\frac{1}{2}\sum_{i=1}^{n}X_i\frac{\partial x_i}{\partial t}, \quad (6.30)$$

preserving the volume's Hamiltonian of the information path functional.

7. The connection between the entropy's (information) path functional (IPF) and the Kolmogorov's (K) entropy of a dynamic system, between the Kolmogorov's and the macrodynamic complexities, and the relations to physics

The K-entropy is an entropy per unit of time, or the entropy production rate, measured by a sum of the Lyapunov characteristic exponents (LCE) [5, 25-27]. The LCE describes a separation between the process' trajectories, created by the process' dynamic peculiarities. In the IPF model, the separation is generated by the inner controls' actions, which carry out the transitions between the process' dimensions, physically associated with the phase transformations, singularities, chaotic movement, and related physical phenomena [28-31].

Let us find the LCE for the IPF model. At the DP, each of these controls switches the process' extremal segment (with an eigenvalue $\lambda_i$) from a local movement

$$x_{it}=x_{io}\exp(-\lambda_i t),$$

corresponding to a local process' stability, to the local movement

$$x_{i\tau}=x_{i\tau o}\exp(\lambda_{i\tau}t),\qquad t\in(\tau-o,\tau), \quad (7.1)$$

corresponding to a local process' instability, which brings a separation between these two process movements. Here $x_{io}$ is the initial condition at the beginning of the $i$-segment with the macroprocess' eigenvalue $\lambda_i$; $x_{i\tau o}$ is the starting state at the moment $t=\tau-o$ (near the segment's end); $\lambda_{i\tau}$ is the eigenvalue at $\tau-o$ approaching $\tau$ (which depends on $\mathrm{grad}\,X(\tau-o)$, sec.2), which potentially initiates these dynamics, approximating the between-segments stochastics at $t\to\tau$.
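The quadratic-form identity behind (6.29)-(6.30) can be checked numerically. The sketch below is illustrative, not the paper's model: for a quadratic Lagrangian $L=\tfrac{1}{2}\dot x^{T}A\dot x$ with conjugate vector $X=\partial L/\partial\dot x=A\dot x$, the Legendre transform gives $H=\dot x^{T}X-L=\tfrac{1}{2}\sum_i X_i\dot x_i$, so the integrand $\sum_i X_i\dot x_i+L$ equals $2H+L$. The matrix $A$ and the vector $\dot x$ are arbitrary assumed data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)      # assumed positive-definite kinetic matrix
xdot = rng.normal(size=n)

L = 0.5 * xdot @ A @ xdot        # quadratic Lagrangian
X = A @ xdot                     # conjugate variable X = dL/d(xdot)
H = xdot @ X - L                 # Hamiltonian via Legendre transform

ok_H = np.isclose(H, 0.5 * xdot @ X)         # H = 1/2 * sum_i X_i xdot_i
ok_29 = np.isclose(xdot @ X + L, 2 * H + L)  # integrand form in (6.29)
```

Both identities hold exactly for any symmetric $A$, which is why the invariant of (6.29) collapses to the Hamiltonian invariant (6.30) in the quadratic case.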
The LCE is measured by the mean rate of exponential divergence (or convergence) of two neighboring trajectories: one of them describes the initially undisturbed movement $x_{it}$, the other one is the disturbed movement $x_{i\tau}$ (for this model, at the DP). A local LCE

$$\sigma_{i\tau}=\lim_{t\to\tau}\frac{1}{\tau}\ln\frac{x_{i\tau}(t)}{x_{i\tau o}}=\lambda_{i\tau} \quad (7.1a)$$

expresses the exponential divergence of $x_{i\tau}$ from the movement $x_{it}$ along the extremal segment ($x_{i\tau}$ starts at the moment $t=\tau-o$ by the end of the movement $x_{it}|_{t=\tau-o}\to x_{i\tau o}$, which precedes the beginning of the disturbed movement $x_{i\tau}$).

At $\lambda_{i\tau}>0$, the process is unstable and chaotic: the nearby points, no matter how close they are, will diverge to an arbitrary separation; these points are unstable. At $\lambda_{i\tau}<0$, the process exhibits asymptotic stability in a dissipative or non-conservative system. A zero LCE, $\lambda_{i\tau}=0$, indicates that the system is in a steady state. A physical system with this exponent is conservative; such a system exhibits Lyapunov stability. Although such a system is deterministic, there is no process order in this case [30, 31].

Exponent (7.1) approximates the dynamic divergence of the extremal segments at a window between the segments, and the LCE (7.1a) characterizes the information dynamic peculiarities arising at the DP localities. In particular, under the optimal control applied to $\lambda_{i\tau}$ at the nearest moment $\delta\tau$ following $\tau$, the eigenvalue changes according to the equation

$$\lambda_{i\tau}(\delta\tau)=-\lambda_{i\tau}\exp(\lambda_{i\tau}\delta\tau)\,[2-\exp(\lambda_{i\tau}\delta\tau)]^{-1},$$

which at $\delta\tau\to 0$ reaches the limit $\lim_{\delta\tau\to 0}\lambda_{i\tau}(\delta\tau)=-\lambda_{i\tau}$. Such a discrete (jump-wise) LCE renovation is a phenomenon of a controllable process, specifically at the process' coupling, and could serve as an LCE indicator of this phenomenon.

The K entropy is the nonlinear-dynamics counterpart of the physical Boltzmann-Gibbs entropy [32], which is directly connected to the Shannon information entropy [21].
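A hedged numerical sketch of (7.1)-(7.1a): the finite-time exponent recovered from the log of the divergence ratio reproduces the segment eigenvalue, and the jump-wise renovation formula (as read from the reconstructed text) tends to $-\lambda_{i\tau}$ as $\delta\tau\to 0$. The eigenvalue 0.7 and the time grid are illustrative assumptions.

```python
import numpy as np

lam = 0.7                                  # assumed local eigenvalue
x0 = 1.0                                   # assumed starting state x_{i tau o}
t = np.linspace(1e-3, 2.0, 1000)

x_dist = x0 * np.exp(lam * t)              # disturbed movement, as in (7.1)
sigma = np.log(x_dist / x0) / t            # finite-time exponent estimate (7.1a)
ok_lce = np.allclose(sigma, lam)

def lam_renovated(lam, dtau):
    # Eigenvalue renovation under the optimal control, per the text's formula.
    return -lam * np.exp(lam * dtau) / (2.0 - np.exp(lam * dtau))

limit_val = lam_renovated(lam, 1e-9)       # should approach -lam
```

For a pure exponential the finite-time estimate is exact at every $t$; for real trajectories one would average the log-divergence over many segments.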
The IPF model's DPs are the crucial points of changes in a dynamical evolution with the fixed entropy path functional's production rates (PFR), given by the sum of positive LCE. According to relation (2.36), the PFR, being equal to the sum of the operator's positive eigenvalues:

$$E\Big[-\frac{\partial S}{\partial t}(\tau)\Big]=E[H(\tau)]=Tr[A(\tau)]=\sum_{i=1}^{n}\lambda_{i\tau}>0, \quad (7.1b)$$

coincides with the K entropy at these crucial points. This additivity of the discrete linear rate (at the DPs) for both the K entropy and the PFR corresponds to the thermodynamic extensivity of the Boltzmann-Gibbs entropy [33], which is important in the connection between statistical mechanics and chaotic dynamics. The extensivity of entropy is an essential requirement with which a thermodynamics can be created [33-35]; this may be the case even if the system's energy is nonextensive [34]. Significantly, the linear growth of the K entropy and the thermodynamic extensivity of the Boltzmann-Gibbs entropy hold only in the long-time limit and the thermodynamic limit, respectively. As is known [33], for a physical quantity to be temporally extensive, it should grow linearly in time; thus, for example, the K entropy possesses temporal extensivity for chaotic dynamical systems.

The IPF model holds the open system's qualities, such as nonlinearity and irreversibility (at the DP), and stationarity and reversibility within each extremal segment, corresponding to a system's conservativity. These phenomena allow applying the IPF model to a wide class of real systems, which show the above alternative behaviors at different stages of dynamic evolution [36,37]. Most publications on this subject are based on the models of linear phenomenological irreversible thermodynamics, using an energetic approach, fluctuations from a stationary state, or a quasi-equilibrium process [38-41].
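A minimal sketch of (7.1b): the production rate as the sum of the model operator's positive eigenvalues, mirroring the Kolmogorov entropy as a sum of positive Lyapunov exponents (a Pesin-type relation). The diagonal matrices below are illustrative stand-ins for $A(\tau)$, not the paper's operator.

```python
import numpy as np

# Operator with one stable (negative) direction: only the positive part
# of the spectrum contributes to the production rate.
A = np.diag([0.5, 0.2, -0.1])
eig = np.linalg.eigvalsh(A)
pfr = eig[eig > 0].sum()                   # sum of positive eigenvalues

# With an all-positive spectrum, the PFR equals Tr A(tau), as in (7.1b).
B = np.diag([0.5, 0.2, 0.1])
eig_b = np.linalg.eigvalsh(B)
pfr_b = eig_b[eig_b > 0].sum()
trace_matches = np.isclose(pfr_b, np.trace(B))
```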
An actual irreversible macroprocess might arise from a random movement at the microlevel with a random entropy, using the information approach, while the relations of energy preservation could not be fulfilled. The main problem consists of the mathematical difficulties of applying a macroevolution approach to a random process and a random entropy. Some publications use an informational approach to self-organization, applying a control parameter for an evaluation of irreversibility in a state's transition [42]. The equations for a controllable irreversible information macroprocess are still unknown. The VP, applied to the information path functional defined on the solution of a controllable stochastic equation, brings the irreversible kinetic macroequation and its connection with diffusion.

Applying the Shannon entropy measure to a multi-dimensional random process with statistically dependent events leads to the unsolved problem of the long-term $n$-dimensional correlations, while these events are naturally connected by the entropy path functional. The lack of additivity (even for statistically independent events) leads to the problem related to the lack of thermodynamic extensivity [35]. The entropy measure of degree $\alpha$ and the $\alpha$-norm entropy measure [43-44] satisfy a "pseudo-additive" relation, associated with a nonextensive thermodynamics, rather than the additive relation provided by the Shannon and Renyi [45] entropies.

The evolutionary path functional's entropy is defined by a simple sum of the local entropies at each DP, according to (7.1b), which applies to an extensive dynamic system. But the extensivity is locally violated at the random window between the extremal segments. The evolutionary PFR forms a ranged sum satisfying the VP. The maximal and minimal PFR values characterize the maximal and minimal speeds of the evolutionary process, according to [6].
A current PFR is defined by a sequential enclosure of each previous model eigenvalue into the following one, connected by the IN structure. This allows getting the cooperative complexity for the whole process [7], as well as the PFR measure at each stage of evolution. The IN final node's eigenvalue characterizes both the system's terminal evolutionary speed and the system's cooperative complexity [7,8].

The algorithmic Kolmogorov (K) complexity [5] is measured by the relative differential entropy of one object ($k$) with respect to another object ($j$), which is represented by a shortest program in bits. The common entropy measure connects both the K-complexity and the information macrocomplexity $\Delta MC_{\delta}^{kj}$ [7]. So, the $\Delta MC_{\delta}^{kj}$ complexity measures the quantity of information (transmitted by the relative information flow) required to join the object $j$ with the object $k$, which can be expressed by the algorithm of a minimal program encoded in the IN communication code (sec.4). This program also measures the "difficulty" of obtaining information by $j$ from $k$ in the transition dynamics. Assigning a common digital $\Delta MC_{\delta}^{kj}$ measure to all communicated objects also allows determining the unknown constant $C_h$ in the K complexity [5]. The $\Delta MC_{\delta}^{kj}$ maximum represents the information measure between order and disorder in stochastic dynamics, and it can detect determinism amongst the randomness and singularities. Because the IPF has a limited time length, and the IPF strings are finite, being an upper bound, the considered cooperative complexity is computable, in contrast to the Kolmogorov incomputability. The MC-complexity is able to implement the introduced notion and measure of information independently of the probability measure, by applying the IN's information code.

The above results lead to a mutual connection of the model's Uncertainty, Regularity, and Stability.
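The Kolmogorov complexity itself is incomputable, but a real compressor gives a computable upper bound, so a normalized compression distance (NCD) can stand in for the relative cost of "joining" object $j$ with object $k$ discussed above. This sketch is a standard illustration, not the paper's $\Delta MC_{\delta}^{kj}$ measure; the data strings are arbitrary assumptions.

```python
import zlib

def clen(s: bytes) -> int:
    # Compressed length as a computable proxy for Kolmogorov complexity.
    return len(zlib.compress(s, 9))

def ncd(j: bytes, k: bytes) -> float:
    # Normalized compression distance between two objects.
    cj, ck, cjk = clen(j), clen(k), clen(j + k)
    return (cjk - min(cj, ck)) / max(cj, ck)

regular = b"ab" * 200          # highly regular object
other = bytes(range(256)) * 2  # a structurally different object

d_same = ncd(regular, regular)   # near 0: nothing new to describe
d_diff = ncd(regular, other)     # near 1: objects share little structure
```

A small NCD means one object is cheap to describe given the other, which is exactly the "difficulty of obtaining information by $j$ from $k$" intuition, made computable.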
Uncovering the regular causes of a random process lies at the foundation of revealing the process' regularities. Such an opportunity is provided by the IPF, whose information invariant encodes a chain of regular events covered by the random process' IPF. Therefore, the IPF measures the process' uncertainty by the entropy functional and allows minimizing it by the applied optimal controls. The IPF's Hamiltonian, which determines both an instant entropy production and the process macromodel's operator, also defines the LCE as the Lyapunov function of the process' stability, which connects the stability to the process' uncertainty. The process' optimization by the controls' actions changes the LCE sign at the DP, bringing the cooperative process' stability concurrently with the minimization of its uncertainty. Because most natural processes are random, understanding their regularities involves the minimization of the random uncertainties by the VP, which leads to imposing the dynamic constraints (sec.1) and getting the process' information dynamic model with all the above peculiarities. That is why the constraint imposition is considered a general method of revealing the dynamic regularities of a random process and its dynamic equations.

In the considered Maxwell demon's feedback [46], an observer first transforms a random uncertainty (events) into information (certainty, for example, expressed by a specific probability, which is a non-random measure of the random events), while information itself is a non-material substance. Second, to acquire this information and transform it through a feedback, the observer binds this information with a source of energy, which is only a carrier of information. Other carriers are different physical materials (for example, photosensitive elements, etc.).
Therefore, the described experiment shows not that information contains energy; the experiment just evaluates the energy which the measurement devices (including sensors and brain) spend on binding the incoming information for its transmission. Actually, the transformation of random uncertainty into information by a physical observer is accompanied by binding it with the observer's energy and/or its material substance, which serves as a carrier for the information transmission.

The considered VP is a formal mathematical mechanism transforming the entropy functional's uncertainty into the path functional's (IPF) information. According to the VP, this transformation requires spending a certain invariant quantity of information, which each IPF extremal's segment (a discrete interval) binds (sec.4). The VP-defined invariants $(a_o,a)$ take the values 0.70-0.23, depending on the interval's length, where the minimum belongs to an interval of a delta-function, or generally: $0\le a\le\ln 2$. At a fixed interval, the specific invariants' values evaluate the efficiency of binding the information concealed within the interval. According to the VP, each invariant measures an extreme quantity of information, which depends on the ratio of the imaginary and real eigenvalues for this interval.

Following the exponential form of the formula [46], $\langle\exp[(\Delta F-W)/k_B T]\rangle=I$, where $\Delta F$ is a free-energy difference between states, $W$ is the work done on the system, $I$ is information, $T$ is temperature, $k_B$ is the Boltzmann constant, and $\langle\bullet\rangle$ is the average over the considered ensemble, we get the same formula in logarithmic form $(\Delta F-W)/k_B T=\ln I$, where the energy-bound information takes the value $I\ge 1$, while at $\Delta F-W=0$, $I=1$. Therefore, information is bound only at $|\Delta F-W|>0$, $\ln I>0$. According to [46], $1\le I\le 2$, which corresponds to $0\le\ln I\le\ln 2$ and coincides with the VP invariants.
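A hedged numeric reading of the bound above: with $(\Delta F-W)/k_B T=\ln I$ and $1\le I\le 2$ (per [46], as reconstructed here), $\ln I$ spans $[0,\ln 2]$, matching the VP invariant range $0\le a\le\ln 2$. The thermal energy value is an illustrative assumption (roughly room temperature).

```python
import math

kBT = 4.11e-21  # J, assumed thermal energy k_B * T at about 298 K

for I in (1.0, 1.5, 2.0):
    ln_I = math.log(I)
    # ln I stays within the invariant range [0, ln 2].
    assert 0.0 <= ln_I <= math.log(2.0) + 1e-12
    # The associated free-energy/work gap |Delta_F - W| is non-negative
    # and vanishes only at I = 1 (no information bound).
    gap = kBT * ln_I
    assert gap >= 0.0

max_gap = kBT * math.log(2.0)  # gap at the upper bound I = 2
```

At $I=2$ the gap per bound bit is $k_B T\ln 2$, the familiar Landauer-scale energy, which is consistent with reading $\ln 2$ as one bit of bound information.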
These results confirm both the initial concept that information is not an energy, but rather that the energy binds the information defined by the VP, and that this information is precisely evaluated by the VP invariants.

References
1. Stratonovich R.L. Theory of Information, Sov. Radio, Moscow, 1975.
2. Durr D., Bach A. The Onsager-Machlup Function as Lagrangian for the Most Probable Path of a Diffusion Process, Communications in Mathematical Physics, 60 (2):153-170, 1978.
3. Adler S.L. and Horwitz L.P. Equilibrium Statistical Ensembles and Structure of the Entropy Functional in Generalized Quantum Dynamics, Intern. Journal of Theoretical Physics, 37 (1):519-529, 1998.
4. Lerner V.S. Variation Principle in Informational Macrodynamics, Kluwer-Springer Publ., Boston, 2003.
5. Kolmogorov A.N. Theory of Information and Theory of Algorithms, Nauka, 1987.
6. Lerner V.S. The evolutionary dynamics equations and the information law of evolution, International Journal of Evolution Equations, 2 (4), 2007.
7. Lerner V.S. Information complexity of evolutionary dynamics, International Journal of Evolution Equations, 3 (1):27-63, 2007.
8. Lerner V.S. Building the PDE macromodel of the evolutionary cooperative dynamics by solving the variation problem for an informational path functional, International Journal of Evolution Equations, 3 (3), 2008.
9. Gihman I.I., Scorochod A.V. Theory of Stochastic Processes, Vol. 2, 3, Nauka, Moscow, 1975.
10. Krylov N.V. Controlled Diffusion Processes, Springer-Verlag, New York, 1980.
11. Dynkin E.B. Theory of Markov Processes, Pergamon Press, New York, 1960.
12. Prochorov Y.V., Rozanov Y.A. Theory of Probabilities, Nauka, Moscow, 1973.
13. Lerner V.S. Dynamic approximation of a random information functional, J. Mathematical Analysis and Applications, 327 (1):494-514, 2007.
14. Stratonovich R.L. The Conditional Markovian Processes and Their Applications in Optimal Control Theory, Moscow University Press, Moscow, 1966.
15. Jaynes E.T.
Information Theory and Statistical Mechanics, in Statistical Physics, Benjamin, 1963.
16. Cover T.M. Elements of Information Theory, Stanford University Press, Stanford, 1989.
17. Freidlin M.I. and Wentzell A.D. Random Perturbations of Dynamical Systems, Springer, New York, 1984.
18. Gelfand I.M., Fomin S.V. Calculus of Variations, Prentice Hall, New York, 1962.
19. Kolmogorov A.N., Fomin S.V. Elements of the Theory of Functions and Functional Analysis, Nauka, Moscow, 1981.
20. Brogan W.L. Modern Control Theory, Prentice Hall, New York, 1991.
21. Shannon C.E., Weaver W. The Mathematical Theory of Communication, Illinois Press, Urbana, 1949.
22. Bellman R. Introduction to Matrix Analysis, McGraw, New York, 1960.
23. Michlin S.G. Mathematical Physics, Nauka, Moscow, 1967.
24. Alekseev V.M., Tichomirov V.M., Fomin S.V. Optimal Control, Nauka, Moscow, 1979.
25. Chirikov B.V. A Universal Instability of Many-Dimensional Oscillator Systems, Phys. Rep., 52:264-379, 1979.
26. Lichtenberg A.I., Liberman M.A. Regular and Stochastic Motion, Springer, 1983.
27. Hilborn R.C. Chaos and Nonlinear Dynamics, 2nd ed., Oxford Univ. Press, 2000.
28. Beck C. and Schlögl F. Thermodynamics of Chaotic Systems: An Introduction, Cambridge Univ. Press, 1993.
29. Lifshitz E.M., Pitaevsky L.P. Physical Kinetics, Nauka, 1979.
30. Stanley H.E. Introduction to Phase Transformation and Critical Phenomena, Oxford Univ. Press, 1980.
31. Gross D.H.E. Microcanonical Thermodynamics: Phase Transitions in "Small" Systems, Lecture Notes in Physics, 66, World Scientific, Singapore, 2001.
32. Latora V. and Baranger M. Kolmogorov-Sinai Entropy Rate versus Physical Entropy, Phys. Review Letters, 82:3, 1999.
33. Shell M.S., Debenedetti P.G. and Panagiotopoulos A.Z. Saddles in the Energy Landscape: Extensivity and Thermodynamic Formalism, Phys. Review Letters, 92:3, 2004.
34. Abe S. and Rajagopal A.K.
Implications of form invariance to the structure of nonextensive entropies, Phys. Review Letters, 83:1711, 1999.
35. Nonextensive Statistical Mechanics and Its Applications (eds. S. Abe, Y. Okamoto), Springer, 2001.
36. Nicolis G., Prigogine I. Self-Organization in Nonequilibrium Systems, Wiley, 1977.
37. Prigogine I. From Being to Becoming: Time and Complexity in Physical Sciences, Freeman, San Francisco, 1980.
38. Prigogine I. Etude Thermodynamique des Processus Irreversibles, Desoer, Liege, 1947.
39. De Groot S.R., Mazur P. Non-Equilibrium Thermodynamics, North Holland Publ., Amsterdam, 1962.
40. Dyarmati I. Irreversible Thermodynamics: The Field Theory and Variation Principles, Mir, 1974.
41. Graham R. Stochastic Methods in Nonequilibrium Thermodynamics. In L. Arnold, R. Lefever (eds.), Proceedings of Workshop: Stochastic Nonlinear Systems in Physics, Springer, 1981.
42. Zaripov R.G. Self-Organization and Irreversibility in Nonextensive Systems, Tat. Acad. Sci. Publ. House "Fen", Kazan, 2002.
43. Tsallis C., Brigatti E. Nonextensive Statistical Mechanics: A Brief Introduction, Continuum Mechanics and Thermodynamics, 16, 223, 2004.
44. Tsallis C. Nonextensive Generalization of Boltzmann-Gibbs Statistical Mechanics and Its Applications, Lectures, Inst. for Molec. Science, Okazaki, 1999.
45. Renyi A. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, Univ. California Press, Berkeley, 1960.
46. Toyabe S., Sagawa T., Ueda M., Muneyuki E. and Sano M. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality, Nature Physics, 2010, doi:10.1038/nphys1821.

Figures
Fig.1. The cooperation of the model's eigenvalues.
Fig.1a. A triplet's information structure with applying both the regular and needle controls.
Fig.2.
An illustration of the control's domain and the auxiliary curves.