Logarithmic Barrier Optimization Problem Using Neural Network
Authors: A. K. Ojha, C. Mallick, D. Mallick
JOURNAL OF COMPUTING, VOLUME 1, ISSUE 1, DECEMBER 2009, ISSN 2151-9617

Abstract — Combinatorial optimization is one of the important applications of neural network computation. Linearly constrained continuous optimization problems are difficult to solve with an exact algorithm, but an algorithm for such problems can be derived using a logarithmic barrier function. In this paper we attempt to solve the linearly constrained optimization problem using a general logarithmic barrier function to obtain an approximate solution. The barrier parameter behaves as a temperature, decreasing to zero from a positive number chosen large enough to make the barrier function convex. We develop an algorithm that generates a sequence of solutions along a decreasing sequence of barrier parameters converging to zero.

Index Terms — Barrier function, weight matrix, barrier parameter, Lagrange multiplier, s-t max cut problem, descent direction.

1 INTRODUCTION

Let us consider a graph G = (V, E), where V = {1, 2, ..., n} is the set of nodes and E is the set of edges. The edge between two nodes i and j is denoted by (i, j). Let W = (w_ij)_{n x n} be a symmetric weight matrix, so that

    w_ij = w_ji,  w_ii = 0,  i, j = 1, 2, ..., n.    (1.1)

Given two nodes s and t, the s-t max cut problem asks for a partition of V into two sets, one containing s and the other containing t, that maximizes the total weight of the edges crossing the partition. Representing a partition by a vector x in {-1, 1}^n, with x_i = 1 when node i lies on the side of s, the weight of the cut is (1/4) sum_{i,j} w_ij (1 - x_i x_j), and the problem can be written as the linearly constrained continuous optimization problem (1.4) over the box B = [-1, 1]^n.

2 THE LOGARITHMIC BARRIER PROBLEM

Augmenting the objective of (1.4) with a general logarithmic barrier term weighted by a parameter β > 0 yields the barrier problem (2.1). For β > 0, the first-order necessary optimality condition of (2.1) says that if x* is a minimum point of (2.1), then there exists a Lagrange multiplier λ satisfying (2.5). For i ≠ s, t this condition reduces to (2.6), while for i = s or i = t we derive the corresponding condition involving λ. Let the direction d = (d_1, ..., d_n) be defined componentwise by (2.7) for i = 1, 2, ..., n. The above discussion shows that d is a descent direction of the barrier objective at x whenever d ≠ 0. Before proving our main theorem we prove the following lemma.

2.1 Lemma

Let x be a feasible interior point of B. Then the components of the direction d defined by (2.7) satisfy the sign conditions (i)-(v), and in particular d is a feasible descent direction whenever it is nonzero.

Proof. We first establish the auxiliary identities (2.8) and (2.9). At any feasible interior point, each component of the gradient of the barrier objective can only be zero, negative, or positive, and we treat these three cases in turn (Cases I-III); in each case, combining (2.8) and (2.9) with the definition (2.7) yields the stated sign of d_i. Using this result, the remaining parts of the lemma can be proved similarly.

Now consider all components together. The direction d satisfies the desired property, and when searching for a better point in B the box constraint is satisfied automatically provided the step length lies between zero and the largest step that keeps the point strictly inside B. Given any interior point x at which d is a feasible descent direction, the step length is obtained by solving the one-dimensional line-search problem (2.10); since B is bounded, the solution of (2.10) is bounded as well.

Using this feasible descent direction, together with an update of the Lagrange multiplier λ, we have developed an algorithm for approximating a solution of (2.1) subject to the linear constraint. Let {β_k} be any given sequence of positive numbers such that β_1 > β_2 > ... and lim_{k→∞} β_k = 0. The value of β_1 should be sufficiently large that the barrier objective is convex over B. Let x^0 be an arbitrary nonzero interior point of B satisfying the linear constraint. For m = 1, 2, ..., starting at x^{m-1}, we generate the feasible descent direction and search along it for a better feasible interior point.
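The defining formulas (2.1) and (2.5)-(2.9) are not legible in the available scan, so the following Python sketch is an assumption rather than the authors' exact construction: it uses the entropy-type logarithmic barrier that is standard in the deterministic annealing literature for the s-t max cut problem (cf. Dang [5]). With this choice the Hessian is -W + β diag(2/(1 - x_i^2)), so the objective is convex over B once 2β ≥ λ_max(W), which is the role played by the "sufficiently large" β_1 above. The names barrier_objective and barrier_gradient are illustrative.

```python
import numpy as np

def barrier_objective(x, W, beta):
    # Hypothetical stand-in for the barrier objective in (2.1): the
    # quadratic cut term plus an entropy-type logarithmic barrier that
    # blows up at the boundary of B = [-1, 1]^n.
    x = np.clip(x, -1 + 1e-12, 1 - 1e-12)   # stay strictly inside the box
    quad = -0.5 * x @ W @ x
    barrier = np.sum((1 + x) * np.log(1 + x) + (1 - x) * np.log(1 - x))
    return quad + beta * barrier

def barrier_gradient(x, W, beta):
    # Componentwise gradient: -(Wx)_i + beta * ln((1 + x_i)/(1 - x_i)).
    # The barrier's second derivative is 2/(1 - x_i^2) >= 2, so a large
    # beta dominates -W and makes the objective convex over B.
    x = np.clip(x, -1 + 1e-12, 1 - 1e-12)
    return -(W @ x) + beta * np.log((1 + x) / (1 - x))
```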
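Continuing the sketch, the annealing loop below assumes the single linear constraint x_s + x_t = 0 (forcing s and t to opposite sides of the cut), with the Lagrange multiplier λ appearing as the projection coefficient in the descent direction, and assumes the barrier parameter is reduced by a multiplicative factor mu, matching the decreasing sequence β_1 > β_2 > ... → 0. A backtracking line search capped at 90% of the distance to the boundary of B stands in for the exact step rule (2.10), which is not legible in the source.

```python
def step_to_boundary(x, d):
    # Largest step alpha with x + alpha*d still inside [-1, 1]^n.
    with np.errstate(divide="ignore", invalid="ignore"):
        hi = np.where(d > 0, (1 - x) / d, np.inf)
        lo = np.where(d < 0, (-1 - x) / d, np.inf)
    return min(hi.min(), lo.min())

def st_maxcut_barrier(W, s, t, beta1, mu=0.6, eps=1e-6, inner_iters=200):
    # Deterministic-annealing sketch: approximately minimize the barrier
    # objective for a decreasing sequence of temperatures beta_k, then
    # round the final interior iterate to a +/-1 cut.
    n = W.shape[0]
    rng = np.random.default_rng(0)
    x = rng.uniform(-0.1, 0.1, n)          # nonzero interior starting point
    x[t] = -x[s]                           # assumed constraint: x_s + x_t = 0
    beta = beta1
    while beta > eps:
        for _ in range(inner_iters):
            g = barrier_gradient(x, W, beta)
            lam = 0.5 * (g[s] + g[t])      # multiplier for x_s + x_t = 0
            d = -g
            d[s] += lam                    # project d onto the constraint,
            d[t] += lam                    # so that d_s + d_t = 0
            if np.linalg.norm(d) < 1e-10:
                break                      # stationary for this beta
            alpha = 0.9 * step_to_boundary(x, d)
            f0 = barrier_objective(x, W, beta)
            while alpha > 1e-12 and barrier_objective(x + alpha * d, W, beta) >= f0:
                alpha *= 0.5               # backtrack until the step decreases f
            x = x + alpha * d
        beta *= mu                         # anneal: beta_{k+1} = mu * beta_k
    return np.where(x >= 0, 1.0, -1.0)     # round off to a feasible +/-1 vector
```

Because the barrier term forces the iterates away from ±1 while β is large and releases them as β → 0, rounding the final iterate yields the ±1 vector that the main theorem below relies on.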
3 MAIN RESULTS

Using the above lemma we can prove the following main theorem.

Theorem 1. Every limit point of the sequence {x^k} generated by the algorithm is a stationary point of (2.1) subject to the linear constraint.

Proof. Let {β_k} be any given sequence of positive numbers such that β_1 > β_2 > ... and lim_{k→∞} β_k = 0, with β_1 sufficiently large that the barrier objective is convex over B. Let x^0 be an arbitrary nonzero interior point of B satisfying the linear constraint. For k = 1, 2, ..., starting from x^{k-1}, we derive the feasible descent direction given by (2.7). When β is sufficiently small, a feasible solution in which every component equals either -1 or 1 can be generated by rounding off the iterate.

Since B is bounded and the iterates remain in its interior, the sequence {x^k} is bounded, and the corresponding step lengths are bounded as well. For any i = 1, 2, ..., n, no limit point of {x^k} has its i-th component equal to either -1 or 1; hence, from the above lemma, every limit point is a feasible solution of (2.1) subject to the linear constraint.

Let A(x) denote the point-to-set map assigning to x the set of points reachable from x by the descent step with step length solving (2.10), and let h denote the barrier objective. We now prove that A is closed at every point. Let u be an arbitrary point, let {u_r} be a sequence converging to u, and let {v_r}, with v_r in A(u_r) for r = 1, 2, ..., be a sequence converging to v. To prove that A is closed we must show that v belongs to A(u). Since h is continuous, h(u_r) converges to h(u) and h(v_r) converges to h(v) as r → ∞. Passing to the limit in the inequalities defining A(u_r), we obtain that h(v) attains the minimum in the definition of A(u); according to the definition of A(x), it follows that v belongs to A(u).

By the Bolzano-Weierstrass theorem we can extract a convergent subsequence from the bounded sequence {x^k}. Let z be its limit point; since h is continuous, the corresponding objective values converge to h(z). Considering the associated directions and step lengths given by (2.10), which are bounded, we can extract a further convergent subsequence. If z were not a stationary point, then, since A is closed, the limit of this subsequence would belong to A(z) and would strictly decrease h, contradicting the convergence of the objective values. Thus the use of the logarithmic barrier function finds a minimal point of the linearly constrained continuous optimization formulation of the s-t max cut problem.

4 NUMERICAL EXAMPLE

In order to establish the effectiveness and efficiency of the algorithm, we solved randomly generated test instances using MATLAB. In our computation we took ε = 0.000001 and β_1 = 1 - S_min/2, where S_min is the minimum eigenvalue of W - I, and we report the following quantities:

NI = number of iterations;
OBJM = objective value of (1.4) obtained by the algorithm;
OBJU = greatest integer value of the objective of the relaxed problem, subject to the trace constraint with X positive semidefinite;
RATIO = (OBJU - OBJM)/OBJU.

The weights w_ij are random integers between 1 and 50. The results computed for the parameter value m = 0.6 are given in the following table.

Numerical results for m = 0.6

No. of Test    NI     OBJM     OBJU     RATIO
1              100    35670    35798    0.03
2              150    36150    37630    0.02
3              200    46260    47350    0.02
4              250    20350    21450    0.02
5              300    35672    36505    0.02

The computed ratio is close to 0.02 across the tests, indicating convergence to a local minimum point and suggesting that the algorithm is effective.
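The authors' MATLAB code is not reproduced in the paper; the following rough Python sketch, reusing st_maxcut_barrier and the barrier functions from the Section 2 sketch, mirrors the stated setup: random integer weights between 1 and 50, ε = 0.000001 and β_1 = 1 - S_min/2 with S_min the minimum eigenvalue of W - I, and the value 0.6 interpreted (an assumption) as the multiplicative factor m for the barrier parameter. Computing OBJU would require a semidefinite programming solver, so the trivial bound sum_{i<j} w_ij stands in for it here, and the printed ratio is therefore not comparable to the table.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
W = rng.integers(1, 51, size=(n, n)).astype(float)
W = np.triu(W, 1)
W = W + W.T                                  # symmetric weights in [1, 50], zero diagonal

s_min = np.linalg.eigvalsh(W - np.eye(n)).min()
beta1 = 1.0 - s_min / 2.0                    # beta_1 = 1 - S_min/2, as stated in the text

x = st_maxcut_barrier(W, s=0, t=1, beta1=beta1, mu=0.6, eps=1e-6)
objm = 0.25 * np.sum(W * (1 - np.outer(x, x)))   # cut weight of the rounded solution
obju = np.sum(np.triu(W, 1))                     # trivial bound standing in for OBJU
print(f"OBJM = {objm:.0f}, bound = {obju:.0f}, ratio = {(obju - objm) / obju:.2f}")
```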
5 CONCLUSION

In this paper we have taken a general logarithmic barrier function to solve the continuous optimization formulation of the s-t max cut problem and have developed an algorithm based on the generalized logarithmic barrier function. The algorithm generates a sequence of solutions, along a decreasing sequence of barrier parameters, that converges to at least a local minimum point of (2.1) with every component equal to either -1 or 1.

REFERENCES

[1] S. Aiyer, M. Niranjan and F. Fallside, "A theoretical investigation into the performance of the Hopfield model," IEEE Transactions on Neural Networks, 1, 204-215, 1990.
[2] S. J. Benson, Y. Ye and X. Zhang, "Mixed linear and semidefinite programming for combinatorial and quadratic optimization," preprint, 1998.
[3] D. Van den Bout and T. Miller, "Graph partitioning using annealed networks," IEEE Transactions on Neural Networks, 1, 192-203, 1990.
[4] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, New York, 1993.
[5] C. Dang, "Approximating a solution of the s-t max cut problem with a deterministic annealing algorithm," Neural Networks, 13, 801-810, 2000.
[6] R. Durbin and D. Willshaw, "An analogue approach to the travelling salesman problem using an elastic net method," Nature, 326, 689-691, 1987.
[7] A. Gee, S. Aiyer and R. Prager, "An analytical framework for optimizing neural networks," Neural Networks, 6, 79-97, 1993.
[8] A. Gee and R. Prager, "Polyhedral combinatorics and neural networks," Neural Computation, 6, 161-180, 1994.
[9] S. Gold and A. Rangarajan, "Softassign versus softmax: benchmarks in combinatorial optimization," Advances in Neural Information Processing Systems 8, 626-632, MIT Press, Cambridge, MA, 1996.
[10] J. Hopfield and D. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, 52, 141-152, 1985.
[11] A. Rangarajan, S. Gold and E. Mjolsness, "A novel optimizing network architecture with applications," Neural Computation, 8, 1041-1060, 1996.
[12] P. Simic, "Statistical mechanics as the underlying theory of 'elastic' and 'neural' optimizations," Network, 1, 89-103, 1990.
[13] K. Urahama, "Gradient projection network: analog solver for linearly constrained nonlinear programming," Neural Computation, 8, 1061-1073, 1996.
[14] J. van den Berg, "Neural Relaxation Dynamics," Ph.D. thesis, Erasmus University Rotterdam, The Netherlands, 1996.
[15] E. Wacholder, J. Han and R. Mann, "A neural network algorithm for the multiple traveling salesman problem," Biological Cybernetics, 61, 11-19, 1989.
[16] F. Waugh and R. Westervelt, "Analog neural networks with local competition: I. Dynamics and stability," Physical Review E, 47, 4524-4536, 1993.
[17] W. Wolfe, M. Parry and J. Macmillan, "Hopfield-style neural networks and the TSP," Proceedings of the 7th IEEE International Conference on Neural Networks, IEEE Press, New York, 4577-4582, 1994.
[18] L. Xu, "Combinatorial optimization neural nets based on a hybrid of Lagrange and transformation approaches," Proceedings of the World Congress on Neural Networks, San Diego, CA, 399-404, 1994.
[19] A. Yuille and J. Kosowsky, "Statistical physics algorithms that converge," Neural Computation, 6, 341-356, 1994.

Dr. A. K. Ojha received a Ph.D. in Mathematics from Utkal University in 1997. He is currently an Assistant Professor in Mathematics at IIT Bhubaneswar, India, and his research interests include neural networks, genetic algorithms, geometric programming and particle swarm optimization. He has served for more than 27 years in different government colleges in the state of Orissa, and has published 22 research papers in various journals as well as 7 books for degree students, including Fortran 77 Programming, A Text Book of Modern Algebra and Fundamentals of Numerical Analysis.
Dr. C. Mallick received a Ph.D. in Mathematics from Utkal University in 2008. He is currently a Lecturer in Mathematics at B.O.S.E., Cuttack, India, and his research interest is neural networks. He has published 3 books for degree students, including A Text Book of Engineering Mathematics and Interactive Engineering Mathematics.

Mr. D. Mallick received an M.Phil. in Mathematics from Sambalpur University in 2002. He is currently an Assistant Professor in Mathematics at Centurion Institute of Technology, Bhubaneswar, India, and his research interests are neural networks and optimization theory. He has served for more than 6 years in different colleges in the state of Orissa and has published 6 research papers in various journals.