2008 International Conference on Computational Intelligence and Security

Abstract

A neural-network algorithm for solving a set of nonlinear equations is proposed. The computation is carried out by a simple gradient-descent rule with a variable step size. To ensure that the algorithm converges, a convergence theorem is presented and proved; the theorem provides the theoretical criterion for selecting the magnitude of the learning rate. Specific examples with multivariable nonlinear equations illustrate the application of the method. The results show that the proposed method solves nonlinear equation systems effectively, with very rapid convergence and very high accuracy. Furthermore, it has the added advantage of being able to solve nonlinear equation systems exactly.
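The abstract does not spell out the network structure or the exact step-size rule, but the described approach fits the standard formulation of minimizing the energy E(x) = ½‖f(x)‖² by gradient descent with an adaptive learning rate. The following is a minimal sketch under that assumption; the function names and the example system are illustrative, not taken from the paper.

```python
import numpy as np

def solve_nonlinear_system(f, jac, x0, lr=0.1, tol=1e-10, max_iter=10000):
    """Minimize E(x) = 0.5*||f(x)||^2 by gradient descent with a variable step size.

    Sketch only: the acceptance/rejection step-size rule below is an assumption,
    not the paper's specific scheme.
    """
    x = np.asarray(x0, dtype=float)
    energy = 0.5 * np.dot(f(x), f(x))
    for _ in range(max_iter):
        grad = jac(x).T @ f(x)           # gradient of E: J(x)^T f(x)
        x_new = x - lr * grad
        new_energy = 0.5 * np.dot(f(x_new), f(x_new))
        if new_energy < energy:          # accept the step and grow the step size
            x, energy = x_new, new_energy
            lr *= 1.1
        else:                            # reject the step and shrink the step size
            lr *= 0.5
        if energy < tol:
            break
    return x

# Hypothetical example system: x^2 + y^2 = 4, x*y = 1
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
jac = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(solve_nonlinear_system(f, jac, x0=[1.0, 1.0]))
```

The adaptive rule here simply enlarges the learning rate after a successful (energy-decreasing) step and halves it otherwise, which is one common way to realize a variable step size within the convergence bound the paper's theorem would prescribe.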
