2008 International Conference on Computational Intelligence and Security

Abstract

A novel algorithm for solving nonlinear equations is proposed. The computation is carried out by a simple gradient descent rule with an adaptive, variable step size. To ensure that the algorithm is absolutely convergent, a convergence theorem is presented and proved. The theorem gives a theoretical criterion for selecting the magnitude of the learning rate η. Specific examples illustrate the application of the method; the results show that the proposed method solves nonlinear equations effectively, with rapid convergence and very high accuracy. Furthermore, it has the added advantage of being able to solve nonlinear equations exactly.
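The approach described above can be sketched as minimizing the squared residual E(x) = ½·f(x)² by gradient descent, adapting the learning rate η along the way. The abstract does not give the paper's exact adaptation rule or its theoretical bound on η, so the grow-on-success / shrink-on-failure heuristic below is an illustrative assumption:

```python
import math

def solve_scalar(f, fprime, x0, eta=0.1, max_iter=10000, tol=1e-12,
                 grow=1.05, shrink=0.5):
    """Find a root of f by gradient descent on E(x) = 0.5 * f(x)**2.

    The step size eta is grown slightly after each successful step and
    shrunk after a step that increases the error. (This is a common
    adaptive-step heuristic; the paper's own rule and the convergence
    criterion for eta from its theorem are not reproduced here.)
    """
    x = x0
    err = 0.5 * f(x) ** 2
    for _ in range(max_iter):
        if err < tol:
            break
        grad = f(x) * fprime(x)      # dE/dx by the chain rule
        x_new = x - eta * grad
        err_new = 0.5 * f(x_new) ** 2
        if err_new < err:            # error decreased: accept, grow eta
            x, err = x_new, err_new
            eta *= grow
        else:                        # error increased: reject, shrink eta
            eta *= shrink
    return x

# Example: a root of f(x) = x**2 - 2, i.e. sqrt(2)
root = solve_scalar(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Rejecting steps that raise the error is what keeps the iteration monotonically convergent; the paper's convergence theorem instead constrains η directly so that every step is acceptable.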
