

Turkish Journal of Computer and Mathematics Education Vol.12 No.3 (2021), 681–689

Research Article

Secant Method Superiority Over Newton’s Method in Solving Scalar Nonlinear Equations

Ahmed Hadi Aboamemah1,2, Annie Gorgey1∗, Noorhelyna Razali3, Mohd Asrul Hery Ibrahim4

1 Universiti Pendidikan Sultan Idris, 35900 Tanjong Malim, Perak, Malaysia
2 Educational Directorate of Kufa, Al-Iskan Bridge, Kufa 54003, Iraq
3 Unit of Fundamental Engineering Studies, Faculty of Engineering and Built Environment, The National University of Malaysia, 43600 Bangi, Malaysia
4 Faculty of Entrepreneurship and Business, Universiti Malaysia Kelantan, Kelantan, Malaysia

*1 annie_gorgey@fsmt.upsi.edu.my

Article History: Received: 10 November 2020; Revised: 12 January 2021; Accepted: 27 January 2021; Published online: 05 April 2021

Abstract: This study aims to pinpoint the cases or situations that make the performance of the secant method superior to the performance of Newton’s method in solving certain selected nonlinear equations. Although the convergence order of Newton’s method is higher, that does not necessarily mean it performs faster than the secant method for all nonlinear problems. In addition, both methods are almost identical in their approach, and the secant method has advantages that can make its performance superior or equivalent to Newton’s method for some nonlinear equations. Four numerical experiments are given to clarify the situations in which the performance of the secant method is shown to be superior to Newton’s method.

Keywords: Newton, Secant, scalar nonlinear equation, root finding.

AMS Subject Classification: 65L05, 65L06.

1. Introduction

Solving nonlinear equations by standard iterative methods such as the secant and Newton methods to obtain their approximate solutions is one of the classical problems of science. Choosing or determining the most appropriate method, the one that solves a given equation the fastest, is not an easy matter, because the iterative methods all differ in their convergence order and approach. It is commonly assumed that the method with the higher convergence order will perform much faster than the other methods. However, this is not true for all nonlinear equations; a higher order does not necessarily mean the generated sequence reaches the desired root in the least time. The essential quantities that play the main role in measuring the performance of an iterative method are the convergence order, the number of iterations, the number of function evaluations and the accuracy of the computed root. Since the different iterative methods perform differently on different nonlinear equations, the method that gives the lowest computational time with the best root accuracy is considered the most efficient.

Another consideration that has a direct impact on the convergence of the iterative methods is the determination of the interval for an appropriate choice of the initial guess value [1, 2, 4]. The secant and Newton methods are considered in this study for solving a nonlinear function f : R → R with f(x) = 0, where x is a single root in the interval [a, b]. This article is written to identify the cases in which the performance of the secant method can exceed or equal that of Newton’s method, despite the fact that Newton’s method surpasses the secant method in the most important element, its quadratic convergence order. Even though Newton’s method requires fewer iterations than the secant method, it is still possible for the secant method to outperform Newton’s method, because this feature is offset by other advantages of the secant method: it requires fewer function evaluations, unlike Newton’s method which requires evaluating the function and its first derivative at each iteration, and it usually lags behind Newton’s method by only one or two iterations. The secant method uses a secant line to generate its sequence, whereas Newton’s method uses the tangent line. The approaches of the secant and Newton methods are almost identical, even though the secant method predates Newton’s method by more than 3000 years [5]. This similarity between both methods, and their various characteristics, can make the secant method perform faster than Newton’s method. However, the crucial questions are: in which cases, or when, does the secant method perform faster than Newton’s method? Is the complexity of the composite function one of the cases in which the secant method can outperform Newton’s method? Are there situations, related to the form of the function curve, in which the secant method’s performance exceeds or equals Newton’s method?

In order to answer these questions, a few nonlinear equations are chosen based on combinations of two or three functions. The numerical experiments for these nonlinear equations are performed for the secant and Newton methods using the Scilab programming language. Choosing the initial approximation for both methods is the most critical step, so that an equal comparison can be made between the secant and Newton methods. There are two ways to choose an appropriate initial approximation: firstly by the Intermediate Value Theorem, and secondly by plotting the graph of the particular function. Since one of the objectives of this study is to compare the performances of the Newton and secant methods according to the form of the function curve that corresponds to the approximate values of the root, we use both ways, the Intermediate Value Theorem and the plot of the function. Thus the endpoints of the interval that contains a root are taken as the initial approximations for the secant method, while one of these endpoints is taken as the initial approximation for Newton’s method.

A brief explanation of the secant and Newton methods is given in Section 2. In Section 3 we present and discuss the results of our numerical experiments for four nonlinear equations, and the final section concludes the findings.

2. The Iterative Methods

Newton’s Method

The Newton-Raphson method, or in brief Newton’s method, is a very renowned open iterative method and the most widely used algorithm owing to its simplicity and quadratic convergence order. It requires only one initial approximation [6, 9]. There are many ways to deduce the geometrical approach of Newton’s method. The basic one is to approximate an appropriate initial guess value x0 to the required root by the tangent line to the function f(x) at the point (x0, f(x0)). The point where this tangent line intersects the x-axis gives the first enhanced approximation x1 to the root. The intersection point (x1, 0) can be found by elementary calculus as long as f′(x0) ≠ 0, resulting in the following formula:

x1 = x0 − f(x0)/f′(x0).

This procedure can be repeated with x1 as a new approximate value to get x2, and so on until it reaches the desired root with an acceptable accuracy. Newton’s method can be written in the general form

xn+1 = xn − f(xn)/f′(xn),   n = 0, 1, 2, . . .
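As a minimal illustration of this iteration (a Python sketch under assumed names, not the authors’ Scilab code), the general form above can be combined with the stopping test |xn+1 − xn| ≤ TOL that is used later in the numerical experiments; the helper name newton, its arguments and the returned iteration count are our own choices.

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=100):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), fprime(x)
        if dfx == 0:
            # zero slope: the tangent line never crosses the x-axis
            raise ZeroDivisionError("f'(x_n) = 0, Newton step undefined")
        x_new = x - fx / dfx
        if abs(x_new - x) <= tol:      # stopping criterion |x_{n+1} - x_n| <= TOL
            return x_new, n            # root estimate and number of iterations
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```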

In general, Newton’s method converges quadratically to simple roots and linearly to multiple roots of equations, as long as an appropriate value has been chosen as the first approximation [8]. Nevertheless, this depends on the equation under consideration and the value selected as the initial approximation. There are some cases in which Newton’s method does not converge successfully, such as:

a. when the value of the function slope at an approximate value generated by the Newton iteration, or at the chosen initial approximation, is either zero, i.e. f′(xn) = 0, or very small. In the first case (f′(xn) = 0) the tangent line does not cross the x-axis, so the next approximate value is undefined. In the latter case, a small value of the function slope means the tangent line crosses the x-axis far away from the current approximation, and the iteration may converge to a root far from the required one or never converge to any root.

b. Newton’s method fails to converge to the required root if the initial approximation is far from this root.

c. Newton’s method is not guaranteed to converge to all roots, i.e. it can be divergent. A typical example of this case is the function f(x) = ∛x. Even when the sequence generated by Newton’s method converges, there is no guarantee it converges to a solution or to the desired root [7, 13].

d. in some cases, Newton iterations oscillate without converging to the root.

Secant Method

The secant method is one of the most well-known numerical algorithms used to find an approximate solution of a nonlinear equation f(x) = 0. It is an open iterative method, although it requires two initial approximations. It uses the secant line through the two points (xn−1, f(xn−1)) and (xn, f(xn)) to intersect the x-axis at xn+1, which is considered the next enhanced approximation to the desired root. To a large extent, the secant method is similar to Newton’s method, geometrically and constructionally: in Newton’s method the intersection point of the tangent line and the x-axis represents the next approximation to the root, whereas in the secant method the intersection point of the secant line and the x-axis represents the next approximation to the root [10]. The recurrence formula of the secant method is almost like the iteration formula of Newton’s method. The secant iteration formula is obtained by replacing the first derivative of the function at xn, f′(xn), in the Newton iteration by the slope (f(xn) − f(xn−1))/(xn − xn−1).

The general recurrence formula for the secant method [11, 5] is given by

xn+1 = xn − f(xn)(xn − xn−1)/(f(xn) − f(xn−1)),   n = 1, 2, 3, . . .

This similarity between the secant and Newton methods does not mean that the secant method is derived from Newton’s method, or the opposite: the secant method and secant-like methods go back to the rule of Double False Position, and they anticipated Newton’s method by more than 3000 years [5].

The sequence xn generated by the secant method as the approximate values of the root of an equation converges super-linearly to simple roots provided that f′(x*) ≠ 0, where x* is the actual root; it loses the super-linear convergence property and converges only linearly when the multiplicity of the root is greater than one [11, 3]. The secant method has two characteristics that distinguish it from Newton’s method. The first is that it is derivative-free, i.e. it does not require evaluating the first derivative of the function. The second is that it requires only one function evaluation per iteration, except for the first iteration, which requires two function evaluations f(x0) and f(x1), since each later iteration reuses the value f(xn) that has already been evaluated as its f(xn−1) [12].
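A matching Python sketch of this recurrence (again with assumed names, not the authors’ Scilab implementation) makes both distinguishing features explicit: no derivative is evaluated, and after the first iteration only one new function evaluation is added per step because f(xn) is reused as the next iteration’s f(xn−1).

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)              # two evaluations, needed only at the start
    for n in range(1, max_iter + 1):
        if f1 == f0:
            # horizontal secant line: the next iterate is undefined
            raise ZeroDivisionError("f(x_n) = f(x_{n-1}), secant step undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol:        # stopping criterion |x_{n+1} - x_n| <= TOL
            return x2, n
        x0, f0 = x1, f1                # reuse f(x_n) as the next iteration's f(x_{n-1})
        x1, f1 = x2, f(x2)             # only one new function evaluation per iteration
    raise RuntimeError("no convergence within max_iter iterations")
```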

The following theorem is used repeatedly in this article, therefore it is important.

Theorem 1. The Intermediate Value Theorem (IVT): for a real-valued function f(x), continuous and defined for all x over a closed interval [a, b], if f(a) · f(b) < 0 then the open interval (a, b) contains at least one point c with f(c) = 0, where c is one of the roots of the function f [14].
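Since the experiments below select their intervals by this theorem, a small helper (hypothetical, for illustration only) suffices to confirm that an interval brackets a root; here it is checked on the interval [2.3, 2.32] used for Equation (1) in the next section.

```python
import math

def brackets_root(f, a, b):
    """IVT check: a continuous f with f(a)*f(b) < 0 has at least one root in (a, b)."""
    return f(a) * f(b) < 0

# interval used for Equation (1) below
print(brackets_root(lambda x: x**3 - math.tan(x**2 - 7), 2.3, 2.32))   # True
```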

3. Numerical Experiments

In order to determine the situations in which the performance of the secant method exceeds or equals that of Newton’s method, the computational tests are performed on four nonlinear equations chosen as combinations of two or three functions. Although many other nonlinear equations have also been studied, these four nonlinear equations are chosen for this article because they give interesting results. The interval that contains a root is determined by the Intermediate Value Theorem, and its endpoints are taken as the initial approximations for both methods. The error tolerance used to check the convergence condition is TOL = 10⁻¹⁰, applied as the stopping criterion |xn+1 − xn| ≤ TOL. The performances of the secant and Newton methods are compared by the number of iterations (IT), the root accuracy (f(xn)), the number of function evaluations (FE) and the computational time (CPU, in seconds). The CPU time is measured in Scilab using the timer() function.
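As a concrete sketch of this comparison (a Python stand-in for the authors’ Scilab timer() runs, reusing the hypothetical newton and secant helpers from Section 2), the measurements for Equation (1) could be collected as follows; the derivative df1 is worked out by hand, and the FE counts follow the per-iteration costs described in Section 2. Exact iteration counts and timings may differ slightly from those reported in Tables 1 and 2.

```python
import math, time

def f1(x):                       # Equation (1): x^3 - tan(x^2 - 7)
    return x**3 - math.tan(x**2 - 7)

def df1(x):                      # derivative of f1, computed by hand (not listed in the paper)
    return 3*x**2 - 2*x / math.cos(x**2 - 7)**2

TOL = 1e-10

t0 = time.perf_counter()
root_s, it_s = secant(f1, 2.3, 2.32, tol=TOL)   # endpoints of [2.3, 2.32] as x0, x1
cpu_s = time.perf_counter() - t0

t0 = time.perf_counter()
root_n, it_n = newton(f1, df1, 2.3, tol=TOL)    # left endpoint as the single initial guess
cpu_n = time.perf_counter() - t0

# FE: the secant method needs it_s + 1 evaluations of f; Newton needs it_n of f and it_n of f'
print(f"secant: root={root_s:.16f} IT={it_s} FE={it_s + 1} CPU={cpu_s:.7f}s")
print(f"newton: root={root_n:.16f} IT={it_n} FE={2 * it_n} CPU={cpu_n:.7f}s")
```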

Equation 1

x³ − tan(x² − 7) = 0.    (1)

Equation (1) has infinitely many roots; the root located within the interval [2.3, 2.32], which satisfies the IVT condition, is considered for the comparison of the performances of the secant and Newton methods. The graph of Equation (1) is given in Figure 1. The numerical results obtained by the secant and Newton methods are shown in Table 1 and Table 2, respectively.

(4)

Figure 1. Graph shows the zeros of the function x³ − tan(x² − 7) = 0

Table 1. The desired root by the Secant method, x8 = 2.3126900191040676 at 0.0020177 seconds for Equation (1)

IT  xn−1         xn           xn+1                  f(xn+1)
1   2.30000000   2.32000000   2.3072407824302519     2.86968260
2   2.32000000   2.30724078   2.3103615245764697     1.42573959
3   2.30724078   2.31036152   2.3134429243127470    -0.54847247
4   2.31036152   2.31344292   2.3125868547157973     0.07139900
5   2.31344292   2.31258685   2.3126854598543942     0.00317368
6   2.31258685   2.31268546   2.3126900467376412    -0.00001924
7   2.31268546   2.31269005   2.3126900190966668     0.00000001
8   2.31269005   2.31269002   2.3126900191040676     0.00000000

Based on Table 1 and Table 2, it can be seen that both the Secant and Newton methods require at least 8 iterations to give the desired root. The secant method converges to x8 = 2.3126900191040676 in 0.0020177 seconds, with f(x8) = 1.386 × 10⁻¹³. On the other hand, Newton’s method converges to x8 = 2.3126900191040680 and requires 0.0064737 seconds, with f(x8) = −1.279 × 10⁻¹³.

Table 2. The desired root by Newton’s method, x8 = 2.3126900191040680 at 0.0064737 seconds for Equation (1)

IT  xn           xn+1                  f(xn+1)
1   2.30000000   2.3225492352328638   -16.0569501020373980
2   2.32254924   2.3183059329613620    -5.8164298915414232
3   2.31830593   2.3145222553114593    -1.4292585442272063
4   2.31452226   2.3128864200139305    -0.1383466410280985
5   2.31288642   2.3126922839851929    -0.0015772110321119
6   2.31269228   2.3126900194054012    -0.0000002098136527
7   2.31269002   2.3126900191040680    -0.0000000000001279
8   2.31269002   2.3126900191040680    -0.0000000000001279

The fundamental factor in determining the performance of an iterative method is the number of iterations, but if the methods reach the root at the same iteration, or the difference is small, then the other factors play a role in determining the best-performing method. It is noticed that both the secant and Newton methods cost 8 iterations to reach the root, but the secant method performs faster. Regarding the iteration cost, Newton’s method is usually cheaper; in addition, if the initial approximation is close enough to the desired root, then the number of iterations does not increase much. Nevertheless, from the results for Equation (1) we can conclude that if the form of the function curve in the convergence region is a straight line, then Newton’s method requires more iterations even though the initial approximate value is very near to the desired root. In such a situation the secant method is not much affected: it is identical to Newton’s method in iteration cost, and it performs faster than Newton’s method because it requires a smaller number of function evaluations.

Equation 2

(2)

Equation (2) is a combination of natural logarithmic and polynomial functions, and according to Figure 2 and the IVT it has three real roots, located within the intervals [−1, 0], [0, 0.3] and [1.386, 1.4]. The root located within [0, 0.3] is considered, with the endpoints of this interval as the initial values, in solving Equation (2) by the secant and Newton methods.

Figure 2. Graph shows the zeros of the function ln .x3 − 2x1/3 + cos(2x) + +2x = 0.

Table 3. The desired root by the Secant method, x8 = 0.0193881052728025 at 0.0024948 seconds for Equation (2)

IT  xn−1         xn           xn+1                  f(xn+1)
1   0.00000000   0.30000000   0.0295760127607170    -0.06894057
2   0.30000000   0.02957601   0.0244519659521298    -0.0363050476352078
3   0.02957601   0.02445197   0.0187517726779325     0.0049363336874456
4   0.02445197   0.01875177   0.0194340499242796    -0.0003527063836969
5   0.01875177   0.01943405   0.0193885513958130    -0.0000034271379159
6   0.01943405   0.01938855   0.0193881049629548     0.0000000023802812
7   0.01938855   0.01938810   0.0193881052728046    -0.0000000000000160
8   0.01938810   0.01938811   0.0193881052728025    -0.0000000000000001

Based on Table 3 and Table 4, it can be seen that the secant method requires at least 8 iterations to give the desired root. The secant method gives the root x8 = 0.0193881052728025 at 0.0024948 seconds, with the root satisfying the convergence condition with f(x8) = −6.245 × 10⁻¹⁷. On the other hand, Newton’s method requires at least 9 iterations to converge to x9 = 0.0193881052728025 and requires 0.0047762 seconds, with f(x9) = −7.633 × 10⁻¹⁷.

Table 4. The desired root by Newton’s method, x9 = 0.0193881052728025 at 0.0047762 seconds for Equation (2)

IT  xn           xn+1                  f(xn+1)
1   0.30000000   0.2784754391379510    -2.1854000378783782
2   0.27847544   0.2161568511617479    -1.0918272965925160
3   0.21615685   0.1036173432376764    -0.4083272067755032
4   0.10361734   0.0085398355113373     0.1042226125815470
5   0.00853984   0.0167747837391839     0.0209370933544190
6   0.01677478   0.0192763128125396     0.0008602578715080
7   0.01927631   0.0193879156039043     0.0000014570593959
8   0.01938792   0.0193881052722584     0.0000000000041804
9   0.01938811   0.0193881052728025    -0.0000000000000001

Before going into the details of the convergence of the considered iterative methods, note that the interval that contains a root cannot always be determined easily just by plotting the graph of the function; it needs to be checked that its endpoints satisfy the IVT condition, and vice versa. The sequence generated by the secant method requires 8 iterations to converge to the root 0.0193881052728025, whereas Newton’s sequence requires 9 iterations. We can see that, for this particular case, the secant method converges faster than Newton’s method. The important thing here is determining the situation in which the secant method performs faster, or knowing the reason behind this superior performance of the secant method even though its convergence order is lower than that of Newton’s method. So the form of the function curve that corresponds to the approximate values, as given in Figure 2, needs to be examined, and the values of the convergent sequences generated by the secant and Newton methods traced to observe the difference between them. It is noticed that there is a deflection of the function curve in the convergence region, and the convergent sequence of the secant method converges steadily to the desired root from the beginning, without oscillation, unlike the Newton sequence, whose values only begin to converge steadily from the fifth iteration. This deflection gives the secant method a superiority because the secant method is not affected by it, unlike Newton’s method. This difference in convergence between the secant and Newton methods is also due to the difference in their approach: Newton’s method uses the tangent line, whose direction depends on one approximate value, whereas the secant method uses the secant line, whose direction depends on two approximate values. The secant method is not affected at all as long as the initial approximations x0 and x1 are at the endpoints of an interval satisfying the IVT condition.

Equation 3

ln(x² + 1) − e^(0.4x) cos(πx) = 0.    (3)

Equation (3) is a combination of logarithmic, exponential and trigonometric functions. The roots of Equation (3) are located within [n, n+1] for n = −1, 0, 1, 2, . . ., that is, there is a root between each two successive integers n and n+1.

The equation is solved for two of these roots, on which the performances of the secant and Newton methods are compared. The first root lies in [0, 1] while the second root lies in [9, 10], as shown in Figure 3.
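For the root in [0, 1], a usage sketch (reusing the hypothetical secant and newton helpers from Section 2; the derivative df3 is worked out by hand) would look as follows.

```python
import math

def f3(x):                       # Equation (3): ln(x^2 + 1) - e^(0.4x) cos(pi x)
    return math.log(x**2 + 1) - math.exp(0.4 * x) * math.cos(math.pi * x)

def df3(x):                      # derivative of f3, computed by hand
    return (2*x / (x**2 + 1)
            - 0.4 * math.exp(0.4 * x) * math.cos(math.pi * x)
            + math.pi * math.exp(0.4 * x) * math.sin(math.pi * x))

# root in [0, 1]: the secant method uses both endpoints, Newton starts from x0 = 0
root_s, it_s = secant(f3, 0.0, 1.0)   # converges directly towards 0.4506567...
root_n, it_n = newton(f3, df3, 0.0)   # first wanders to x1 = -2.5 before converging
```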

Figure 3. Graph shows the zeros of the function ln(x² + 1) − e^(0.4x) cos(πx) = 0

Table 5. The desired root by the secant method, x7 = 0.4506567478899357 at 0.0013121 seconds for Equation (3)

IT  xn−1         xn           xn+1                  f(xn+1)
1   0.00000000   1.00000000   0.3139745147655032    -0.53152274
2   1.00000000   0.31397451   0.4482056461021560    -0.0107481365682272
3   0.31397451   0.44820565   0.4509760087898803     0.0014025733452157
…
7   0.45065675   0.45065675   0.4506567478899357    -0.0000000000000001

Based on Table 5 and Table 6, it can be seen that the secant method requires only 7 iterations to give the desired root. The secant method gives the root x7 = 0.4506567478899357 at 0.0013121 seconds, with f(x7) = −1.110 × 10⁻¹⁶. On the other hand, Newton’s method requires at least 9 iterations to converge to x9 = 0.4506567478899358 and requires 0.0020441 seconds, with f(x9) = 1.665 × 10⁻¹⁶.

Table 6. The desired root by the Newton method, x9 = 0.4506567478899358 at 0.0020441 seconds for Equation (3)

IT  xn            xn+1                   f(xn+1)
1    0.00000000   -2.5000000000000000    1.9810014688665831
2   -2.50000000   -1.4265090326639478    1.2395204883419342
3   -1.42650903   -2.9018886310086733    2.5414237630293899
4   -2.90188863    0.2940645162222948   -0.5950696622921750
5    0.29406452    0.4866597170291678    0.1616565231805711
6    0.48665972    0.4513632168332483    0.0031044525137830
7    0.45136322    0.4506570801332679    0.0000014592968201
8    0.45065708    0.4506567478900095    0.0000000000003239
9    0.45065675    0.4506567478899358    0.0000000000000002

Solving Equation (3) for the root in [0, 1] reveals a preponderance of the secant method over Newton’s method in two respects. The first is that the secant method converges to the desired root steadily, with fewer iterations and less computational time than Newton’s method; the second is that it is guaranteed to converge to the root, without oscillations, for whatever initial guess values are chosen, provided they lie within the interval [0, 1]. This observation applies to all roots of Equation (3) and demonstrates the advantage of the secant method in a situation such as Equation (3). Looking at the function curve, one observes that it contains many minimum and maximum points, where the value of the function slope is either zero or very small. In such a case the choice of the initial guess value for Newton’s method is critical, and Newton’s method converges to the required root only after wandering in the first three iterations, because the direction of the tangent line is influenced by the curvature corresponding to x0 = 0, so that it intersects the x-axis far from the current point, at x1 = −2.5. To overcome this wandering, one needs to avoid choosing the values that correspond to the maximum and minimum points as the initial approximation for Newton’s method. This advantage of the secant method comes from the fact that it needs two initial approximations, which keeps the direction of the secant line tied or restricted, without being affected by any deflection or curvature of the function curve corresponding to the initial approximations, provided that no two successive approximations lie on that deflection or curvature together in one iteration.

Equation 4

e^(x²) − 3e^(x+1) cos(2πx) = 0.    (4)

Equation (4) and Equation (3) have almost similar descriptions: both equations have maximum and minimum points, but the difference is that Equation (3) has only one root between each two successive integers within [−1, ∞), while Equation (4) has two roots between each two successive integers within [−1, 2]. The root located within [−0.5, 0] has been chosen for solving Equation (4) by the secant and Newton methods.
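The corresponding usage sketch for Equation (4) (again reusing the hypothetical secant and newton helpers from Section 2, with the derivative df4 worked out by hand and the starting values of Tables 7 and 8) is given below.

```python
import math

def f4(x):                       # Equation (4): e^(x^2) - 3 e^(x+1) cos(2 pi x)
    return math.exp(x**2) - 3 * math.exp(x + 1) * math.cos(2 * math.pi * x)

def df4(x):                      # derivative of f4, computed by hand
    return (2 * x * math.exp(x**2)
            - 3 * math.exp(x + 1) * math.cos(2 * math.pi * x)
            + 6 * math.pi * math.exp(x + 1) * math.sin(2 * math.pi * x))

# starting values as in Tables 7 and 8
root_s, it_s = secant(f4, -0.4, 0.0)   # root near -0.2242065
root_n, it_n = newton(f4, df4, -0.4)
```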


Table 7. The desired root by the Secant method, x5 = -0.2242065044038205 at 0.0017638 seconds for Equation (4)

IT  xn−1          xn            xn+1                   f(xn+1)
1   -0.40000000    0.00000000   -0.2244528618373598     0.01032952
2    0.00000000   -0.22445286   -0.2241292842515177    -0.0032382395200623
3   -0.22445286   -0.22412928   -0.2242065130545776     0.0000003627581957
4   -0.22412928   -0.22420651   -0.2242065044041232     0.0000000000126927
5   -0.22420651   -0.22420650   -0.2242065044038205     0.0000000000000016

Table 8. The desired root by the Newton method, x7 = -0.2242065044038205 at 0.0023047 seconds for Equation (4)

IT  xn            xn+1                   f(xn+1)
1   -0.40000000   -0.0650089179245102    -6.0086830938529934
2   -0.06500892   -0.2942221455849130     2.7571097395127540
3   -0.29422215   -0.2169683018716153    -0.3044198408878596
4   -0.21696830   -0.2241876886633248    -0.0007890202312599
5   -0.22418769   -0.2242065042433090    -0.0000000067308428
6   -0.22420650   -0.2242065044038205     0.0000000000000016
7   -0.22420650   -0.2242065044038205     0.0000000000000002

Based on Table 7 and Table 8, it can be seen that the secant method requires only 5 iterations to give the desired root, x5 = −0.2242065044038205, at 0.0017638 seconds, with the root satisfying the convergence condition with f(x5) = 1.554 × 10⁻¹⁵, compared with Newton’s method, which requires 7 iterations to converge to x7 = −0.2242065044038205 at 0.0023047 seconds with f(x7) = 2.220 × 10⁻¹⁶.
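Counting function evaluations with the per-iteration costs from Section 2 (our own tally, not reported in the paper) makes the gap explicit:

secant: 5 iterations → 5 + 1 = 6 evaluations of f;
Newton: 7 iterations → 7 evaluations of f + 7 evaluations of f′ = 14 evaluations.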

The numerical results for the root of Equation (4) located within [−0.5, 0], in Tables 7 and 8, confirm that the secant method can perform faster than Newton’s method based on the computational time. Moreover, the secant method requires two iterations fewer than Newton’s method, despite avoiding the values that correspond to the maximum and minimum points as the initial approximation for Newton’s method. The performance of Newton’s method in solving Equation (4) is slower compared with solving Equation (3), although both equations have a similar description, while the opposite holds for the secant method. Consider the performances of both the secant and Newton methods in solving Equations (3) and (4), whose roots are located between two values corresponding to maximum and minimum points. We can conclude that whenever the distance between successive roots is small, the secant method will perform faster than Newton’s method. On the other hand, for Newton’s method, whenever this distance is smaller, the function curve becomes semi-vertical to the x-axis, and in such cases Newton’s method requires more iterations, as noticed for Equation (1). Furthermore, it is noticed that both points of the initial approximations for the secant method are situated on the straight-line part of the function curve that contains the point of the desired root, and this can be considered the main reason for the secant method’s advantage. In such a case, not only is the secant method’s superior performance achieved, but it is also the method most guaranteed to find the desired root.

4. Conclusion

In this study, we have clarified some situations in which the performance of the secant method is superior to Newton’s method, although Newton’s method converges faster than the secant method in general. We determined three situations, related to the form of the function curve, in which the secant method can be superior to Newton’s method: a) if the function curve is a straight or semi-straight line in the convergence region, b) if there is a deflection of the function curve at the approximate values, and c) if the function curve has maximum and minimum points. In the last case, the convergence of both the secant and Newton methods is subject to the guess values chosen as the initial approximations and to the form of the function curve in the convergence region; based on the numerical results, the convergence of Newton’s method is more affected than the convergence of the secant method. Finally, the choice of the initial approximation is critical for both methods, and the endpoints of an interval can always be chosen as the initial approximations for the secant method provided that they satisfy the IVT condition.


References

1. Calderón, G., Villamizar, J. M. and Carrillo, J. C. E. (2019). Real effectiveness of iterative method for solving nonlinear equations, Journal of Physics: Conference Series 1159(1): 012015.

2. Ehiwario, J. and Aghamie, S. (2014). Comparative Study of Bisection, Newton-Raphson and Secant Methods of Root-Finding Problems, IOSR Journal of Engineering (IOSRJEN) 4(04): 2278–8719.

3. Vianello, M. and Zanovello, R. (1992). On the superlinear convergence of the secant method, The American Mathematical Monthly 99(8): 758–761.

4. Adhikari, I. (2017). Interval and Speed of Convergence on Iterative Methods, Himalayan Physics: 112– 114.

5. Papakonstantinou, J. M. and Tapia, R. A. (2013). Origin and evolution of the secant method in one dimension, The American Mathematical Monthly 120(6): 500–517.

6. McDougall, T. J. and Wotherspoon, S. J. (2014). A simple modification of Newton’s method to achieve convergence of order 1 + √2, Applied Mathematics Letters 29: 20–25.

7. Horton, P. (2007). No fooling! Newton’s method can be fooled, Mathematics Magazine 80(5): 383– 387.

8. Nghiêm, H. (2021). Problems in general K-theory. Mathematical Statistician and Engineering Applications, 70(2), 120-132.

9. Polyak, B. T. (2007). Newton’s method and its use in optimization, European Journal of Operational Research 131(3): 1086–1096.

10. Ramos, H. and Vigo-Aguiar, J. (2015). The application of Newton’s method in vector form for solving nonlinear scalar equations where the classical Newton method fails, Journal of Computational and Applied Mathematics 275: 228–237.

11. Kumar, R. and Vipan. (2015). Comparative Analysis of Convergence of Various Numerical Methods, Journal of Computer and Mathematical Sciences: 290–297.

12. Díez, P. (2003). A note on the convergence of the secant method for simple and multiple roots, Applied Mathematics Letters 16(8): 1211–1215.

13. Nijmeijer, M. J. (2014). A method to accelerate the convergence of the secant algorithm, Advances in Numerical Analysis 2014.

14. Akram, S. and Ann, Q. U. (2015). Newton Raphson method, International Journal of Scientific & Engineering Research 6(7).

15. Rojer, L. (2021). On the characterization of paths. Mathematical Statistician and Engineering Applications, 70(2), 153-162.

16. Chhabra, C. (2014). Improvements in the bisection method of finding roots of an equation, 2014 IEEE International Advance Computing Conference (IACC) 11–16.
