A popular technique for approximating the square root of a number is to use the tangent line approximation. Say we wish to find $\sqrt{x}$, and we know that the square root of a neighbouring number $a$ is $\sqrt{a}$; then we would use the approximation: $\sqrt{x} \approx \sqrt{a} + \frac{x-a}{2\sqrt{a}}$. This approximation is called the tangent line approximation because it is derived by using the tangent line for the square root function to approximate its values. This approximation can also be derived by using the Taylor series of the square root function.
If we have a function $f(x)$ that we wish to approximate using a tangent line to $f$ at $x = a$, then the tangent line equation must be of the form:

$$y = mx + c$$

as it is a straight line. At the point $x = a$ the following equations must be satisfied:

$$y(a) = f(a), \qquad y'(a) = f'(a)$$

or in other words:

$$ma + c = f(a), \qquad m = f'(a)$$

substituting the second of these equations into the first yields:

$$c = f(a) - f'(a)a \implies y = f(a) + f'(a)(x - a)$$

which is the equation of the tangent line. If we let $f(x) = \sqrt{x}$ then:

$$f'(x) = \frac{1}{2\sqrt{x}}.$$

If we take the tangent line at $a$, a neighbouring number we know the square root of, and evaluate it at $x$, then this gives us:

$$\sqrt{x} \approx \sqrt{a} + \frac{x - a}{2\sqrt{a}}$$

which is our approximation for $\sqrt{x}$.
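In Julia, this approximation is a one-liner (a minimal sketch; the name tangent_sqrt is my own, and Julia's built-in sqrt supplies $\sqrt{a}$, since $a$ is chosen so that its square root is known exactly):

# Tangent line approximation to sqrt(x) about the point a
tangent_sqrt(x, a) = sqrt(a) + (x - a)/(2*sqrt(a))

tangent_sqrt(2, 1)   # returns 1.5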
While this approximation is often satisfactory, it can be substantially off if $x$ and $a$ differ by an amount that is fairly large relative to $a$.
In this case, you can use Newton's method to refine the approximation. Newton's method is a technique in which we essentially use the tangent line to approximate the zeros of a function. If one of that function's roots happens to be the square root you're searching for, applying Newton's method will give you ever-improving approximations to the square root. One function that is easy to compute exactly and whose root is $\sqrt{x}$ is $f(y) = y^2 - x$. Newton's method then gives us this scheme for approximating $\sqrt{x}$:

$$y_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)} = y_n - \frac{y_n^2 - x}{2y_n} = \frac{y_n}{2} + \frac{x}{2y_n}$$

where $y_n$ is our $n$th Newton's method approximation to $\sqrt{x}$.
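As a rough sketch (newton_sqrt is a hypothetical name, not any standard API), this scheme might be implemented in Julia as:

# Refine an initial estimate y0 of sqrt(x) with n iterations of
# Newton's method applied to f(y) = y^2 - x
function newton_sqrt(x, y0, n)
    y = y0
    for _ in 1:n
        y = y/2 + x/(2y)
    end
    return y
end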
A classic example where the tangent line approximation is pretty poor is when we are trying to find $\sqrt{2}$ using $a = 1$. The tangent line approximation gives:

$$\sqrt{2} \approx 1 + \frac{2 - 1}{2\sqrt{1}} = 1.5$$

when of course $\sqrt{2}$ is, to five decimal places, $1.41421$, and the above approximation is only accurate to one significant figure (or not even that if you round it to the nearest integer). We can improve on this approximation using Newton's method. Our first iteration is:

$$y_1 = \frac{1.5}{2} + \frac{2}{2 \times 1.5} \approx 1.4166667.$$

Now this is clearly not accurate to five decimal places, so another iteration of Newton's method is required:

$$y_2 = \frac{y_1}{2} + \frac{2}{2y_1} \approx 1.4142157.$$

This is accurate to four decimal places; the fifth is not, as we would round it up to two, not one. Running Newton's method again yields:

$$y_3 = \frac{y_2}{2} + \frac{2}{2y_2} \approx 1.41421356237469$$

which is an accurate approximation of $\sqrt{2}$ to 11 decimal places.
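The sketch from earlier reproduces these iterates (assuming the hypothetical newton_sqrt defined above):

newton_sqrt(2, 1.5, 1)   # ~1.4166667
newton_sqrt(2, 1.5, 2)   # ~1.4142157
newton_sqrt(2, 1.5, 3)   # ~1.41421356237469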
Another example is finding $\sqrt{10}$ using $a = 9$. The tangent line approximation yields:

$$\sqrt{10} \approx 3 + \frac{10 - 9}{2 \times 3} \approx 3.1666667.$$

The actual value of $\sqrt{10}$ is $3.16228$ to five decimal places, so this approximation will not suffice. Applying Newton's method yields:

$$y_1 = \frac{3.1666667}{2} + \frac{10}{2 \times 3.1666667} \approx 3.1622807$$

which is accurate to five decimal places.
Next, consider finding $\sqrt{5}$ using $a = 4$. The tangent line approximation yields:

$$\sqrt{5} \approx 2 + \frac{5 - 4}{2 \times 2} = 2.25.$$

The actual value of $\sqrt{5}$ is approximately $2.2360680$. The first iteration of Newton's method yields:

$$y_1 = \frac{2.25}{2} + \frac{5}{2 \times 2.25} \approx 2.2361111$$

so this is only accurate to three decimal places. The next iteration of Newton's method yields:

$$y_2 = \frac{y_1}{2} + \frac{5}{2y_1} \approx 2.2360679779$$

which is accurate to nine decimal places, or eight if we account for the effect of rounding.
Similarly, the cube root of a number can be approximated by a tangent line approximation and refined using Newton's method. If we let $f(x) = \sqrt[3]{x}$, then $f'(x) = \frac{1}{3x^{2/3}}$ and:

$$y = f(a) + f'(a)(x - a).$$

Letting $a$ be some neighbouring point we know the cube root of exactly, and $x$ the number we want to find the cube root of, we get (if $a \neq 0$):

$$\sqrt[3]{x} \approx \sqrt[3]{a} + \frac{x - a}{3\sqrt[3]{a^2}}.$$

And to refine this approximation we can use Newton's method with $f(y) = y^3 - x$ and hence $f'(y) = 3y^2$. Therefore we use the formula:

$$y_{n+1} = y_n - \frac{y_n^3 - x}{3y_n^2}.$$
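As with the square root, these two formulas translate to a short Julia sketch (tangent_cbrt and newton_cbrt_step are hypothetical names; Julia's built-in cbrt supplies $\sqrt[3]{a}$):

# Tangent line approximation to cbrt(x) about the point a (a must be nonzero)
tangent_cbrt(x, a) = cbrt(a) + (x - a)/(3*cbrt(a)^2)

# One iteration of Newton's method for f(y) = y^3 - x
newton_cbrt_step(y, x) = y - (y^3 - x)/(3*y^2)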
Say we wish to find $\sqrt[3]{65}$. Here we use $a = 64$, as $4^3 = 64$ and hence we know the cube root of 64 is 4. Therefore our tangent line approximation to the cube root is:

$$\sqrt[3]{65} \approx 4 + \frac{65 - 64}{3 \times 4^2} \approx 4.0208333.$$

The correct value of $\sqrt[3]{65}$ is approximately $4.020725759$, so this is not accurate to five decimal places. So to improve this estimate we apply Newton's method:

$$y_1 = y_0 - \frac{y_0^3 - 65}{3y_0^2} \approx 4.02072576$$

which is accurate to seven decimal places, or eight if we round off the remaining digits from there.
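Using the sketch above, this refinement is one call (again assuming the hypothetical helpers):

y0 = tangent_cbrt(65, 64)        # ~4.0208333
y1 = newton_cbrt_step(y0, 65)    # ~4.02072576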
Newton's method can be used for finding the roots of any continuous real-valued function, or even a system of continuous real-valued functions, although it has some shortcomings. One is that it requires an initial guess as to the root, and its results can depend heavily on this initial guess. Additionally, it can sometimes fail to converge, such as when the root occurs at a stationary point of the function (a maximum or minimum), as there the derivative is zero. For example, it fails for a polynomial like $x^2$, whose only real root is at $x = 0$, where the function is at its minimum. For polynomial equations that have real roots, this failure to converge is uncommon provided the initial guess is reasonably close to the true value of the root.
Newton's method belongs to a class of methods known as the Householder's methods. Newton's method is the first-order Householder's method, Halley's method is the second-order Householder's method, and it is possible to derive the $d$th-order Householder's method using the formula:

$$x_{n+1} = x_n + d \, \frac{\left(1/f\right)^{(d-1)}(x_n)}{\left(1/f\right)^{(d)}(x_n)}$$

where the bracketed superscripts denote derivatives of the superscript's order; for example, $\left(1/f\right)^{(d)}$ denotes the $d$th derivative of $1/f$. The reason I mention this fact is that sometimes, especially when the lower-order derivatives are equal to zero at the root, higher-order Householder's methods may be superior.
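To make this concrete, here is a single step of Halley's method written out in Julia (a sketch; halley_step is a hypothetical name, and f, fp and fpp are the function and its first two derivatives). It is algebraically equivalent to setting $d = 2$ in the formula above:

# One step of Halley's method (the second-order Householder's method)
halley_step(f, fp, fpp, x) = x - 2*f(x)*fp(x)/(2*fp(x)^2 - f(x)*fpp(x))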
Getting the initial guesses for the roots, especially of polynomials that have multiple real roots, can be a challenge. One technique is to use the bisection method. In this technique one takes an interval over which the sign of the function changes from positive to negative or vice versa (which, for a continuous function, implies that at least one root must lie within this interval) and repeatedly subdivides this interval, re-evaluating the function at the endpoints, until the root is found. This method is very slow, so usually one would only apply it a few times, and then once our interval is acceptably small we would apply Newton's method to get the root. One can also evaluate the function at a long list of points within an interval and find where in the interval the sign changes, as that is where a root will be. If the interval is large enough and the function has multiple real roots, there may be multiple sign-change points and hence multiple roots we can converge to using Newton's method. Like Newton's method, however, this approach fails when the root is at a minimum or maximum, as the function does not change sign across such a root.
These methods can be done by hand, but for many problems they can be quite tedious; as such, I wrote this Julia script that implements the bisection method and Newton's method to find the roots of a given function:
#!/usr/bin/env julia
"""
bisection(f::Function, N::Integer, a::Number, b::Number)
Uses the bisection method with N subdivisions of the specified domain [a, b]
to find the roots of the function f within said domain. Used to find the
initial guess that Newton's method then uses to get a more precise estimate of
the roots.
"""
function bisection(f, N, a, b)
x = LinRange(a, b, N+1)
fval = f.(x)
xv = x[2:end]
initGuess = xv[diff(sign.(fval)).!= 0]
return initGuess
end
"""
newtons(f::Function, h::Float, tol::Float, itMax::Integer,
initGuess::Vector{Float})
Uses Newton's method (and if needed, up to third order Householder's methods)
to refine initGuess, our initial guess of the root(s) of f, until either itMax
iterations has been performed or the relative magnitude of the update Newton's
provides to the estimate is below tol. h is the step size used to approximate
the derivative.
"""
function newtons(f, h, tol, itMax, initGuess)
# Derivatives
function fd(x, h)
return (f(x+h)-f(x-h))/(2*h)
end
function f2d(x, h)
return (f(x+h)-2*f(x)+f(x-h))/(h^2)
end
function f3d(x, h)
return (f(x+2*h)-2*f(x+h)+f(x-h)-f(x-2*h))/(2*h^3)
end
# Check to see if (relative) difference in update is below tol
function tolCheck(diff, sol, tol)
if ! (sol ≈ 0)
check = abs(diff/sol) > tol
else
check = abs(diff) > tol
end
return check
end
# Initialize sol
sol = initGuess
count = Int64.(zeros(size(initGuess)))
for j=1:length(initGuess)
diff = 1
while (tolCheck(diff, sol[j], tol) && count[j] < itMax)
fn = f(sol[j])
der = fd(sol[j], h)
der2 = f2d(sol[j], h)
der3 = f3d(sol[j], h)
if (der ≈ 0) && (der2 ≈ 0)
# Third order Householder's method
diff = (6*fn*der^2 - 3*fn^2*der2)/(6*der^3-6*fn*der*der2 + fn^2*der3)
elseif (der ≈ 0)
# Second order, Halley's method
diff = 2*fn*der/(-fn*der2 + 2*der^2)
else
# First order, Newton's method
diff = fn/der
end
sol[j] -= diff
count[j] += 1
end
if (count[j] == itMax)
print("Maximum iterations exceeded and the amount by which ")
println("Newton's last updated the solution was: ", diff)
end
end
# Delete NaN entries from sol and corresponding count entries
# While loop is used because sol's length will change over this loop
j = 1;
while j < length(sol)
if isnan(sol[j])
deleteat!(sol, j)
deleteat!(count, j)
end
j += 1;
end
return sol, count
end
"""
findRoot(f::Function, h::Float, tol::Float, itMax::Integer, a::Number, b::Number, N::Integer)
Uses the bisection method to get an initial guess of the root(s) of f on the
domain [a, b] with N subdivisions, then uses Newton's method with a maximum of
itMax iterations and a relative error tolerance of tol. h is the step size
used to approximate the derivative.
"""
function findRoot(f, h, tol, itMax, a, b, N)
initGuess = bisection(f, N, a, b)
sol, count = newtons(f, h, tol, itMax, initGuess);
return sol, count
end
"""
printSoln(sol::Vector{Float}, count::Vector{Integer})
Print the solution and the number of iterations used.
"""
function printSoln(sol, count)
if (length(sol) == 1)
println("Root = ", sol[1], ", count = ", count[1])
else
for i=1:length(sol)
println("The $(i)th root = ", sol[i], ", count = ", count[i])
end
end
end
# This is where you specify the function you want to
# find the root of
function f(x)
    return x^4 + x^3 - 10x^2 - 4x + 16
end
# Initialize required variables
h = 1e-10
tol = 1e-15
itMax = 1000
a = -100
b = 100
N = 100000
sol, count = findRoot(f, h, tol, itMax, a, b, N)
printSoln(sol, count)
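For the quartic specified above, f changes sign four times over [-100, 100], so running the script should print four roots, each with the number of iterations Newton's method took to converge to it.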