FunctionIntegrator.jl

FunctionIntegrator.jl should be treated as a second-rate alternative to the excellent QuadGK package. QuadGK provides more accurate integration for many problems, and also provides an error estimate, which the functions in this package do not.

This package provides the following functions:

• adaptive_simpsons_rule(f::Function, a::Number, b::Number)

• chebyshev_quadrature(f::Function, N::Number, k::Integer, a::Number, b::Number)

• hermite_quadrature(f::Function, N::Number, k::Integer)

• jacobi_quadrature(f::Function, N::Number, α::Number, β::Number, a::Number, b::Number)

• laguerre_quadrature(f::Function, N::Number, k::Integer)

• legendre_quadrature(f::Function, N::Number, a::Number, b::Number)

• lobatto_quadrature(f::Function, N::Number, a::Number, b::Number)

• radau_quadrature(f::Function, N::Number, a::Number, b::Number)

• rectangle_rule_left(f::Function, N::Number, a::Number, b::Number)

• rectangle_rule_midpoint(f::Function, N::Number, a::Number, b::Number)

• rectangle_rule_right(f::Function, N::Number, a::Number, b::Number)

• rombergs_method(f::Function, N::Number, a::Number, b::Number)

• simpsons_rule(f::Function, N::Number, a::Number, b::Number)

• simpsons38_rule(f::Function, N::Number, a::Number, b::Number)

• trapezoidal_rule(f::Function, N::Number, a::Number, b::Number)

Use Julia's help function (e.g. by typing ?chebyshev_quadrature) to find usage information, should you need it. The "Choice of function" table below also explains some of the details of each of these functions, such as their arguments.

This package is currently in Julia's General registry, thus to install it one merely needs to run:

julia> using Pkg

julia> Pkg.add("FunctionIntegrator")

and import it using:

julia> using FunctionIntegrator

Choice of function

As a general rule of thumb, adaptive_simpsons_rule should be the function you use when you are unsure which function to use, as its accuracy is more predictable. The main time to avoid adaptive_simpsons_rule is when there are unremovable singularities at the endpoints of the domain of integration, in which case chebyshev_quadrature with $$k=1$$ or legendre_quadrature is likely best.
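For intuition, adaptive Simpson's rule with the Lyness criterion can be sketched in a few lines of pure Julia. This is an illustrative sketch, not the package's implementation; the names simpson and adaptive_simpson (and the eps keyword) are made up here:

```julia
# One-panel Simpson estimate on [a, b].
function simpson(f, a, b)
    c = (a + b) / 2
    (b - a) / 6 * (f(a) + 4f(c) + f(b))
end

# Adaptive Simpson's rule: recursively bisect [a, b] until the Lyness
# error estimate |left + right - whole| / 15 falls below the tolerance.
function adaptive_simpson(f, a, b; eps = 1e-10)
    c = (a + b) / 2
    whole = simpson(f, a, b)
    left  = simpson(f, a, c)
    right = simpson(f, c, b)
    if abs(left + right - whole) <= 15 * eps
        # Accept, with a Richardson-style correction term.
        return left + right + (left + right - whole) / 15
    end
    # Otherwise recurse on each half with half the tolerance budget.
    adaptive_simpson(f, a, c; eps = eps / 2) +
        adaptive_simpson(f, c, b; eps = eps / 2)
end
```

A call such as adaptive_simpson(cos, 0, pi/2) should then agree with the exact value 1 to roughly the requested tolerance.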

Each of the functions whose name ends with _quadrature uses Gaussian quadrature, the specifics of which differ between functions. Of them, legendre_quadrature is perhaps the best to go with when you are uncertain which of the _quadrature functions to choose, as it can be applied to any problem with a finite domain and usually arrives at a result accurate to at least 7 decimal places provided $$N \geq 1 \times 10^{4}$$. legendre_quadrature has the added benefit of being perhaps the fastest _quadrature function after controlling for the accuracy of the result obtained.
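To illustrate what Gauss-Legendre quadrature involves, here is a hypothetical, self-contained sketch (the package itself appears to obtain nodes and weights from FastGaussQuadrature, judging by the stack trace in the Examples section; the function name gauss_legendre is made up here). Nodes are the roots of the Legendre polynomial $$P_N$$, found by Newton's method:

```julia
# Gauss-Legendre quadrature of f on [a, b] with N nodes (illustrative).
function gauss_legendre(f, N, a, b)
    total = 0.0
    for i in 1:N
        # Standard initial guess for the i-th root of P_N.
        x = cos(pi * (i - 0.25) / (N + 0.5))
        local dp
        for _ in 1:100  # Newton iterations
            # Three-term recurrence: p1 = P_N(x), p0 = P_{N-1}(x).
            p0, p1 = 1.0, x
            for k in 2:N
                p0, p1 = p1, ((2k - 1) * x * p1 - (k - 1) * p0) / k
            end
            dp = N * (x * p1 - p0) / (x^2 - 1)  # P_N'(x)
            dx = p1 / dp
            x -= dx
            abs(dx) < 1e-15 && break
        end
        w = 2 / ((1 - x^2) * dp^2)  # Gauss-Legendre weight
        # Map the node from [-1, 1] to [a, b].
        total += w * f((b - a) / 2 * x + (a + b) / 2)
    end
    (b - a) / 2 * total
end
```

Even modest N gives near machine-precision results for smooth integrands, e.g. gauss_legendre(cos, 20, 0, pi/2).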

The test/ folder has test scripts that approximate various integrals (each file, except test/runtests.jl, pertains to a different integral); the N values given are the smallest possible that pass each of the tests listed (except when the test involves a less-than (<) sign). If you want to know which function to use for which integral, these tests may be useful as a rough guide.

Figure 1: a bar graph comparing the computation times for each of the flexible-domain methods provided by the package, except rectangle_rule_left and rectangle_rule_right.1 Data is from this build.
For each function below, the domain2, weight function, arguments, and notes are given.
adaptive_simpsons_rule
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• ε is the relative tolerance in the solution (that is, the maximum amount of relative error in the solution you are willing to tolerate).
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses adaptive Simpson's rule with the Lyness criterion.
chebyshev_quadrature
Domain: $$[a,b]$$ (nodes $$-1\leq x \leq 1$$).
Weight:
$$k=1$$: $$\dfrac{1}{\sqrt{1-x^2}}$$
$$k=2$$: $$\sqrt{1-x^2}$$
$$k=3$$: $$\sqrt{\dfrac{1+x}{1-x}}$$
$$k=4$$: $$\sqrt{\dfrac{1-x}{1+x}}$$
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• k is the type of Chebyshev quadrature being used. 1, 2, 3, and 4 refer to the Chebyshev $$T_n$$, $$U_n$$, $$V_n$$ and $$W_n$$ polynomials, respectively.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Chebyshev-Gauss quadrature (note that the usual reference article does not mention the 3rd and 4th quadrature types, corresponding to $$k=3$$ and $$k=4$$, respectively). If there are unremovable singularities at the endpoints, this function with $$k=1$$, or legendre_quadrature, is preferred.
hermite_quadrature
Domain: $$[-\infty,\infty]$$. Weight: $$e^{-x^2}$$.
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• k determines the problem being solved (whether $$e^{-x^2}$$ is assumed to be part of the integrand ($$k=2$$) or not ($$k=1$$)).
Notes: uses Gauss-Hermite quadrature. Only use this if your integration domain is $$[-\infty,\infty]$$ and your integrand rapidly goes to zero as the absolute value of $$x$$ gets larger.
jacobi_quadrature
Domain: $$[a,b]$$ (nodes $$-1\leq x \leq 1$$). Weight: $$(1-x)^{\alpha}(1+x)^{\beta}$$.
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• α and β are parameters of the weighting function.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Gauss-Jacobi quadrature. After controlling for N, it is one of the slowest quadrature methods. A condition of Gauss-Jacobi quadrature is that $$\alpha, \beta \gt -1$$. When $$\alpha = \beta = 0$$, it reduces to legendre_quadrature.
laguerre_quadrature
Domain: $$[0,\infty]$$. Weight: $$e^{-x}$$.
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• k determines the problem being solved (whether $$e^{-x}$$ is assumed to be part of the integrand ($$k=2$$) or not ($$k=1$$)).
Notes: uses Gauss-Laguerre quadrature. Only use this if your integration domain is $$[0,\infty]$$ and your integrand rapidly goes to $$0$$ as $$x$$ gets larger.
legendre_quadrature
Domain: $$[a,b]$$ (nodes $$-1\leq x \leq 1$$). Weight: $$1$$.
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Gauss-Legendre quadrature. Generally, this is the best _quadrature function to go with when you are otherwise unsure which to choose.
lobatto_quadrature
Domain: $$[a,b]$$ (nodes $$-1\leq x \leq 1$$). Weight: $$1$$.
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Gauss-Lobatto quadrature. This function includes, in the calculation, the values of the integrand at both endpoints. Consequently, if there are unremovable singularities at the endpoints, this function may fail to give an accurate result even if you adjust the endpoints slightly to avoid the singularities.
radau_quadrature
Domain: $$[a,b]$$ (nodes $$-1\leq x \leq 1$$). Weight: $$1$$.
Arguments:
• f is the function being integrated.
• N is the number of grid points.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Gauss-Radau quadrature, for which there is no Wikipedia article; the best (simplest) reference I could find on it is a set of lecture notes. This function includes, in the calculation, the value of the integrand at one of the endpoints. Consequently, if there is an unremovable singularity at that endpoint, this function will fail to give an accurate result even if you adjust the endpoints slightly to avoid the singularity.
rectangle_rule_left
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N is the number of grid points used in the integration.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses the rectangle rule, specifically the left Riemann sum. Usually this or rectangle_rule_right is the least accurate method. In fact, many of the tests in the FunctionIntegrator.jl repository fail to reach 7-significant-figure accuracy with rectangle_rule_left for any practically viable value of N.
rectangle_rule_midpoint
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N is the number of grid points used in the integration.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses the rectangle rule, specifically the Riemann midpoint rule. Usually this is more accurate than rectangle_rule_left and rectangle_rule_right, and it sometimes rivals trapezoidal_rule for accuracy. Interestingly, going by my Travis tests, it appears to be even more efficient than simpsons_rule.
rectangle_rule_right
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N is the number of grid points used in the integration.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses the rectangle rule, specifically the right Riemann sum. Usually this or rectangle_rule_left is the least accurate method. In fact, many of the tests in the FunctionIntegrator.jl repository fail to reach 7-significant-figure accuracy with rectangle_rule_right for any practically viable value of N.
rombergs_method
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N is equal to $$n$$ in the article describing the method.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Romberg's method. If your PC has only 16 GB RAM, you should not exceed $$N=30$$, as otherwise you will likely run out of RAM. For most problems, $$N\leq20$$ should suffice.
simpsons_rule
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N: $$N+1$$ is the number of grid points, if endpoints are included. $$N$$ must be even; otherwise this function will throw an error.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Simpson's rule. It is one of the best functions to use when you are unsure which to use, provided there are no unremovable singularities within the integration domain, including at the endpoints.
simpsons38_rule
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N: $$N+1$$ is the number of grid points, if endpoints are included. $$N$$ must be divisible by three; otherwise this function will throw an error.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses Simpson's 3/8 rule. It is less robust than simpsons_rule.
trapezoidal_rule
Domain: $$[a,b]$$. Weight: N/A.
Arguments:
• f is the function being integrated.
• N: $$N+1$$ is the number of grid points, if endpoints are included.
• a is the start of the domain of integration.
• b is the end of the domain of integration.
Notes: uses the trapezoidal rule. It has the same caveats as simpsons_rule.

Notes:

1. rectangle_rule_left, rectangle_rule_midpoint and rectangle_rule_right are not included because they failed to provide the required level of accuracy for many tests with all practically viable N values.

2. The $$-1\leq x \leq 1$$ in the domain column refers to the quadrature nodes, which are what $$x$$ refers to in the weighting-function column.
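To make the grid-point conventions in the table concrete, here are minimal pure-Julia sketches of the composite trapezoidal and Simpson's rules. These are illustrative only, not the package's code, and the names trapezoid and simpson_composite are made up here; both use $$N$$ subintervals, i.e. $$N+1$$ grid points including the endpoints.

```julia
# Composite trapezoidal rule on [a, b] with N subintervals.
function trapezoid(f, N, a, b)
    h = (b - a) / N
    s = (f(a) + f(b)) / 2      # endpoints get half weight
    for i in 1:N-1
        s += f(a + i * h)      # interior points get full weight
    end
    h * s
end

# Composite Simpson's rule on [a, b]; N must be even.
function simpson_composite(f, N, a, b)
    iseven(N) || error("N must be even")
    h = (b - a) / N
    s = f(a) + f(b)
    for i in 1:N-1
        # Weights alternate 4, 2, 4, 2, ... at the interior points.
        s += (isodd(i) ? 4 : 2) * f(a + i * h)
    end
    h / 3 * s
end
```

For a smooth integrand such as cos on $$[0, \pi/2]$$, the trapezoidal error shrinks like $$N^{-2}$$ while Simpson's shrinks like $$N^{-4}$$, which matches the gap between the two rules seen in the examples below.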

Examples

In the following example, the integral below (henceforth called integral 1) is computed:

$\int_0^{\frac{\pi}{2}} \cos{x} \hspace{0.1cm} dx$

and the result compared to the analytical solution of $$1$$.

using FunctionIntegrator
a = abs(adaptive_simpsons_rule(x -> cos(x), 0, pi/2)-1);
show(a)
5.160893934430533e-11

In the following examples, integral 1 is computed and the result compared to the analytical solution of $$1$$. The value printed is the difference between the computed solution and the analytical solution.

$$k=1$$:

using FunctionIntegrator
a = abs(chebyshev_quadrature(x -> cos(x), 1000, 1, 0, pi/2) - 1);
show(a)
3.229819229844111e-7

$$k=2$$:

using FunctionIntegrator
a = abs(chebyshev_quadrature(x -> cos(x), 1000, 2, 0, pi/2) - 1);
show(a)
6.446739609922147e-7

$$k=3$$:

using FunctionIntegrator
a = abs(chebyshev_quadrature(x -> cos(x), 1000, 3, 0, pi/2) - 1);
show(a)
6.453189421717909e-7

$$k=4$$:

using FunctionIntegrator
a = abs(chebyshev_quadrature(x -> cos(x), 1000, 4, 0, pi/2) - 1);
show(a)
3.226596516636704e-7
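The $$k=1$$ case above can be mimicked with a short, self-contained sketch: the Chebyshev-Gauss nodes have the closed form $$x_i = \cos\!\left(\frac{(2i-1)\pi}{2N}\right)$$ with equal weights $$\frac{\pi}{N}$$, and multiplying the integrand by $$\sqrt{1-x^2}$$ cancels the weight function so a plain integral can be computed. The function name chebyshev1 is made up here; this is not the package's code.

```julia
# Chebyshev-Gauss (k = 1) quadrature of f on [a, b] (illustrative).
function chebyshev1(f, N, a, b)
    s = 0.0
    for i in 1:N
        x = cos((2i - 1) * pi / (2N))        # node in (-1, 1)
        t = (b - a) / 2 * x + (a + b) / 2    # map node to [a, b]
        s += f(t) * sqrt(1 - x^2)            # cancel the 1/sqrt(1-x^2) weight
    end
    (b - a) / 2 * pi / N * s
end
```

With N = 1000 on integral 1, this sketch lands in the same error ballpark as the package's chebyshev_quadrature output shown above.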

In the following examples, the integral:

$\int_{-\infty}^{\infty} x^2 e^{-x^2} dx,$

whose exact solution is $$\dfrac{\sqrt{\pi}}{2}$$, is approximated using hermite_quadrature and the result compared to the analytical solution. The plot of the integrand (truncated to the domain $$x\in[-10,10]$$) is generated by:

using PyPlot
x = LinRange(-10,10,1001);
clf()
y = (x.^2).*exp.(-x.^2);
PyPlot.plot(x,y)
PyPlot.savefig(joinpath(@OUTPUT, "hermite_plot.png"), dpi=80)

$$k=1$$:

using FunctionIntegrator
a = abs(hermite_quadrature(x -> x^2*exp(-x^2), 100, 1)-sqrt(pi)/2)
show(a)
1.5543122344752192e-15

$$k=2$$:

using FunctionIntegrator
a = abs(hermite_quadrature(x -> x^2, 100, 2)-sqrt(pi)/2)
show(a)
1.5543122344752192e-15

In this example, we will approximate integral 1 and compare the result to the analytical result of $$1$$. For this example, we are setting both $$\alpha$$ and $$\beta$$ to 1. (The MethodError below appears to come from FastGaussQuadrature's Gauss-Jacobi routine, which expects floating-point $$\alpha$$ and $$\beta$$; passing 1.0 instead of the integer 1 should avoid it.)

using FunctionIntegrator
a = abs(jacobi_quadrature(x -> cos(x), 1000, 1, 1, 0, pi/2)-1);
show(a)
MethodError: no method matching asy1(::Int64, ::Int64, ::Int64, ::Int64)
Closest candidates are:
asy1(::Integer, !Matched::Float64, !Matched::Float64, ::Integer) at /home/runner/.julia/packages/FastGaussQuadrature/BRLTf/src/gaussjacobi.jl:200

In this section, the integral:

$\int_0^{\infty} xe^{-x} dx$

is approximated and the result compared to the analytical result of 1. The integrand has the following curve:

using PyPlot
x = LinRange(0,15,1001);
clf()
y = x.*exp.(-x);
PyPlot.plot(x,y)
PyPlot.savefig(joinpath(@OUTPUT, "laguerre_plot.png"), dpi=80)

$$k=1$$:

using FunctionIntegrator
a = abs(laguerre_quadrature(x -> x*exp(-x), 100, 1)-1)
show(a)
3.1086244689504383e-15

$$k=2$$:

using FunctionIntegrator
a = abs(laguerre_quadrature(x -> x, 100, 2)-1)
show(a)
3.1086244689504383e-15

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(legendre_quadrature(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
0.0

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(lobatto_quadrature(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
8.881784197001252e-16

In this section, integral 1 is being approximated using radau_quadrature and the result compared to the analytical result.

using FunctionIntegrator
a = abs(radau_quadrature(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
3.3306690738754696e-16

rectangle_rule_left

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(rectangle_rule_left(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
0.00078519254663445

rectangle_rule_midpoint

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(rectangle_rule_midpoint(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
1.0280839091159066e-7

rectangle_rule_right

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(rectangle_rule_right(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
0.000785603780161015

rombergs_method

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(rombergs_method(x -> cos(x), 10, 0, pi/2)-1)
show(a)
4.440892098500626e-16
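Romberg's method itself is compact enough to sketch in pure Julia (illustrative only; the name romberg is made up here). Row $$n$$ of the tableau refines the trapezoidal estimate to $$2^n$$ subintervals and then applies Richardson extrapolation, so the number of function evaluations doubles with each row; this is why large $$N$$ values quickly become impractical.

```julia
# Romberg's method on [a, b]; N is the number of refinement rows.
function romberg(f, N, a, b)
    R = zeros(N + 1, N + 1)
    h = b - a
    R[1, 1] = h / 2 * (f(a) + f(b))          # trapezoid, 1 subinterval
    for n in 1:N
        h /= 2
        # Trapezoid refinement: reuse R[n, 1], add only the new midpoints.
        R[n + 1, 1] = R[n, 1] / 2 +
                      h * sum(f(a + (2i - 1) * h) for i in 1:2^(n - 1))
        # Richardson extrapolation across the row.
        for m in 1:n
            R[n + 1, m + 1] = R[n + 1, m] + (R[n + 1, m] - R[n, m]) / (4^m - 1)
        end
    end
    R[N + 1, N + 1]
end
```

With N = 10 on integral 1, this sketch reaches machine precision, consistent with the package's rombergs_method result above.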

simpsons_rule

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(simpsons_rule(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
8.215650382226158e-15

simpsons38_rule

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(simpsons38_rule(x -> cos(x), 1002, 0, pi/2)-1)
show(a)
8.08242361927114e-14

trapezoidal_rule

In this section, integral 1 is being approximated and the result compared to the analytical result.

using FunctionIntegrator
a = abs(trapezoidal_rule(x -> cos(x), 1000, 0, pi/2)-1)
show(a)
2.0561676405961293e-7

Error analysis

Code

using FunctionIntegrator, PyPlot
nc = round.(exp.(1/4*log(10)*(1:24)));
chebyshev1_error         = zeros(length(nc));
chebyshev2_error         = zeros(length(nc));
chebyshev3_error         = zeros(length(nc));
chebyshev4_error         = zeros(length(nc));
jacobi_error             = zeros(length(nc));
laguerre_error           = zeros(length(nc));
legendre_error           = zeros(length(nc));
lobatto_error            = zeros(length(nc));
rectangle_left_error     = zeros(length(nc));
rectangle_midpoint_error = zeros(length(nc));
rectangle_right_error    = zeros(length(nc));
rombergs_error           = zeros(30);
simpsons_error           = zeros(length(nc));
simpsons38_error         = zeros(length(nc));
trapezoidal_error        = zeros(length(nc));

for i=1:length(nc)
chebyshev1_error[i]         = abs(chebyshev_quadrature(x -> sech(x), nc[i], 1, 0, 100)-pi/2);
chebyshev2_error[i]         = abs(chebyshev_quadrature(x -> sech(x), nc[i], 2, 0, 100)-pi/2);
chebyshev3_error[i]         = abs(chebyshev_quadrature(x -> sech(x), nc[i], 3, 0, 100)-pi/2);
chebyshev4_error[i]         = abs(chebyshev_quadrature(x -> sech(x), nc[i], 4, 0, 100)-pi/2);
jacobi_error[i]             = abs(jacobi_quadrature(x -> sech(x), nc[i], 1, 1, 0, 100)-pi/2);
laguerre_error[i]           = abs(laguerre_quadrature(x -> sech(x), nc[i], 1)-pi/2);
legendre_error[i]           = abs(legendre_quadrature(x -> sech(x), nc[i], 0, 100)-pi/2);
lobatto_error[i]            = abs(lobatto_quadrature(x -> sech(x), nc[i], 0, 100)-pi/2);
rectangle_left_error[i]     = abs(rectangle_rule_left(x -> sech(x), nc[i], 0, 100)-pi/2);
rectangle_midpoint_error[i] = abs(rectangle_rule_midpoint(x -> sech(x), nc[i], 0, 100)-pi/2);
rectangle_right_error[i]    = abs(rectangle_rule_right(x -> sech(x), nc[i], 0, 100)-pi/2);
simpsons_error[i]           = abs(simpsons_rule(x -> sech(x), 2*round(nc[i]/2), 0, 100)-pi/2);
simpsons38_error[i]         = abs(simpsons38_rule(x -> sech(x), 3*round(nc[i]/3), 0, 100)-pi/2);
trapezoidal_error[i]        = abs(trapezoidal_rule(x -> sech(x), nc[i], 0, 100)-pi/2);
end

for i=1:30
rombergs_error[i]           = abs(rombergs_method(x -> sech(x), i, 0, 100)-pi/2);
end

PyPlot.figure(1)
PyPlot.clf()
PyPlot.plot(nc, chebyshev1_error);
PyPlot.title("Chebyshev k=1")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "chebyshev1_error_plot.png"), dpi=80)
PyPlot.figure(2)
PyPlot.clf()
PyPlot.plot(nc, chebyshev2_error);
PyPlot.title("Chebyshev k=2")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "chebyshev2_error_plot.png"), dpi=80)
PyPlot.figure(3)
PyPlot.clf()
PyPlot.plot(nc, chebyshev3_error);
PyPlot.title("Chebyshev k=3")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "chebyshev3_error_plot.png"), dpi=80)
PyPlot.figure(4)
PyPlot.clf()
PyPlot.plot(nc, chebyshev4_error)
PyPlot.title("Chebyshev k=4")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "chebyshev4_error_plot.png"), dpi=80)
PyPlot.figure(5)
PyPlot.clf()
PyPlot.plot(nc, jacobi_error)
PyPlot.title("Jacobi")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "jacobi_error_plot.png"), dpi=80)
PyPlot.figure(6)
PyPlot.clf()
PyPlot.plot(nc, laguerre_error)
PyPlot.title("Laguerre")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "laguerre_error_plot.png"), dpi=80)
PyPlot.figure(7)
PyPlot.clf()
PyPlot.plot(nc, legendre_error)
PyPlot.title("Legendre")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "legendre_error_plot.png"), dpi=80)
PyPlot.figure(8)
PyPlot.clf()
PyPlot.plot(nc, lobatto_error)
PyPlot.title("Lobatto")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "lobatto_error_plot.png"), dpi=80)
PyPlot.figure(10)
PyPlot.clf()
PyPlot.plot(nc, rectangle_left_error)
PyPlot.title("Rectangle rule (left)")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "rectangle_left_error_plot.png"), dpi=80)
PyPlot.figure(11)
PyPlot.clf()
PyPlot.plot(nc, rectangle_midpoint_error)
PyPlot.title("Rectangle rule (midpoint)")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "rectangle_midpoint_error_plot.png"), dpi=80)
PyPlot.figure(12)
PyPlot.clf()
PyPlot.plot(nc, rectangle_right_error)
PyPlot.title("Rectangle rule (right)")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "rectangle_right_error_plot.png"), dpi=80)
PyPlot.figure(13)
PyPlot.clf()
PyPlot.plot(1:30, rombergs_error)
PyPlot.title("Romberg's method")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "rombergs_error_plot.png"), dpi=80)
PyPlot.figure(14)
PyPlot.clf()
PyPlot.plot(2*round.(nc/2), simpsons_error)
PyPlot.title("Simpson's rule")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "simpsons_error_plot.png"), dpi=80)
PyPlot.figure(15)
PyPlot.clf()
PyPlot.plot(3*round.(nc/3), simpsons38_error)
PyPlot.title("Simpson's 3/8 rule")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "simpsons38_error_plot.png"), dpi=80)
PyPlot.figure(16)
PyPlot.clf()
PyPlot.plot(nc, trapezoidal_error)
PyPlot.title("Trapezoidal rule")
xscale("log")
xlabel("N")
yscale("log")
ylabel("Error")
PyPlot.savefig(joinpath(@OUTPUT, "trapezoidal_error_plot.png"), dpi=80)