Javad Abdalkhani

Picard's iteration is used to find the analytical solutions of some Abel-Volterra equations. For many equations, the integrals involved in Picard's iteration cannot be evaluated. The author approximates the solutions of those equations employing a semi-implicit product midpoint rule. The Aitken extrapolation is used to accelerate the convergence of both methods. A blow-up model, a wave-propagation equation, and a superfluidity equation are solved to show the practicality of the methods. The programs offered solve the general equations; a user needs only to enter the particular forcing and kernel functions, constants, and a step size for a particular problem.

Introduction

Abel-Volterra equations are normally represented by

(1)   u(t) = g(t) + ∫_0^t K(t, s, u(s)) / (t - s)^α ds,   0 ≤ t ≤ T,  0 ≤ α < 1.

Equation (1) is called regular if α = 0 and weakly singular (or of Abel type) if 0 < α < 1. Equation (1) is linear if K(t, s, u) is linear in u; otherwise, it is nonlinear. In most practical applications, α is either 0 or 1/2. See [1-4] for conditions on the existence, uniqueness, and continuity of a solution of equation (1). To solve equation (1) analytically, one normally employs the Picard method, a method of successive iterations, given by

(2)   u_0(t) = g(t),
(3)   u_{n+1}(t) = g(t) + ∫_0^t K(t, s, u_n(s)) / (t - s)^α ds,   n = 0, 1, 2, ….

From a theoretical point of view, the successive iterations given by equations (2) and (3) always converge for linear equations; see theorem 10.15, page 152 of [5] and pages 92-95 of [4]; see also solutions of some integral equations by the Picard method in [2]. In practice, the convergence of successive iterations depends on the computability of the corresponding integrals in equation (3). One might be able to evaluate the integrals in some cases, and successive iterations may also be effective for some nonlinear equations. In what follows, we first introduce a simple program that implements the successive iterations and solve two examples using it. For many integral equations whose exact solutions cannot be found by the Picard method, we approximate the analytical solution using a semi-implicit product midpoint rule. Two practical examples are solved to test the validity of this numerical approach.
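As a concrete illustration of the iteration in equations (2) and (3), the following Python sketch (the article's own programs are in Mathematica) applies successive approximations to a regular test equation, u(t) = 1 + ∫_0^t u(s) ds, whose exact solution is e^t; the kernel, forcing function, and grid here are our own choices, not the article's.

```python
# Picard iteration for a regular Volterra equation (alpha = 0):
#   u(t) = 1 + \int_0^t u(s) ds,  exact solution u(t) = e^t.

def picard_iteration(g, K, t_grid, n_iters):
    """u_{n+1}(t) = g(t) + \int_0^t K(t, s, u_n(s)) ds,
    with the integral computed by the trapezoidal rule on t_grid."""
    h = t_grid[1] - t_grid[0]
    u = [g(t) for t in t_grid]                  # u_0 = g
    for _ in range(n_iters):
        new_u = []
        for i, t in enumerate(t_grid):
            vals = [K(t, t_grid[j], u[j]) for j in range(i + 1)]
            integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
            new_u.append(g(t) + integral)
        u = new_u
    return u

t_grid = [i * 0.01 for i in range(101)]         # [0, 1] with step 0.01
u = picard_iteration(lambda t: 1.0, lambda t, s, y: y, t_grid, 30)
print(u[-1])   # close to e = 2.71828...
```

For linear equations such as this one, the iterates are guaranteed to converge, in agreement with the theoretical results cited above.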

Picard's Iteration

The following program implements equations (2) and (3).

One needs only to introduce the kernel Ker, the forcing function g, and the real value α to solve the corresponding equation.

Statement and Solution of Example 1

Example 1

To solve the equation

(4)

note that

(5)

where B(x, y) is the well-known beta function defined by

(6)   B(x, y) = ∫_0^1 s^(x-1) (1 - s)^(y-1) ds = Γ(x) Γ(y) / Γ(x + y),

and Γ is the gamma function.
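The identity in equation (6) can be checked numerically; the following Python sketch (illustrative only; the values x = 2, y = 3 are our own choice) compares the integral definition with the gamma-function formula.

```python
import math

# Check B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y) for x = 2, y = 3.

def beta_via_gamma(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def beta_via_integral(x, y, n=100000):
    # Midpoint rule for \int_0^1 s^(x-1) (1-s)^(y-1) ds (fine for x, y >= 1).
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (x - 1) * (1 - (i + 0.5) * h) ** (y - 1)
                   for i in range(n))

print(beta_via_gamma(2, 3), beta_via_integral(2, 3))   # both ~ 1/12
```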

Then it is easy to verify that the outputs produced by the Picard method for this example can be written as

(7)

where the constants are those produced by the iteration.

Now substitute the corresponding kernel, forcing function, and the value of α. The evaluation takes a minute or two.

This graph shows the convergence of the iterates u_n(t) as n increases.
Nonlinear Accelerators to Speed Up the Convergence

Aitken's method accelerates the convergence [6], but accelerators like this one are numerically highly unstable. One has to pay careful attention (in particular to division by zero) when working with such accelerators, and it is important to use high precision. The following program accelerates Picard's iteration using the already developed PicardIteration program.
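The Δ² transform, together with the guard against vanishing denominators mentioned above, can be sketched as follows (in Python rather than the article's Mathematica; the demonstration sequence, the partial sums of the alternating series for ln 2, is our own choice).

```python
# Aitken's delta-squared acceleration of a scalar sequence, with a guard
# against the near-zero denominators that make such accelerators unstable.

def aitken(seq):
    """Return the Aitken-accelerated sequence of seq."""
    out = []
    for i in range(len(seq) - 2):
        d1 = seq[i + 1] - seq[i]
        d2 = seq[i + 2] - 2 * seq[i + 1] + seq[i]
        # Guard: if the second difference is (nearly) zero, keep the raw term.
        out.append(seq[i] if abs(d2) < 1e-30 else seq[i] - d1 * d1 / d2)
    return out

partial = []
s = 0.0
for n in range(1, 13):
    s += (-1) ** (n + 1) / n       # 1 - 1/2 + 1/3 - ...  ->  ln 2
    partial.append(s)

accelerated = aitken(aitken(partial))   # apply the transform twice
print(partial[-1], accelerated[-1])
```

Twelve raw partial sums are still off in the second decimal place, while the twice-accelerated sequence agrees with ln 2 to several digits.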

The analytic solution of equation (4) is known (see [3]). Use it to compare with the result of the Aitken accelerator.

Define res1 and plot the error.

This graph shows the error in applying the accelerated Picard's method to equation (4). We note that the accelerated sequence uses only the first few iterates and yet provides an excellent approximation to the exact solution of equation (4).

Equation (7) can be used to find u_n(t) for very large values of n. This equation was obtained by analyzing the corresponding Picard iteration. The same type of analysis will be used for our next, nonlinear example.

Method Analysis for Example 2

Example 2

We solve a nonlinear blow-up model,

(8)

Equation (8) is mentioned as a blow-up model on p. 417 of [7]. The analytical solution blows up at a finite time; that is, there is a finite time T_b such that

(9)   lim_{t → T_b⁻} u(t) = ∞.

Our goal is to find the value of the blow-up time T_b as accurately as possible.

Applying a few steps of Picard's iteration to equation (8) shows that the solution is of the form

(10)

which implies that

(11)

Make a change of variable in equation (8), and substitute from equations (10) and (11), respectively, on both sides of equation (8). We obtain

(12)

where

(13)

Equating coefficients of equal powers on both sides, we get

(14)

Note that

(15)

and that the right-hand side is a power series. Therefore, the radius of convergence of the series given by equation (10) is approximately 0.897677, which is the blow-up time in equation (9).

All terms in equation (10) are positive, and the series is wildly divergent beyond 0.897677. For a series with all positive terms, nonlinear accelerators are not useful for evaluating the sum beyond the radius of convergence. For an alternating series, the situation is different: nonlinear extrapolations are normally quite effective in evaluating the sums of alternating divergent series for variable values far beyond the radius of convergence.
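To illustrate this point, the following Python sketch (our own toy example, not the article's series) applies Aitken's transform to the partial sums of the divergent alternating geometric series Σ (-x)^n at x = 2; the transform recovers the analytic continuation 1/(1 + x) = 1/3.

```python
# Aitken's transform applied to a divergent alternating geometric series.
# The partial sums of sum_{n>=0} (-2)^n oscillate wildly, yet the transform
# returns 1/(1 + 2) = 1/3, the value of the analytic continuation.

def aitken_step(s0, s1, s2):
    d2 = s2 - 2 * s1 + s0
    return s0 if d2 == 0 else s0 - (s1 - s0) ** 2 / d2

x = 2.0
partial = []
s, term = 0.0, 1.0
for _ in range(8):
    s += term
    partial.append(s)
    term *= -x

acc = [aitken_step(*partial[i:i + 3]) for i in range(len(partial) - 2)]
print(partial[:4], acc[:4])   # partial sums oscillate; acc is 1/3 throughout
```

Because the error of a geometric partial sum is itself exactly geometric, a single Aitken step removes it completely; for a positive-term divergent series no such cancellation is available, which is the distinction drawn above.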

We evaluate

(16)

for large values of N, where the coefficients are given by equation (14).

Solution of Example 2

For the following tables, the execution time is provided for two representative cases.

These graphs clearly demonstrate that u(t) → ∞ as t approaches the blow-up time.

The following program implements the accelerated Picard method for example 2. All terms in equation (16) are positive, so the acceleration is not as helpful as in example 1, where the series was alternating.

The accelerated sequence uses only the first few partial sums and yet demonstrates the blow-up phenomenon.

Numerical Approximation

For most integral equations, Picard iteration is not practical, and we instead approximate the solution of equation (1) using a semi-implicit product midpoint rule. To describe the method, we start by subdividing the interval of integration into equal subintervals of step size h. Equation (1) becomes

(17)

note that u(0) = g(0). For n = 1, we only have the subinterval [0, h]. Let the middle variable of the kernel function K be the midpoint of this interval, h/2. Let the third variable of K be the midpoint of u(0) and u(h), that is, (u(0) + u(h))/2, and recall that u(0) = g(0). The denominator of the integral that contains the singularity stays as is, so

(18)   u(h) = g(h) + K(h, h/2, (u(0) + u(h))/2) ∫_0^h (h - s)^(-α) ds = g(h) + K(h, h/2, (u(0) + u(h))/2) h^(1-α)/(1 - α).

We solve equation (17) for u(h) using Mathematica's built-in function FindRoot with an initial guess of u(0) = g(0). The integral that contains the singularity is evaluated exactly and therefore does not introduce any inaccuracy into the method, which is why the words "product rule" appear in the name of this technique; see [8] for more details on product integration. Also, at each step only the very last unknown (in this case u(h)) needs to be found, which is why the name "semi-implicit" is used. We also use the midpoints of the intervals; hence the term "semi-implicit product midpoint rule." Now let n = 2 to get

(19)

Or

(20)

Then equation (19) is solved to find an approximation for u(2h) using FindRoot with the initial guess u(h) from the previous step. Continuing the same procedure, we arrive at the following program.

A Program for the Semi-implicit Product Midpoint Rule and Its Corresponding Extrapolation to Approximate the Solution of Equation (1)
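As an illustrative sketch of the rule (the article's implementation is in Mathematica and uses FindRoot; here the implicit scalar equation at each step is solved by fixed-point iteration, and the test kernel, forcing function, and α are our own choices):

```python
# A sketch of the semi-implicit product midpoint rule for
#   u(t) = g(t) + \int_0^t K(t, s, u(s)) / (t - s)^alpha ds.
# Test problem (our choice): alpha = 1/2, K(t, s, y) = y, and
# g(t) = 1 - 2 sqrt(t), so that the exact solution is u(t) = 1.

def product_midpoint(g, K, alpha, h, n_steps):
    u = [g(0.0)]                                # u(0) = g(0)
    for n in range(1, n_steps + 1):
        tn = n * h
        # Exact weights w_j = \int_{t_{j-1}}^{t_j} (tn - s)^(-alpha) ds.
        w = [((tn - (j - 1) * h) ** (1 - alpha) - (tn - j * h) ** (1 - alpha))
             / (1 - alpha) for j in range(1, n + 1)]
        # Contributions of subintervals whose endpoint values are known.
        known = sum(w[j - 1] * K(tn, (j - 0.5) * h, 0.5 * (u[j - 1] + u[j]))
                    for j in range(1, n))
        # Semi-implicit step: only the last value is unknown; fixed point.
        un = u[-1]                              # initial guess: previous step
        for _ in range(50):
            un_new = g(tn) + known + w[-1] * K(tn, (n - 0.5) * h,
                                               0.5 * (u[-1] + un))
            if abs(un_new - un) < 1e-12:
                break
            un = un_new
        u.append(un)
    return u

u = product_midpoint(lambda t: 1 - 2 * t ** 0.5, lambda t, s, y: y,
                     0.5, 0.01, 100)
print(u[-1])   # ~ 1, the exact solution
```

Because the singular factor is integrated exactly (the weights w_j), the scheme reproduces this constant solution essentially to machine precision, which mirrors the "product integration" point made above.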

Solution of Example 3

Example 3

The nonlinear integral equation

(21)

arises in the theory of superfluidity [9].

We use the semi-implicit product midpoint rule program to approximate the solution of equation (21).

Graph of the superfluidity equation with no acceleration and a step size of 0.01.

Graph of the superfluidity equation with acceleration and a step size of 0.01.

This graph shows the difference between the accelerated and non-accelerated methods for the superfluidity equation. The difference is so small that iterating the extrapolation once more is not sensible: the execution time goes up exponentially, and not much is gained in accuracy.

For our next example, we were able to find the corresponding analytical solution. Therefore, we can demonstrate the efficiency of our method by comparing the numerical solution with the exact solution.

Before solving example 4, we prove the following theorem, which gives the analytic solution for a class of inhomogeneous equations.

Theorem 1

The equation
(22)
is equivalent to the differential equation
(23)

Proof

From the general theory of the Abel equations ([2], p. 224), if
(24)
then
(25)
Equation (22) can be written as
(26)
Using equations (24), (25), and (26), we get
(27)
Now on the right side of equation (27), replace the integral by its expression from equation (26) to get the desired result, equation (23).

We now use Theorem 1 to find the exact solution of wave propagation for example 4 and test the efficiency of our midpoint rule and its extrapolated version.

Example 4

The equation

(28)

represents wave propagation over a flat surface (see p. 229 and p. 235, exercise 3 of [2]). Using Theorem 1, the exact solution of equation (28) can be obtained as

(29)
Solution of Example 4
Solving equation (28) with our midpoint rule and an Aitken extrapolation with a given step size, we obtain approximations whose maximum absolute errors are reported below.

Example 4 is the three-dimensional case; therefore, the program for the general midpoint rule given for equation (1) must be adjusted slightly.

This plots the error in applying the midpoint rule; the maximum absolute error is of order 0.001.

This plots the error in applying the Aitken extrapolation with the same step size; the maximum absolute error is considerably smaller than that of the unaccelerated rule.

Conclusion

We studied two methods: Picard's iteration and a semi-implicit product midpoint rule. The Aitken extrapolation was used to accelerate the convergence of these methods, and in some cases it demonstrated a significant improvement. A blow-up model, a wave-propagation equation, and a superfluidity equation were solved, and the efficiency and practicality of the methods were established. The user-friendly programs created here solve the general equations; one only needs to enter a forcing function, a kernel function, the value of α, and a step size for a particular problem. We used Mathematica 10.2 on the Mac OS X operating system with 16 GB RAM and a 2.8 GHz processor. The execution time was always under five minutes, and the vast majority of problems executed in less than five seconds.

Acknowledgments

I am grateful for constructive suggestions by a reviewer, resulting in more transparent coding.

Dedication

The author dedicates his work to Mahshid, Arman, and Ida: his wife, son, and daughter.

References

[1] W. Hackbusch, Integral Equations: Theory and Numerical Treatment, Boston: Birkhäuser Verlag, 1995.
[2] R. P. Kanwal, Linear Integral Equations, 2nd ed., Boston: Birkhäuser, 1997.
[3] R. K. Miller, Nonlinear Volterra Integral Equations, Menlo Park, CA: W. A. Benjamin, Inc., 1971.
[4] A. Pipkin, A Course on Integral Equations, New York: Springer-Verlag, 1991.
[5] R. Kress, Linear Integral Equations, Berlin: Springer-Verlag, 1989.
[6] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, 2nd ed., New York: Springer, 1996.
[7] H. Brunner, Collocation Methods for Volterra and Related Functional Equations, Cambridge: Cambridge University Press, 2004.
[8] P. Linz, Analytical and Numerical Methods for Volterra Equations, Philadelphia: SIAM, 1985.
[9] N. Levinson, "A Nonlinear Volterra Equation Arising in the Theory of Superfluidity," Journal of Mathematical Analysis and Applications, 1(1), 1960 pp. 1-11. doi:10.1016/0022-247X(60)90028-7.
J. Abdalkhani, "Exact and Approximate Solutions of the Abel-Volterra Equations," The Mathematica Journal, 2016. dx.doi.org/doi:10.3888/tmj.18-2.

About the Author

Javad Abdalkhani is an associate professor of mathematics at the Ohio State University, Lima campus and a Distinguished Alumni teacher at the Ohio State University. His area of research is numerical analysis. His hobbies are reading and cycling.

Javad Abdalkhani
Department of Mathematics
The Ohio State University, Lima
4240 Campus Drive
Lima, Ohio 45804
abdalkhani.1@osu.edu