The Mathematica Journal
Volume 9, Issue 2

Integral Equations
Stan Richardson

Introductory Example

As a first example, we consider the integral equation discussed by Kress [1, 2], namely

$$\varphi(t) - \frac{1}{2}\int_0^1 (t+1)\,e^{-ts}\,\varphi(s)\,ds = e^{-t} - \frac{1}{2} + \frac{1}{2}\,e^{-(t+1)}, \qquad 0 \le t \le 1,$$

which has the exact solution $\varphi(t) = e^{-t}$. Kress quotes errors only at the points $t = 0$, $1/4$, $1/2$, $3/4$, and $1$. Among the numerical methods and computations he describes for this example, the largest of these errors is smallest for Nyström's method using Simpson's rule with 16 intervals.

A classical method of solving such problems is to use iteration. Beginning with some suitable initial function $\varphi_0$, we construct a sequence of functions $\varphi_1, \varphi_2, \ldots$ by

$$\varphi_{n+1}(t) = \frac{1}{2}\int_0^1 (t+1)\,e^{-ts}\,\varphi_n(s)\,ds + e^{-t} - \frac{1}{2} + \frac{1}{2}\,e^{-(t+1)}, \qquad n = 0, 1, 2, \ldots$$

For this example, the iterative scheme converges for any continuous $\varphi_0$, and we choose a simple starting function, setting this as
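(a minimal sketch; the zero function is assumed here, though any continuous choice would serve, and the name phi is illustrative):

phi[t_] = 0;   (* initial approximation: take phi_0 identically zero *)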

For comparison purposes, we also have the exact solution
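(a sketch of the corresponding definition, under the illustrative name exact):

exact[t_] = Exp[-t];   (* the exact solution phi(t) = e^(-t) quoted above *)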

and can keep track of the error using
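(a plausible definition, measuring the error as the difference between the current iterate and the exact solution; the name error is again illustrative):

error[t_] := phi[t] - exact[t]   (* pointwise error of the current approximation *)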

It is useful to have a visual record of this error, so we introduce
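(a sketch of such a definition; the automatically chosen PlotRange of this plot is exploited by errorvalue later):

errorplot := Plot[error[t], {t, 0, 1}]   (* error of the current approximation on [0, 1] *)

errorplot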

At this stage, our approximate solution is not very impressive.

With this example, it is theoretically possible to obtain analytic expressions for the functions $\varphi_n$ but, as this is not feasible for more general examples, we use a numerical approach and adopt the following procedure. With $\varphi_n$ known, we compute $\varphi_{n+1}$ from the iteration formula at a number of points in the interval $[0, 1]$, and make an InterpolatingPolynomial fitted through these points. As a first try, we use a uniform distribution of points on $[0, 1]$, and take just 11 such points. The table of values is given by
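(a sketch, using the kernel and right-hand side as reconstructed above; values is an illustrative name, and NIntegrate stands in for the quadrature used):

values = Table[
   {t, (1/2) NIntegrate[(t + 1) Exp[-t s] phi[s], {s, 0, 1}] +
      Exp[-t] - 1/2 + Exp[-(t + 1)]/2},
   {t, 0, 1, 1/10}];   (* 11 equally spaced points on [0, 1] *)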

and the improved approximation is then
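(again a sketch, fitting the polynomial through the tabulated values and installing it as the new approximation):

phi[t_] = InterpolatingPolynomial[values, t];   (* degree-10 polynomial through the 11 points *)

errorplot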

As our plot shows, we now have a better approximation.

We can combine all this into one step.
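One such combination, a sketch bundling the tabulation and refitting under the same assumptions as above (iterstep is the name used in what follows):

iterstep := Module[{values},
   values = Table[
      {t, (1/2) NIntegrate[(t + 1) Exp[-t s] phi[s], {s, 0, 1}] +
         Exp[-t] - 1/2 + Exp[-(t + 1)]/2},
      {t, 0, 1, 1/10}];
   (* refit the interpolating polynomial through the new values *)
   phi[t_] = InterpolatingPolynomial[values, t];]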

A little experimentation with iterstep and errorplot allows us to observe the convergence; once that is done, we know a suitable number of steps to take to obtain a satisfactory answer. For this first example, we demonstrate the convergence graphically in order to illustrate a number of features of our method. To accomplish this, we introduce errorvalue, which estimates the maximum error at each iteration from the adaptively determined PlotRange of errorplot.
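(a sketch of one realization; AbsoluteOptions reads back the plot range that Plot actually selected):

errorvalue := Max[Abs[(PlotRange /. AbsoluteOptions[errorplot, PlotRange])[[2]]]]
   (* maximum |error| estimated from the vertical extent of errorplot *)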

The convergence is nicely illustrated using LogListPlot, for which we need to load a package. In fact, other functions from the Graphics` package will be required later, so we use
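(loading the Graphics master package, the standard route to LogListPlot and its relatives in the version of Mathematica then current):

<< Graphics`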

We now perform 60 further iterations and plot the result.
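(a sketch; setting $DisplayFunction to Identity suppresses the 60 intermediate plots that errorvalue would otherwise display):

errors = Block[{$DisplayFunction = Identity},
   Table[iterstep; errorvalue, {60}]];
LogListPlot[errors]   (* maximum error against iteration number, logarithmic scale *)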

It is clear that we have now converged to an acceptable solution and that further iterations will produce no significant improvement. The remaining error is due solely to the interpolation process we have adopted: we have the best solution to our problem that can be represented in this form. As this makes clear, it is important that an appropriate interpolation routine be used. It is an instructive exercise to try using Interpolation with the present example, for this produces results that are vastly inferior, and later examples will require the use of more ingenuity in this respect.

We can show the distribution of the error in the solution we have obtained.
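(redisplaying the error plot for the converged approximation):

errorplot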

When comparing with the results recorded by Kress, remember that he quotes errors only at five specific points.

The error distribution here is typical of that obtained when using a uniform distribution of interpolation points: the maximum errors occur near the ends of the interval. This feature can be eliminated, and the maximum error reduced, by using a nonuniform distribution of interpolation points that places more of them near the ends. Here, we can achieve this via a nonlinear scaling of the $t$-axis.

Modifying iterstep to incorporate this, we now use
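(one possible form, a sketch in which cosine-spaced points replace the uniform ones; the precise scaling used in the original may differ):

iterstep := Module[{points, values},
   points = N[(1 - Cos[Pi Range[0, 10]/10])/2];   (* 11 points crowded toward t = 0 and t = 1 *)
   values = Map[Function[t,
      {t, (1/2) NIntegrate[(t + 1) Exp[-t s] phi[s], {s, 0, 1}] +
         Exp[-t] - 1/2 + Exp[-(t + 1)]/2}], points];
   phi[t_] = InterpolatingPolynomial[values, t];]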

and find that a few more iterations have indeed improved matters.



     