One of the main themes of this release has been to greatly improve the scalability and performance of numerical computations in Mathematica. In fact, Version 4 marks the first installment of the multi-release gigaNumerics project, intended to make Mathematica scale to very large computations.
One of the most visible enhancements to the treatment of numbers is the ability to deal efficiently with very high-precision numbers. For instance, computing a hundred thousand digits of a fundamental constant should take only a few seconds on most modern computers.
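To give a feel for how arbitrary-precision arithmetic makes such computations routine, here is a small Python sketch (Python's unbounded integers stand in for Mathematica's bignums; the helper name is illustrative) that computes a hundred exact digits of the square root of 2:

```python
from math import isqrt

def sqrt_digits(n, digits):
    """Decimal digits of sqrt(n), computed exactly with big integers."""
    # isqrt(n * 10^(2*digits)) equals floor(sqrt(n) * 10^digits).
    return str(isqrt(n * 10 ** (2 * digits)))

d = sqrt_digits(2, 100)
print(d[0] + "." + d[1:])   # 1.41421356...
```

The whole computation stays in integer arithmetic, so no rounding error ever enters; this is the same reason high-precision bignum computations can be both fast and exact.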
The key development making this possible is the use of asymptotically fast algorithms for multiplication as well as for evaluating constants and functions. Multiplication is a good example of algorithm development in Mathematica. In this case, four different algorithms are used: a schoolbook implementation for small numbers (O(n^2)); Karatsuba-based multiplication (O(n^(log2 3)), about O(n^1.585)); DFT-based multiplication (roughly O(n log n)); and interleaved Karatsuba and DFT-based multiplication. The reason there are several algorithms is that each one is optimal over some interval of precision. In the case of Karatsuba, one can visualize the amount of work saved.
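The Karatsuba trick is to replace the four half-size products of schoolbook multiplication with three, recovering the cross terms by subtraction. A minimal Python sketch of the idea (splitting in base 10 for readability; real implementations split on machine words):

```python
def karatsuba(x, y):
    """Karatsuba multiplication: 3 recursive products instead of 4."""
    if x < 10 or y < 10:                  # small numbers: plain multiply
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    hx, lx = divmod(x, 10 ** m)           # split x into high and low halves
    hy, ly = divmod(y, 10 ** m)
    a = karatsuba(hx, hy)                 # high * high
    b = karatsuba(lx, ly)                 # low * low
    c = karatsuba(hx + lx, hy + ly) - a - b   # cross terms from one product
    return a * 10 ** (2 * m) + c * 10 ** m + b

print(karatsuba(1234, 5678) == 1234 * 5678)   # True
```

Because each level does 3 subproblems of half the size, the total work is O(n^(log2 3)) rather than O(n^2).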
This displays the complexity of the algorithm for 0 to 6 levels of recursion (the number of recursions depends on the size of the input). The area of the fractal, which shrinks with each level, represents the work done by the Karatsuba method, while the whole square represents the work done by the regular schoolbook multiplication algorithm.
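Assuming the standard picture in which each recursion level replaces four quadrant products with three, the fraction of schoolbook work remaining after k levels is (3/4)^k, which can be tabulated directly:

```python
# At recursion level k, Karatsuba does 3^k subproblems of size n/2^k,
# i.e. 3^k * (n/2^k)^2 = (3/4)^k * n^2 work: a (3/4)^k fraction of schoolbook.
for k in range(7):
    print(k, (3 / 4) ** k)
```

The shrinking fractions mirror the shrinking area of the fractal as the recursion deepens.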
Apart from working with numbers as whole entities, there are enhancements that operate on the structure of numbers. For example, the decimal expansions of rational numbers are eventually periodic, as seen in the graph below.
Computing the digit expansion we get the periodic part of the expansion in a sublist, in effect denoting an infinite decimal expansion.
This is enough to get the full information back.
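The mechanism behind such a digit expansion is ordinary long division: once a remainder repeats, the digits cycle forever. A Python sketch (the function name and the output convention, with the repeating digits in a separate list, are illustrative of the idea rather than Mathematica's exact format):

```python
def decimal_expansion(p, q):
    """Digits of p/q: (integer part, pre-periodic digits, repeating digits)."""
    integer, r = divmod(p, q)
    seen, digits = {}, []
    while r and r not in seen:
        seen[r] = len(digits)            # remember where this remainder occurred
        d, r = divmod(r * 10, q)
        digits.append(d)
    if not r:
        return integer, digits, []       # expansion terminates
    start = seen[r]                      # remainder repeats: the cycle starts here
    return integer, digits[:start], digits[start:]

print(decimal_expansion(1, 7))   # (0, [], [1, 4, 2, 8, 5, 7])
print(decimal_expansion(1, 6))   # (0, [1], [6])
```

Since the integer part, pre-period, and period determine the number exactly, the original rational can always be reconstructed from this finite description.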
Another type of structure with numbers is their continued fraction expansion. As is the case for rational numbers and decimal expansions, square roots have an infinite periodic continued fraction expansion.
This is again enough to get the exact answer back.
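The periodic continued fraction of a square root can be generated with a classical integer recurrence, and unrolling the period gives convergents that pin down the exact value. A Python sketch (function names are illustrative):

```python
from math import isqrt
from fractions import Fraction

def sqrt_cf(n):
    """Periodic continued fraction of sqrt(n): (a0, repeating terms)."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        return a0, []                    # perfect square: no periodic part
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:                   # the period always ends at 2*a0
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period

def cf_value(a0, period, reps=6):
    """Approximate the value by unrolling the period a few times."""
    terms = [a0] + period * reps
    x = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        x = t + 1 / x
    return x

print(sqrt_cf(7))                        # (2, [1, 1, 1, 4])
print(float(cf_value(*sqrt_cf(7))))      # close to sqrt(7)
```

Because the convergents converge exponentially fast, a handful of period repetitions already reproduces the square root to machine precision.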
Another low-level view of a number is its binary digit expansion, which can be treated as a bitvector. Bitwise logical operations can then be performed on these bitvectors.
Bitwise operations can often be found in low-level programming languages and are useful in combinatorial algorithms, for example. One difference in Mathematica is that these bitvectors are not a priori bounded, so long bitvectors, like the 1.5-million-bit example below, work just fine.
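Python's unbounded integers make a convenient stand-in for such bitvectors, so the idea can be sketched directly:

```python
# Python integers are unbounded, so they can model arbitrarily long
# bitvectors; &, |, ^ act bitwise across the whole vector at once.
a = (1 << 1_500_000) - 1      # a 1.5-million-bit vector of all ones
b = a >> 2                    # a shifted copy, two bits shorter

print((a & b).bit_length())   # AND keeps only b's bits: 1499998
print((a | b) == a)           # OR adds nothing new: True
print((a ^ b).bit_length())   # XOR leaves the top two bits: 1500000
```

Each operation runs over the full 1.5-million-bit vector with no explicit loop, just as Mathematica's bitwise functions do.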
For arrays of machine-sized numbers (integers, approximate reals, and complex numbers), Version 4 introduces a much more efficient internal representation known as packed arrays.
Below I use a function that returns a value from both Version 3 and Version 4 in a list.
Below we generate 100,000 elements of basic machine number types, comparing the timing in Version 3 and Version 4 using packed array technology.
These are some simple functions that make good use of packed arrays, again showing the timing in Version 3 and Version 4.
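The payoff of packing comes from storing machine numbers in one flat, homogeneous buffer instead of as individually boxed objects. This is not Mathematica's implementation, but the same contrast can be sketched with Python's standard library, where `array.array` plays the role of a packed array and a plain list the role of an unpacked one:

```python
import array
import sys

n = 100_000
as_list = list(range(n))                  # boxed objects, like an unpacked list
packed = array.array('d', range(n))       # one contiguous buffer of machine doubles

# The packed form is a single flat allocation...
print(sys.getsizeof(packed))
# ...while the unpacked form pays a per-element object overhead.
print(sys.getsizeof(as_list) + sum(sys.getsizeof(v) for v in as_list))
```

The several-fold difference in memory is typical, and operations on the flat buffer also avoid per-element dispatch, which is where the speedups come from.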
A substantial part of the value of packed array technology comes from the fact that it is used transparently in the system. This means that many existing programs will directly benefit from these speedups without any changes whatsoever. However, there are also explicit functions available in the new Developer` context for converting between packed arrays and ordinary lists.
This converts the list into a packed array.
Many functions automatically generate packed arrays.
Performing list operations on packed arrays mostly produces packed arrays as a result.
However, there are also operations that produce results which cannot be represented as a packed array; in this case, the result is a list of formulae.
In particular, packed arrays are preserved in graphics, in importing and exporting, and in MathLink applications. Below we import a binary file.
Improvements in dealing with arrays of numbers, coupled with many algorithmic enhancements, mean that much larger-scale data processing is now feasible, for instance computing discrete Fourier transforms.
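To make the transform concrete, here is a naive Python DFT on a small signal (Mathematica's Fourier additionally applies a 1/sqrt(n) normalization, which the plain sum below omits; for large data one would of course use an O(n log n) FFT):

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform, for illustration only."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

# A pure one-cycle cosine concentrates its energy in bins 1 and n-1.
signal = [math.cos(2 * math.pi * t / 8) for t in range(8)]
spectrum = dft(signal)
print([round(abs(c), 6) for c in spectrum])   # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

The sharp spikes at the signal's frequency are exactly what makes the transform useful for analyzing large data sets.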
Another new related feature is discrete convolution and correlation, which is useful for filtering, time series, and finite differencing. Below we import a JPEG image.
This applies the filter that approximates a derivative (or edge detection) to the resulting image.
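The derivative filter is just a discrete correlation with a difference kernel: near zero on smooth regions and large at jumps, which is what makes edges stand out. A one-dimensional Python sketch of the idea (the helper name is illustrative):

```python
def correlate(signal, kernel):
    """Valid-mode discrete correlation: a sliding dot product."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# The kernel [-1, 1] approximates a first derivative: each output value
# is signal[i+1] - signal[i], so it spikes exactly at the jumps.
signal = [0, 0, 0, 5, 5, 5, 0, 0]
print(correlate(signal, [-1, 1]))   # [0, 0, 5, 0, 0, -5, 0]
```

Applied along both rows and columns of an image, the same operation picks out edges, as in the JPEG example above.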
There are a number of improvements to the high-level solvers of Mathematica. This is a sampling of some of these improvements.
Here we compute the solution using a fixed number of digits of precision.
As can be seen, the loss of precision is quite large, which is generally to be expected for polynomial equations, as this is the canonical example of a numerically sensitive problem.
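The canonical example of this sensitivity is a Wilkinson-style polynomial with roots at 1, 2, ..., 20: its expanded coefficients are so large that machine rounding destroys the roots entirely, while exact arithmetic keeps them. A Python sketch contrasting the two (helper names are illustrative):

```python
from fractions import Fraction

def polymul(p, q):
    """Multiply polynomials given as coefficient lists (low to high)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def horner(cs, x):
    """Evaluate a polynomial by Horner's rule."""
    acc = 0
    for c in reversed(cs):
        acc = acc * x + c
    return acc

# Expand (x - 1)(x - 2)...(x - 20) exactly; coefficients reach ~1e19.
coeffs = [1]
for r in range(1, 21):
    coeffs = polymul(coeffs, [-r, 1])

# Exact rational arithmetic still recognizes x = 20 as a root.
print(horner([Fraction(c) for c in coeffs], Fraction(20)))   # 0
# Machine floats cannot even represent the coefficients exactly,
# so the computed value at the root is far from zero.
print(horner([float(c) for c in coeffs], 20.0))
```

This is why significance arithmetic, which tracks the precision loss explicitly, is so valuable for problems of this kind.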
Other solvers have been improved as well.
SparseLinearSolve is a method for solving sparse, and usually large, systems of linear equations. This is another example of a function available in the Developer` context.
The following generates a sparse matrix, represented in a form where only the nonzero elements are stored.
This generates a 100,000 by 100,000 matrix and solves the resulting linear equations.
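SparseLinearSolve itself is not shown here, but the core idea, storing only the nonzero entries and iterating with them, can be sketched in Python with a toy Jacobi solver on a (much smaller) tridiagonal system; the representation and function name are illustrative:

```python
def sparse_solve(rows, b, sweeps=200):
    """Jacobi iteration on a sparse, diagonally dominant system.
    rows[i] maps column index -> value, storing only the nonzeros."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        x = [(b[i] - sum(v * x[j] for j, v in rows[i].items() if j != i))
             / rows[i][i] for i in range(n)]
    return x

# Tridiagonal test system: 4 on the diagonal, -1 off-diagonal.
n = 1000
rows = [{i: 4.0} for i in range(n)]
for i in range(n - 1):
    rows[i][i + 1] = -1.0
    rows[i + 1][i] = -1.0
b = [sum(rows[i].values()) for i in range(n)]   # row sums, so the solution is all ones
x = sparse_solve(rows, b)
print(max(abs(xi - 1.0) for xi in x) < 1e-10)   # True
```

Because each sweep touches only the stored nonzeros, the cost scales with the number of nonzero entries rather than with n^2, which is what makes very large sparse systems tractable.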
Converted by Mathematica June 4, 2000