The Mathematica Journal


One of the main themes in numerics has been to greatly improve the scalability and performance of computations in Mathematica. In fact, Version 4 marks the first installment of a multi-release project called gigaNumerics, intended to make Mathematica scale to very large computations.


One of the most visible enhancements to the treatment of numbers is the ability to deal efficiently with very high-precision numbers. For instance, computing a hundred thousand digits of [Graphics:../Images/index_gr_1.gif] should take only a few seconds on most modern computers.


The key development making this possible is the use of asymptotically fast algorithms for multiplication, as well as for evaluating constants and functions. Multiplication is a good example of algorithm development in Mathematica. In this case four different algorithms are used: a schoolbook implementation for small numbers ([Graphics:../Images/index_gr_4.gif]); Karatsuba-based multiplication ([Graphics:../Images/index_gr_5.gif]); DFT-based multiplication ([Graphics:../Images/index_gr_6.gif]); and interleaved Karatsuba and DFT-based multiplication. Several algorithms are needed because each is optimal in a different interval of precision. In the case of Karatsuba, one can visualize the amount of work saved.
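Mathematica's internal implementation is not exposed, but the Karatsuba idea itself can be sketched in a few lines of Python: split each operand in half and trade four recursive half-size multiplications for three. This is an illustrative re-implementation, not Mathematica's code; the `cutoff` threshold is an arbitrary choice.

```python
def karatsuba(x, y, cutoff=32):
    """Multiply non-negative integers x and y.

    Below `cutoff` bits, fall back to the builtin (schoolbook-style)
    product; above it, split each operand in half and replace four
    recursive products with three.
    """
    if x.bit_length() <= cutoff or y.bit_length() <= cutoff:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    hh = karatsuba(xh, yh)                        # high halves
    ll = karatsuba(xl, yl)                        # low halves
    mid = karatsuba(xh + xl, yh + yl) - hh - ll   # cross terms
    return (hh << (2 * m)) + (mid << m) + ll
```

The single `mid` product is what saves the work: the schoolbook scheme would compute the two cross products `xh*yl` and `xl*yh` separately.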


This displays the complexity of the algorithm for 0 to 6 levels of recursion (the number of recursions depends on the size of the input), where the area of this fractal approaches [Graphics:../Images/index_gr_9.gif] and represents the work done by the Karatsuba method compared to [Graphics:../Images/index_gr_10.gif], which is the whole square and represents the amount of work done by the regular schoolbook multiplication algorithm.
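The area in that picture can be checked numerically: each recursion level keeps 3 of the 4 quarter-size pieces, so after k levels the fraction (3/4)^k of the schoolbook work remains. A quick Python check (my own illustration, not part of the original article):

```python
import math

# Each Karatsuba level replaces 4 half-size products with 3, so
# after k levels only (3/4)^k of the schoolbook work remains.
def karatsuba_work_fraction(k):
    return (3 / 4) ** k

for k in range(7):                    # 0 to 6 levels, as in the figure
    print(k, karatsuba_work_fraction(k))

# Overall this gives O(n^log2(3)) ~ O(n^1.585) instead of O(n^2).
print(round(math.log2(3), 3))         # 1.585
```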



Apart from working with numbers as whole entities, there are enhancements that work on the structure of numbers. For example, the decimal expansions of rational numbers are eventually periodic, as seen in the graph below, and RealDigits correctly identifies this.



Computing the digit expansion, we get the periodic part of the expansion in a sublist, in effect denoting an infinite decimal expansion.


This is enough to get the full information back.
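For readers without Mathematica at hand, the behavior of RealDigits on rationals can be imitated by long division in Python: the digit stream must start repeating as soon as a remainder recurs. This is a sketch of the idea, not Mathematica's algorithm:

```python
def decimal_expansion(p, q):
    """Return (integer_part, preperiodic_digits, repeating_digits)
    of the fraction p/q, found by long division: the digit stream
    becomes periodic as soon as a remainder repeats."""
    integer_part, r = divmod(p, q)
    seen = {}        # remainder -> position where it first occurred
    digits = []
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // q)
        r %= q
    if r == 0:                       # terminating expansion
        return integer_part, digits, []
    start = seen[r]
    return integer_part, digits[:start], digits[start:]
```

For example, `decimal_expansion(1, 7)` yields `(0, [], [1, 4, 2, 8, 5, 7])`, the repeating block of 0.142857142857…, and `decimal_expansion(1, 6)` yields `(0, [1], [6])` for 0.1666….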


Another type of structure in numbers is the continued fraction expansion. Just as rational numbers have eventually periodic decimal expansions, square roots have infinite, eventually periodic continued fraction expansions.


This is again enough to get the exact answer back.
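The periodic expansion of a square root can be computed with the classical recurrence for quadratic irrationals. This Python sketch imitates the shape of ContinuedFraction's answer for Sqrt[n] (the termination criterion, a term equal to 2*a0, is the standard one):

```python
import math

def sqrt_continued_fraction(n):
    """Continued fraction of sqrt(n) for non-square n, returned as
    (a0, periodic_part).  Uses the classical recurrence; the
    periodic part ends when a term equals 2*a0."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return a0, []
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period
```

For example, `sqrt_continued_fraction(2)` returns `(1, [2])`, i.e. sqrt(2) = [1; 2, 2, 2, …].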


Another low-level structure in numbers is the binary digit expansion, which can be viewed as a bit vector. Bitwise logical operations can then be performed on these bit vectors.


Bitwise operations are common in low-level programming languages and are useful in combinatorial algorithms, for example. One difference in Mathematica is that these bit vectors are not a priori bounded, so long bit vectors, like the 1.5-million-bit vector below, work just fine.
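Python integers are also arbitrary-precision, so the same unbounded bit-vector behavior can be demonstrated there:

```python
# Python integers, like Mathematica's, are not a priori bounded,
# so bitwise operations work on bit vectors of any length.
n = 1_500_000
a = (1 << n) - 1          # 1.5 million one-bits
b = a >> 1                # the same vector shifted right by one
x = a ^ b                 # XOR: only the top bit survives
print(x == 1 << (n - 1))  # True
print(bin(a & 0b1011))    # low-order bits: 0b1011
```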


Packed Arrays

For arrays of machine-sized numbers (Integer, Real, and Complex), we have introduced a packed storage format, as well as tight integration with the Mathematica virtual machine (the Mathematica compiler). This leads to quite significant memory and time savings. The integration is analogous to how Mathematica makes use of machine-precision computations for scalars whenever possible, but here it is applied to arrays (or tensors) of machine numbers.

Below I use a function R that evaluates the input remotely in Mathematica Version 3 and Version 4 simultaneously, using RemoteEvaluate from the Parallel Computing Toolkit. The memory savings for Integer, Real, and Complex are roughly factors of 5, 2.5, and 4, respectively. The time savings vary with the computation at hand.
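As a rough analogy to packed arrays, Python's standard array module stores machine doubles contiguously at 8 bytes per element, while a list of boxed floats pays a pointer plus a float object per element. The sizes printed below assume 64-bit CPython; this only illustrates the packing idea, not Mathematica's internals.

```python
import sys
from array import array

n = 100_000
boxed = [float(i) for i in range(n)]     # list of boxed Python floats
packed = array('d', boxed)               # contiguous machine doubles

# The packed buffer is exactly 8 bytes per element; the boxed list
# pays a pointer per element plus a float object per element.
packed_bytes = packed.buffer_info()[1] * packed.itemsize
boxed_bytes = sys.getsizeof(boxed) + n * sys.getsizeof(1.0)
print(packed_bytes)                      # 800000
print(boxed_bytes > 2 * packed_bytes)    # True on 64-bit CPython
```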

This returns a value from both Version 3 and Version 4 in a list.


Below we generate 100,000 elements of basic machine number types, comparing the timing in Version 3 and Version 4 using packed array technology.


These are some simple functions that make good use of packed arrays, again showing the timing in Version 3 and Version 4.


A substantial part of the value of packed array technology comes from the fact that it is used transparently throughout the system. This means that many existing programs will benefit from these speedups directly, without any changes whatsoever. There are also explicit functions for testing for and converting between packed and unpacked storage of arrays; these live in the new Developer` context, which contains documented functions useful for people doing advanced development with Mathematica.

This makes the Developer` functions available.


Many functions automatically generate packed arrays.


Performing list operations on packed arrays mostly produces packed arrays as a result.


However, there are also operations whose results cannot be represented as a packed array; in this case, a list of formulas.


In particular, packed arrays are preserved in graphics, in importing and exporting, and in MathLink applications. Below we import a binary MAT file and visualize the matrix.



Data Processing

Improvements in dealing with arrays of numbers, coupled with many algorithmic enhancements, mean that much larger-scale data processing is now feasible, for instance computing discrete Fourier transforms.
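The workhorse behind such transforms is the O(n log n) fast Fourier transform. A minimal radix-2 Cooley-Tukey sketch in Python shows the divide-and-conquer structure (illustrative only, not Mathematica's implementation, and restricted to power-of-two lengths):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])       # transform of even-indexed samples
    odd = fft(x[1::2])        # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Combine with the twiddle factor e^(-2*pi*i*k/n).
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

For example, `fft([1, 1, 1, 1])` gives `[4, 0, 0, 0]` (all the energy in the zero frequency), and `fft([1, 0, 0, 0])` gives a flat spectrum of ones.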


Another related new feature is discrete convolution and correlation, useful for filtering, time series analysis, and finite differencing. Below we import a JPEG image.



This applies the filter that approximates a derivative (or edge detection) to the resulting image.
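The idea behind such a derivative filter is easy to reproduce: convolving a signal with the kernel {1, -1} takes successive differences, so flat regions map to zero and jumps stand out. A one-dimensional Python sketch (my own illustration of the principle, not the filter used above):

```python
def convolve_valid(signal, kernel):
    """Discrete convolution, 'valid' region only (no padding)."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + k - 1 - j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A step edge: the derivative kernel {1, -1} is zero on the flat
# parts and spikes at the jump -- one-dimensional edge detection.
step = [0, 0, 0, 5, 5, 5]
edges = convolve_valid(step, [1, -1])
print(edges)   # [0, 0, 5, 0, 0]
```

For an image, the same kernel is applied along each row (or as a 2-D kernel) to highlight edges.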




There are a number of improvements to the high-level solvers of Mathematica. What follows is a sampling of these improvements.

NSolve now has greatly improved capability for solving systems of polynomial equations. It delivers all solutions to such systems and uses a direct Gröbner basis method rather than an iterative search method. For instance, suppose we have the following 4-by-4 system of polynomial equations.


Here we compute the solution using [Graphics:../Images/index_gr_63.gif] digits of precision.


As can be seen, the loss of precision is quite large, which is generally to be expected, as polynomial root-finding is the canonical example of a numerically sensitive problem.
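A toy example shows why root-finding is so sensitive: for a quadratic with a double root, a perturbation of 10^-4 in one coefficient moves the roots by 10^-2, a hundredfold amplification. (This is my own illustration, not the system solved above.)

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c via the quadratic formula."""
    disc = math.sqrt(b * b - 4 * a * c)
    return ((-b - disc) / (2 * a), (-b + disc) / (2 * a))

# (x - 1)^2 has the double root 1; perturb the constant term by 1e-4.
r1, r2 = quadratic_roots(1, -2, 1 - 1e-4)
print(r1, r2)   # roots move to 0.99 and 1.01: a 100x amplification
```

In general, a root of multiplicity m responds to a coefficient perturbation of size eps with a shift on the order of eps^(1/m).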


FindMinimum has a number of robustness and performance improvements. In particular, there is now a Levenberg-Marquardt method, which is particularly suitable for nonlinear least-squares problems. For instance, the Rosenbrock function is a typical test problem for an optimization method. The difficulty is that a search method has to find its way around the banana-like bend in the contour plot below.



The function Rosenbrock generalizes this problem to [Graphics:../Images/index_gr_69.gif] dimensions.


FindMinimum quickly finds the minimum value.
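The Levenberg-Marquardt idea can be sketched for the two-dimensional Rosenbrock problem by writing f(x, y) = (1 - x)^2 + 100 (y - x^2)^2 as a sum of squared residuals and damping the Gauss-Newton step. This is a pure-Python sketch of the method, not FindMinimum's implementation; the damping schedule is a simple common choice.

```python
def rosenbrock_residuals(x, y):
    # f(x, y) = (1 - x)^2 + 100 (y - x^2)^2 as a sum of squares
    return (1 - x, 10 * (y - x * x))

def lm_rosenbrock(x=-1.2, y=1.0, lam=1e-3, steps=500):
    """Levenberg-Marquardt sketch: solve (J^T J + lam*I) d = -J^T r
    at each step, shrinking lam after a successful step and growing
    it after a failed one."""
    def cost(x, y):
        r1, r2 = rosenbrock_residuals(x, y)
        return r1 * r1 + r2 * r2
    for _ in range(steps):
        if cost(x, y) < 1e-16:
            break
        r1, r2 = rosenbrock_residuals(x, y)
        # Jacobian of the residual vector
        j11, j12 = -1.0, 0.0
        j21, j22 = -20.0 * x, 10.0
        # Damped normal equations A * d = g, solved in closed form
        a11 = j11 * j11 + j21 * j21 + lam
        a12 = j11 * j12 + j21 * j22
        a22 = j12 * j12 + j22 * j22 + lam
        g1 = -(j11 * r1 + j21 * r2)
        g2 = -(j12 * r1 + j22 * r2)
        det = a11 * a22 - a12 * a12
        dx = (a22 * g1 - a12 * g2) / det
        dy = (a11 * g2 - a12 * g1) / det
        if cost(x + dx, y + dy) < cost(x, y):
            x, y, lam = x + dx, y + dy, lam * 0.5
        else:
            lam *= 10.0
    return x, y
```

Starting from the standard point (-1.2, 1), this follows the curved valley to the minimum at (1, 1); large damping makes the step gradient-like around the bend, and small damping recovers the fast Gauss-Newton step near the solution.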


Other solvers such as FindRoot, Fit, NonlinearFit, Regress, and NonlinearRegress have similar improvements.

SparseLinearSolve solves sparse systems of linear equations, which are usually large. This is another example of a Developer` function.


The following generates a sparse matrix, represented in the form [Graphics:../Images/index_gr_74.gif], where only the nonzero elements [Graphics:../Images/index_gr_75.gif] are stored in the list.


This generates a 100,000 by 100,000 matrix and solves the resulting linear equations.
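The same {i, j, v} triplet representation and an iterative solve can be sketched in Python. Here Jacobi iteration stands in for the solver (the article does not specify Mathematica's actual method), applied to a strictly diagonally dominant tridiagonal system, for which Jacobi is guaranteed to converge:

```python
def sparse_solve(n, triples, b, iterations=200):
    """Solve A x = b by Jacobi iteration, with A given in sparse
    (i, j, value) triplet form so only nonzeros are stored.
    Converges for strictly diagonally dominant A; a sketch of the
    idea, not Mathematica's SparseLinearSolve."""
    diag = [0.0] * n
    off = [[] for _ in range(n)]     # off-diagonal entries per row
    for i, j, v in triples:
        if i == j:
            diag[i] = v
        else:
            off[i].append((j, v))
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(v * x[j] for j, v in off[i])) / diag[i]
             for i in range(n)]
    return x

# Tridiagonal test system: 4 on the diagonal, -1 off it.
n = 1000
triples = [(i, i, 4.0) for i in range(n)]
triples += [(i, i + 1, -1.0) for i in range(n - 1)]
triples += [(i + 1, i, -1.0) for i in range(n - 1)]
b = [1.0] * n
x = sparse_solve(n, triples, b)
```

Storage is proportional to the number of nonzeros rather than to n^2, which is what makes systems on the scale of 100,000 equations tractable at all.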


Converted by Mathematica      June 4, 2000
