The Mathematica Journal
Volume 9, Issue 1


What's New in Mathematica 5
The Mathematica 5 Product Team

Performance Enhancements

Fast Dense Numerical Linear Algebra

Dense numerical linear algebra is an important building block for much of Mathematica’s numerical functionality, from data analysis and matrix operations to numerical differential equation solvers and graphics, so the performance increases in Mathematica 5 affect essentially every part of the system. On modern microprocessors, Mathematica 5 now offers class-leading performance for these operations, on par with optimized Fortran or MATLAB code.

Here are some examples of speedups between Mathematica 4.2 and Mathematica 5.
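As a rough illustration of the kind of benchmark involved (a hedged sketch, not the article's original timings), one can time a dense solve on a random machine-precision matrix; the absolute numbers depend entirely on the hardware.

    (* build a 1000 x 1000 random machine-precision matrix and a right-hand side *)
    m = Table[Random[], {1000}, {1000}];
    b = Table[Random[], {1000}];
    (* time the dense linear solve *)
    Timing[LinearSolve[m, b];]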

Mathematica 5 also takes advantage of multiple processors for some numerical operations (e.g., dot product) on operating systems such as Linux, Windows, and HP-UX that support this multithreaded capability.
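For example (again only a sketch, with the size chosen for illustration), a matrix product such as the following is among the operations that can be dispatched to optimized, multithreaded routines on such systems.

    (* a 2000 x 2000 machine-precision matrix product *)
    m = Table[Random[], {2000}, {2000}];
    Timing[m . m;]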

High-Speed Sparse Linear Algebra

A large number of real-world problems involve sparse matrices, that is, matrices in which most of the elements are zero. Examples include solving ordinary and partial differential equations, optimization problems, and large-scale simulations. Mathematica 5’s implementation of sparse linear algebra is unique in that it allows arrays of any rank, not just matrices, and is fully integrated with the rest of the Mathematica system. The performance of basic sparse linear algebra operations is on par with, or better than, that of dedicated numerical systems.

Creating Sparse Arrays and Converting between Sparse and Dense Arrays

Adding the head SparseArray forces the creation of a sparse array object.
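For instance, wrapping an ordinary list of lists (a minimal sketch; the particular matrix is chosen only for illustration):

    (* only the nonzero elements are stored internally *)
    s = SparseArray[{{1, 0, 0}, {0, 2, 0}, {0, 0, 3}}]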

Applying Normal to a sparse array object gives the corresponding dense array.
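Continuing the sketch above:

    (* recover the ordinary (dense) list of lists *)
    Normal[s]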

Operations on Sparse Matrices

The sparse array data structure and the specialized sparse algorithms allow Mathematica users to operate on very large matrices. The following matrix has 1,000,000 × 1,000,000 elements.
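A matrix of this size can be specified by listing only its nonzero positions and values. The tridiagonal matrix below is a hypothetical stand-in for the article's original example, chosen so that it has roughly three million nonzero elements.

    n = 10^6;
    (* nonzero positions: the main diagonal and its two neighboring diagonals *)
    pos = Join[Table[{i, i}, {i, n}],
               Table[{i, i + 1}, {i, n - 1}],
               Table[{i + 1, i}, {i, n - 1}]];
    (* the corresponding values: 2. on the diagonal, -1. off the diagonal *)
    vals = Join[Table[2., {n}], Table[-1., {2 n - 2}]];
    s = SparseArray[pos -> vals, {n, n}]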

Notice the big difference in size between the sparse and dense representations. The dense representation would require about eight terabytes of memory for storage, while the sparse array object needs only a bit over 40 megabytes.
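With the sketch matrix s above, the comparison can be made explicit (the exact byte counts vary slightly by platform and version).

    (* storage actually used by the sparse representation *)
    ByteCount[s]
    (* a dense machine-precision version would need about n^2 * 8 bytes, i.e., 8 terabytes *)
    n^2 * 8.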

Operations on sparse arrays are extremely fast. Notice that LinearSolve correctly detects that it has been given a sparse array object.
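Continuing with the sketch matrix s, a right-hand side with a million entries can be solved for directly; the timing depends on the machine.

    b = Table[1., {10^6}];
    (* LinearSolve recognizes the SparseArray and dispatches to a sparse solver *)
    Timing[x = LinearSolve[s, b];]
    (* check the residual of the computed solution *)
    Max[Abs[s . x - b]]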

We have just solved a system of 1,000,000 equations in 1,000,000 variables. Inverting a large sparse matrix is also very fast.
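A sketch of the latter, with one important caveat: the inverse of a general sparse matrix is usually dense, so the hypothetical example below (not the article's original) uses a diagonal matrix, whose inverse is itself sparse and therefore small enough to store.

    (* a 100,000 x 100,000 sparse diagonal matrix *)
    d = SparseArray[Table[{i, i} -> N[i + 1], {i, 10^5}]];
    Timing[dinv = Inverse[d];]
    (* spot-check one entry of the inverse: 1/6 *)
    dinv[[5, 5]]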

Large-Scale Linear Programming

Mathematica is now optimized for solving large-scale linear programming problems, using an efficient interior-point method. Until now, similar functionality has been available only in expensive special-purpose packages.
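A minimal sketch of the call (the tiny problem here is only for illustration; the Method setting selects the interior-point algorithm):

    (* minimize c.x subject to m.x >= b and x >= 0 *)
    c = {1., 2.};
    m = {{3., 1.}, {1., 4.}};
    b = {6., 4.};
    x = LinearProgramming[c, m, b, Method -> "InteriorPoint"]
    (* the optimal value of the objective *)
    c . x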

Example: Solving a Standard Test Problem

The following Import command loads a standard linear programming test problem that comes with Mathematica 5.

The 80bau3b problem has about 2,000 constraints and about 10,000 variables.

This solves the linear programming problem and returns the optimizing argument.

This is the optimal value.

This problem could not have been solved in Mathematica 4.2, which would have used a dense form of the data; the resulting memory requirements and processing time would have made the computation impractical.

Example: Solving Another Standard Test Problem

This is an example of solving a linear programming problem with 232,000 variables and 10,000 inequalities.

As in the previous example, here is the optimal value.

Big-Number Arithmetic

Mathematica has been able to handle numbers with millions of digits for quite some time, but more efficient implementations and better algorithms have improved performance by a factor of up to three for numbers with fewer than one thousand digits. For numbers with millions of digits, the improvement is even greater because an asymptotically more efficient algorithm is used.
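For instance (a sketch; the actual timings depend on the machine), a single multiplication of two numbers with a million digits each, and a computation of a million digits of Pi, exercise this machinery.

    (* two pseudorandom integers with 1,000,000 digits each *)
    x = Random[Integer, {10^999999, 10^1000000}];
    y = Random[Integer, {10^999999, 10^1000000}];
    Timing[x*y;]
    Timing[N[Pi, 10^6];]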

The big-number performance of Mathematica 5 is on par with, or faster than, that of special-purpose libraries and is unmatched among general-purpose computation systems.

64-Bit Platform Support

Users running increasingly large computations and applications can now address nearly a million terabytes of memory (1 terabyte = 1,024 gigabytes), overcoming the 4-gigabyte address ceiling of 32-bit systems such as the current Intel IA-32 architecture.

The combination of a 64-bit address space and fast numerics, part of Wolfram Research’s gigaNumerics initiative, lets Mathematica users solve very large problems.

Mathematica 5 is optimized for a large number of 64-bit CPUs and operating systems, including Sun Solaris for UltraSPARC, HP-UX for PA-RISC, IBM AIX for the Power architecture, HP Tru64 Unix on Alpha, and Linux on Alpha. The two main benefits are the ability to solve vastly larger problems than on 32-bit platforms and speed increases for big-number arithmetic due to the 64-bit word length.

This 64-bit optimization will also let Mathematica users take full advantage of planned performance increases in future versions of these 64-bit processors.

Faster MathLink

MathLink uses TCP/IP devices for communication between parts of Mathematica, such as the front end and the kernel, and it is the primary means of communication between multiple Mathematica kernels, as in gridMathematica clusters.

A new TCP/IP protocol allows Mathematica 5 to communicate at the speed of the underlying network. On a standard 100Base-T network, bandwidth improves by a factor of 10 and latency by a factor of 200. On faster networks, such as gigabit networks and the crossbars found in leading-edge computing centers, the gains are even higher. Other improvements, such as a new SharedMemory protocol on Windows platforms, give a further 10-fold improvement for communication between parts of Mathematica running on the same machine.
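As a small top-level illustration of where a link protocol is chosen (a sketch following the standard LinkCreate/LinkConnect loopback pattern; the explicit LinkProtocol settings are assumptions made to show the option, and the default protocol differs by platform):

    (* create a listening link end using the TCPIP protocol *)
    link1 = LinkCreate[LinkProtocol -> "TCPIP"];
    (* connect a second end to it, forming a loopback within this kernel *)
    link2 = LinkConnect[First[link1], LinkProtocol -> "TCPIP"];
    (* write an expression on one end and read it back on the other *)
    LinkWrite[link1, Expand[(1 + t)^3]];
    LinkRead[link2]
    LinkClose[link1]; LinkClose[link2];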

Because J/Link, .NET/Link, and almost all Import and Export formats are built on MathLink, they also benefit from these improvements.



     