The Mathematica Journal


In symbolic or algebraic computing we also focus on performance, but the overriding concern is frequently to obtain an algorithm in the first place; many problems have no known algorithmic solution. So in many cases increasing the scope (or range) of problems that Mathematica can handle is very much at the top of the agenda.


The default assumption in Mathematica is that every variable can be an arbitrary complex number. Many results, however, depend on being able to assume certain constraints on variables, which yields simpler but more specialized answers. A typical example is the following.


It cannot simplify to [Graphics:../Images/index_gr_85.gif] or [Graphics:../Images/index_gr_86.gif] under the assumption that [Graphics:../Images/index_gr_87.gif] can be an arbitrary complex number, as the following evaluations show.


For [Graphics:../Images/index_gr_90.gif], however, it can simplify further.


For [Graphics:../Images/index_gr_93.gif], which implicitly also assumes [Graphics:../Images/index_gr_94.gif], we get an even simpler form.
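The original inputs and outputs here are images; a representative reconstruction of this kind of session, using Sqrt[x^2] as a plausible stand-in expression, might look like this:

```mathematica
(* With no assumptions, x may be an arbitrary complex number,
   so the expression stays as it is *)
Simplify[Sqrt[x^2]]
(* -> Sqrt[x^2] *)

(* Restricting x to the reals allows a simpler, specialized answer *)
Simplify[Sqrt[x^2], Element[x, Reals]]
(* -> Abs[x] *)

(* x > 0 implicitly assumes x is real and gives the simplest form *)
Simplify[Sqrt[x^2], x > 0]
(* -> x *)
```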


In general, you can use any logical combination of equations, inequalities, and domain equations. Domain equations are of the form [Graphics:../Images/index_gr_97.gif] or [Graphics:../Images/index_gr_98.gif], and they behave very much like regular equations. If they can evaluate immediately they do so; otherwise they stay unevaluated.

This is an example where the domain equation and an equational analog can immediately evaluate to True.


These are some more examples that evaluate immediately.


Here, however, is a case where it is not known whether the statement is true.


If you use a variable, then the domain equation remains unevaluated, just as is the case for regular equations.
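The specific inputs in the original are images; the behavior described above can be sketched with plausible stand-ins:

```mathematica
(* These domain equations can evaluate immediately *)
Element[3, Integers]           (* -> True *)
Element[Sqrt[2], Rationals]    (* -> False *)

(* With a symbolic variable the domain equation stays unevaluated,
   just as the regular equation x == 3 would *)
Element[x, Integers]           (* -> stays unevaluated *)
```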


The current set of domains is given below.
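The original table is an image; to the best of my knowledge, the domains available for domain equations in this version are the following:

```mathematica
(* Domains usable in domain equations Element[x, dom] *)
{Algebraics, Booleans, Complexes, Integers, Primes, Rationals, Reals}
```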


With the addition of domain equations one can express a wide variety of constraints or assumptions. The following examples are oriented towards number theory, the first being Fermat's little theorem.
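The Fermat's-little-theorem input is an image in the original; it presumably resembles the following reconstruction:

```mathematica
(* Fermat's little theorem: a^p == a (mod p)
   for any integer a and prime p *)
Simplify[Mod[a^p - a, p], Element[a, Integers] && Element[p, Primes]]
(* -> 0 *)
```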


The next expresses a structural property of a number-theoretic function.


One can also express properties of functions.


The last example demonstrates another feature: the ability to simplify equations and inequalities. When the equations and inequalities are algebraic (including polynomial) and the domains are real or complex, the methods involved are also decision procedures, based on Gröbner bases and cylindrical algebraic decomposition, respectively. This is, for example, the well-known inequality between the geometric and arithmetic means.
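The geometric-arithmetic mean input is an image in the original; a plausible reconstruction is:

```mathematica
(* The arithmetic mean dominates the geometric mean
   for nonnegative reals *)
Simplify[Sqrt[a b] <= (a + b)/2, a >= 0 && b >= 0]
(* -> True *)
```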


This is a slightly harder example that was posed in a newsgroup.



One of the technologies needed to build a rigorous assumptions mechanism is the ability to manipulate inequalities. In particular, we have implemented cylindrical algebraic decomposition (CAD), which computes a triangular or nested structure for the solution set of a system of inequalities. This is an example of an Experimental` context feature. These features are previews of future functionality, but their interfaces may change in future versions of Mathematica.


The result is a union (Or) of cells that have a triangular or cylindric structure of the form:


This is essentially the analog of Gaussian elimination for polynomial inequalities. And just as almost any problem in linear algebra can ultimately be solved using Gaussian elimination (sometimes as part of other algorithms), one can solve almost any problem in real polynomial algebra using CAD or variants thereof.

One of the simplest examples is the CAD of a disk, as seen below.
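The disk example is an image in the original; assuming the experimental function carries the name of the algorithm, the session presumably resembles:

```mathematica
(* Decompose the open unit disk into cylindrical cells:
   x ranges freely, then y ranges over an x-dependent interval *)
Experimental`CylindricalAlgebraicDecomposition[x^2 + y^2 < 1, {x, y}]
(* -> -1 < x < 1 && -Sqrt[1 - x^2] < y < Sqrt[1 - x^2] *)
```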


CAD is used in a number of the simplification functions, but it is also the underpinning of another class of solvers. For instance, it can solve systems of algebraic inequalities.


Or exact minimization over regions defined by systems of algebraic inequalities.


Or quantifier elimination. This gives the conditions on the coefficients in a polynomial for it to be positive.
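The quantifier-elimination input is an image; assuming the experimental resolver takes a quantified formula (the exact name and signature may differ in this version), the example is presumably along these lines:

```mathematica
(* When is x^2 + b x + c positive for every real x? *)
Experimental`Resolve[ForAll[x, x^2 + b x + c > 0]]
(* The answer is the discriminant condition,
   equivalent to b^2 - 4 c < 0 *)
```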


There are also future enhancements in the works that are ultimately based on the CAD algorithm.


A number of integral and sum transforms have also been reimplemented and moved into the kernel. These transforms are frequently very useful when solving differential and difference equations. They also have direct interpretations in many disciplines such as signal and image processing, control, and physics.

This is, for instance, the one-sided Laplace transform.
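The transform input is an image in the original; a representative example of the reimplemented kernel function is:

```mathematica
LaplaceTransform[Sin[t], t, s]
(* -> 1/(1 + s^2) *)

(* The transform turns differentiation into multiplication by s,
   which is what makes it useful for differential equations *)
LaplaceTransform[f'[t], t, s]
(* -> -f[0] + s LaplaceTransform[f[t], t, s] *)
```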


Similarly, the sum transform version of the Laplace transform is the Z transform.


A variant of the Z transform is also called a generating function, which is often used in combinatorics.


Similarly, a simple variant of the inverse Z transform gets the sequence back.
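The inputs here are images; a representative round trip through the Z transform (the exact output form may differ) is:

```mathematica
(* Z transform of a geometric sequence:
   Sum of a^n z^-n equals z/(z - a) for |z| > |a| *)
ZTransform[a^n, n, z]

(* The inverse transform recovers the sequence *)
InverseZTransform[z/(z - a), z, n]
(* -> a^n *)
```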


Similarly, FourierTransform, FourierSinTransform, and FourierCosTransform have been reimplemented, and support has been added for generalized functions such as DiracDelta and UnitStep. This is a typical example, resulting in a Dirac delta function in the output.
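The input is an image; with the default transform convention, the example presumably resembles:

```mathematica
(* The Fourier transform of a constant concentrates
   all its content at zero frequency *)
FourierTransform[1, t, w]
(* -> Sqrt[2 Pi] DiracDelta[w], under the default convention *)
```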



We have also been adding to the already vast knowledge base of special functions in Mathematica 4. Below you will find a sampling of some of these additions.

The fundamental reason we add special functions is that they are useful containers of specialized knowledge. The knowledge of special functions built into Mathematica encompasses much more than numerical evaluation across the complex plane (which in itself can be a challenge). It typically includes special symbolic values, derivatives, and series expansions, as well as how the functions can be used in integration and in the solution of differential equations. Mathematica represents a live and usable version of this vast body of knowledge. We have also been busy working on a web encyclopedia of special functions and their properties; a preview is already available.

In Mathematica 4 we have been adding to all aspects of the special functions knowledge base. There are new special functions (e.g., harmonic numbers and bivariate hypergeometric functions), generalized functions (e.g., the Dirac delta and unit step functions), as well as number theoretic functions (e.g., the Carmichael lambda function).

The harmonic numbers are the discrete or sequence version of the logarithm, and they frequently come up in summation problems.


These are the harmonic numbers of order [Graphics:../Images/index_gr_152.gif].
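The inputs are images; representative examples of the new HarmonicNumber function (the order parameter in the original is an image, so r is a stand-in) are:

```mathematica
(* Sums of reciprocals produce harmonic numbers *)
Sum[1/k, {k, 1, n}]
(* -> HarmonicNumber[n] *)

(* Harmonic numbers of order r generalize this to powers 1/k^r *)
Sum[1/k^r, {k, 1, n}]
(* -> HarmonicNumber[n, r] *)
```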


Appell's F1 function is an example of a bivariate hypergeometric function.
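The input is an image; the new AppellF1 function is presumably exercised along these lines:

```mathematica
(* Appell F1 is a hypergeometric series in two variables;
   when one variable vanishes it reduces to 2F1 *)
AppellF1[a, b1, b2, c, x, 0]
(* mathematically equal to Hypergeometric2F1[a, b1, c, x] *)
```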


Some additional special functions include Nielsen's generalized polylogarithm and the Struve functions. In many cases there is more knowledge built into Mathematica about existing functions; for instance, there are additional simplifiers for Bessel, Fibonacci, gamma, harmonic number, hypergeometric, polygamma, polylogarithm, and zeta functions. A simple example is the following.
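The input is an image; a plausible example of such a simplifier at work is:

```mathematica
(* Half-integer Bessel functions reduce to elementary functions *)
FullSimplify[BesselJ[1/2, x]]
(* -> Sqrt[2/Pi] Sin[x]/Sqrt[x] *)
```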


This is a conversion from trigonometric to radical expressions.
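The input is an image; a plausible stand-in for this kind of conversion is:

```mathematica
(* Trigonometric values at special angles expand to radicals *)
FunctionExpand[Cos[Pi/8]]
(* -> Sqrt[2 + Sqrt[2]]/2 *)
```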


A new class of functions is the generalized functions, such as Dirac's delta function and the unit step or Heaviside function, as well as their sequence versions, such as Kronecker's delta and the discrete delta function.

A Dirac delta function essentially samples a function at a point.
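The input is an image; a representative example of the sampling property (the sampled function and the point are stand-ins) is:

```mathematica
(* Integrating against a shifted delta samples x^2 at x == 2 *)
Integrate[x^2 DiracDelta[x - 2], {x, -Infinity, Infinity}]
(* -> 4 *)
```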


In this case the function that is being sampled is equal to 1 everywhere, and the integral will sample this function at every point where the argument to DiracDelta is zero. So in effect we count the number of zeros of [Graphics:../Images/index_gr_163.gif] in the interval [Graphics:../Images/index_gr_164.gif].


A function related to the Dirac delta function is the unit step or Heaviside function.


This uses the unit step function to produce a square wave.
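The article's own construction is in the missing image; one hypothetical way to build a square wave from the unit step is:

```mathematica
(* A 0/1 square wave of period 2, built from UnitStep
   (hypothetical construction, not the article's) *)
Plot[UnitStep[Sin[Pi x]], {x, 0, 4}]
```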



There are also new number theoretic functions such as the Carmichael lambda function and the multiplicative order function.
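The inputs are images; representative evaluations of the two new functions are:

```mathematica
CarmichaelLambda[10]
(* -> 4, the exponent of the multiplicative group mod 10 *)

MultiplicativeOrder[3, 10]
(* -> 4, since 3^4 == 81 == 1 mod 10 *)
```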

Converted by Mathematica, June 4, 2000
