The Mathematica Journal, Volume 9, Issue 4


Moment-Based Density Approximants
Serge B. Provost

4. A Unified Methodology

Remark 3.1 suggests that the exact density function associated with a distribution whose first moments are known can be approximated by means of the product of a base density function, whose parameters are determined by matching moments, and a polynomial of degree , whose coefficients are obtained by making use of the method of moments as well. This general semiparametric approach to density approximation, which incidentally does not rely on orthogonal polynomials, is formally described in the following result.

Result 4.1 Let be the density function of a continuous random variable defined in the interval , , , where and , , , denote the density function of whose support is the interval , , and let the base density function , where is a positive normalizing constant, be an initial density approximant to with . Assuming that , uniquely define the distribution of , that exists for , and that whenever is a nontrivial function of , its tail behaviour is congruent to that of , the latter can be approximated by

with , where is an matrix whose ()th row is , . When depends on parameters, these are determined by equating to , . The corresponding density approximant for is then

This last formula can easily be coded as follows:
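A minimal sketch of such an implementation is given below; the argument names are illustrative rather than the article's (psi denotes the base density as a pure function, m the list of its moments, mu the list of the target moments, and x the variable in which the approximant is expressed).

(* Minimal sketch of Result 4.1 (illustrative names, not the article's original input):
   psi : base density, supplied as a pure function;
   m   : list {m0, m1, ..., m(2n)} of the base-density moments;
   mu  : list {mu0, mu1, ..., mun} of the moments of the target distribution;
   x   : the variable in which the approximant is expressed. *)
approximant[psi_, m_List, mu_List, x_] :=
  Module[{n = Length[mu] - 1, mat, xi},
    (* (n+1) x (n+1) matrix whose (h+1)th row is (m_h, m_(h+1), ..., m_(h+n)) *)
    mat = Table[m[[h + i + 1]], {h, 0, n}, {i, 0, n}];
    (* polynomial coefficients obtained by the method of moments *)
    xi = LinearSolve[mat, mu];
    (* approximant: base density multiplied by the polynomial adjustment *)
    psi[x] (xi . x^Range[0, n])
  ]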

We now show that the polynomial coefficients can be determined by making use of the method of moments, that is, by equating the first moments obtained from to those of :

which is equivalent to , where

or , that is, , where is as defined in Result 4.1.
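In matrix form, and with notation chosen here for illustration (m_j for the jth moment of the base density, \mu_j for the jth moment of the target distribution, and \xi_i for the polynomial coefficients), these moment conditions read

\sum_{i=0}^{n} \xi_i \, m_{h+i} = \mu_h, \qquad h = 0, 1, \ldots, n,

or, equivalently,

\begin{pmatrix}
 m_0 & m_1 & \cdots & m_n \\
 m_1 & m_2 & \cdots & m_{n+1} \\
 \vdots & \vdots & & \vdots \\
 m_n & m_{n+1} & \cdots & m_{2n}
\end{pmatrix}
\begin{pmatrix} \xi_0 \\ \xi_1 \\ \vdots \\ \xi_n \end{pmatrix}
=
\begin{pmatrix} \mu_0 \\ \mu_1 \\ \vdots \\ \mu_n \end{pmatrix},

so that the coefficient vector is obtained by solving this (n+1) x (n+1) linear system, exactly as in the code sketch above.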

Remark 4.1 Note that it is not always necessary to transform the random variable . The transformation is, for instance, convenient for establishing that the proposed methodology yields density approximants identical to those obtained in terms of certain orthogonal polynomials. If there exists a base density function, , whose support is the interval and whose tail behaviour is congruent to that of when is a nontrivial function of , then its parameters can be determined by equating to for, and with , where is an matrix whose ()th row is , . Alternatively, in that case, one may set and in Result 4.1.

Connection to Approximants Expressed in Terms of Orthogonal Polynomials

We now show that the unified methodology described in Result 4.1 provides approximants that are mathematically equivalent to those obtained from orthogonal polynomials whose associated weight function is proportional to a base density function. In addition, an alternative representation of the 's is given in terms of , the moments of , and quantities characterizing the type of orthogonal polynomials corresponding to the selected base density function. A general representation of the coefficients in the linear combination of orthogonal polynomials specified by equation (36) is also derived.

Let be a set of orthogonal polynomials defined on the interval , which satisfy the following orthogonality property:

where is a weight function, and let be a normalizing constant such that (the base density function defined in Result 4.1) integrates to one over the interval . On noting that the orthogonal polynomials are linearly independent [14, Corollary 8.7], we can write equation (31) as

where, in light of equation (33) and the fact that orthogonal polynomials are linear combinations of powers of , the 's can be obtained by equating to for . This yields the following equalities:

Equivalently, we obtain

where is the coefficient of in or . Thus, by virtue of the orthogonality property of the 's specified by equation (35), we obtain the following general representation for the coefficients in equation (36):

and

Now, since and , it follows that the coefficients in equation (31) correspond to the expression in parentheses in the following representation of :

Now, letting , , , , and denoting by and the density functions of and corresponding to those of and , respectively, we can approximate whose support is the interval by

where or equivalently

which corresponds to the representation of given in equation (32).
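For reference, the standard orthogonality argument can be sketched as follows; the symbols w, k, g_j, h_j, and c_j are chosen here for illustration and need not match the article's. Writing the approximant as f(x) \approx \big(w(x)/k\big)\sum_{j=0}^{n} c_j \, g_j(x), multiplying both sides by g_i(x), integrating over the support, and using the orthogonality relation \int g_i(x) \, g_j(x) \, w(x) \, dx = h_j \, \delta_{ij} gives

c_i \;=\; \frac{k}{h_i}\int g_i(x)\, f(x)\, dx \;=\; \frac{k}{h_i}\, E\!\left[g_i(X)\right] \;=\; \frac{k}{h_i}\sum_{\ell=0}^{i} a_{i\ell}\, \mu_\ell,

where a_{i\ell} denotes the coefficient of x^\ell in g_i(x). Each coefficient is therefore a linear combination of the first i moments of X, which is what makes the orthogonal-polynomial representation and the moment-matching system of Result 4.1 interchangeable.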

It should be pointed out that Reinking [19] proposed, under a somewhat different setup, general formulae for approximating density and distribution functions in terms of Laguerre, Jacobi, and Hermite polynomials. Arguably, the approach proposed in Result 4.1 is not only conceptually simpler than the one based on orthogonal polynomials, but also more general. The particular cases of density approximants expressed in terms of Laguerre, Legendre, Jacobi, and Hermite polynomials, which can all be equivalently obtained via Result 4.1, are individually considered in the remainder of this section.

Approximants Based on Laguerre Polynomials

Consider the approximants based on the Laguerre polynomials which were defined in Section 3. In that case, , , , , , , , is a orthogonal polynomial which is defined on the interval , and , as specified by equation (35), is equal to . It is then easily seen that the density expressions given in equations (42) and (28) coincide.

In this case, the base density function is that of a gamma random variable with parameters and 1. Note that after applying the transformation , the base density becomes a shifted gamma distribution with parameters and whose support is the interval .

Alternatively, we can obtain an identical density approximant by making use of Result 4.1, wherein is a gamma density function with parameters and 1, whose th moment, which is needed to determine the 's, is given by , .
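A corresponding sketch, with an illustrative shape-parameter name a and reusing the approximant function defined after Result 4.1, is:

(* Illustrative only: gamma(a, 1) base density with moments
   Gamma[a + j]/Gamma[a] = Pochhammer[a, j]. *)
gammaApproximant[a_, mu_List, x_] :=
  With[{n = Length[mu] - 1},
    approximant[
      Function[t, t^(a - 1) Exp[-t]/Gamma[a]],       (* base density *)
      Table[Pochhammer[a, j], {j, 0, 2 n}],          (* its moments *)
      mu, x]
  ]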

For instance, Laguerre series expansions were respectively derived by [20] and [21] for approximating the density functions of quadratic forms in normal variables and those of noncentral and random variables.

Approximants Based on Legendre Polynomials

First, we note that whenever the finite interval is mapped onto the interval , the requisite affine transformation is

with and .

Now, considering the approximants based on Legendre polynomials discussed in Section 2, which are defined on the interval , we have , , , , is a orthogonal polynomial, and . It is then easily seen that as given in equation (42) yields the density function of specified in equation (13). The same density approximant can also be obtained by making use of Result 4.1. In this case, ,
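A corresponding sketch for this case, under the standard Legendre setup in which the base density is uniform on (-1, 1) (an assumption stated here for illustration), is:

(* Illustrative only: uniform base density on (-1, 1), that is, the Legendre case.
   A point t in a finite interval (a, b) is first mapped to (-1, 1) via
   x = (2 t - a - b)/(b - a), and mu must contain the moments of the transformed variable. *)
legendreApproximant[mu_List, x_] :=
  With[{n = Length[mu] - 1},
    approximant[
      Function[t, 1/2],                                       (* base density *)
      Table[If[EvenQ[j], 1/(j + 1), 0], {j, 0, 2 n}],         (* moments of Uniform(-1, 1) *)
      mu, x]
  ]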

Approximants Based on Jacobi Polynomials

In order to approximate densities for which a beta density function is suitable as a base density function, we shall make use of the following alternative form of the Jacobi polynomials:

defined on the interval , wherein denotes an th-degree Jacobi polynomial in the variable with parameters and , , and . In this case, the support of , the random variable of interest, is the finite interval ; thus and in the linear transformation leading to equation (42). The associated weight function is so that , and the base density is that of a random variable, that is,

whose th moment is given by

The parameters α and β can be determined as follows:

see [22, 44] and, in this case,

[19]. As illustrated in the next example, Result 4.1, as well as equations (42) and (43), yields identical density approximants.
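One way of carrying out this parameter determination is sketched below; the formulas are the standard method-of-moments solutions for a beta distribution on (0, 1) and are given here for illustration only.

(* Illustrative only: shape parameters of a beta base density on (0, 1) obtained by
   matching the first two raw moments mu1 = E[Y] and mu2 = E[Y^2].
   The jth moment of Beta[a, b] is Pochhammer[a, j]/Pochhammer[a + b, j]. *)
betaParameters[mu1_, mu2_] :=
  Module[{v = mu2 - mu1^2, s},
    s = mu1 (1 - mu1)/v - 1;      (* equals alpha + beta *)
    {mu1 s, (1 - mu1) s}          (* {alpha, beta} *)
  ]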

Jacobi polynomial expansions were used by Durbin and Watson to approximate certain percentiles of their well-known test statistic [23].

Example 5

Example 1 is revisited by making use of a beta density as a base density function in Result 4.1 or, equivalently, by resorting to an approximant expressed in terms of Jacobi polynomials. Since the base density function already provides a good approximation in this case, fewer moments are required than in the case of the Legendre polynomial approximant (8 as opposed to 13) in order to achieve a similar degree of accuracy.

Figure 8. Exact and approximate (dashed line) PDFs. [S5 in the Appendix]

Example 6

Consider Wilks' likelihood ratio statistic, , where is the error sum of squares matrix and is the hypothesis sum of squares matrix, which is used for testing linear hypotheses on regression coefficients on the basis of -dimensional observation vectors. Assuming standard normal theory, and are independent Wishart matrices. As shown in [24, Section 8.4], when the mean vectors are assumed to be of the form , where with of dimension , , and the 's are given -dimensional vectors, , the th moment of the statistic for testing the null hypothesis that is a given matrix is

Clearly, the support of Λ, and thus that of U, is the interval . As pointed out by Mathai [25], who obtained a general representation of the exact density function of U as an inverse Mellin transform, a wide array of techniques has been used to determine the density of U for some particular values of , , and . Result 4.1 provides a means of obtaining approximants that can be viewed as exact for all intents and purposes. In any case, many of the so-called exact representations available in the literature are expressed in terms of integrals that have to be evaluated numerically or in terms of infinite series that have to be truncated.

Figures 9 and 10 show the approximate density and distribution functions of U for , , , and when one lets (dashed line) and (solid line). Mathai [25] determined that for , , and , the 95th and 99th percentiles of the distribution are respectively 0.87825 and 0.94719. As evaluated in the Appendix, and , where denotes a percentile corresponding to the point , which is approximated on the basis of moments. It is seen from the graph that an adequate approximant can be obtained from two moments in this case. In fact, and .

Figure 9. PDF of U for (dashed line) and . [P6 in the Appendix]

Figure 10. CDF of U for (dashed line) and . [C6 in the Appendix]

Approximants Based on Hermite Polynomials

We can approximate densities whose tail behaviour is congruent to that of a normal density function by means of the modified Hermite polynomials given by

and defined on the interval , where denotes a th-degree Hermite polynomial in the variable in Mathematica notation. The weight function associated with the modified Hermite polynomials is proportional to the density function of a standard Gaussian random variable. Thus, the requisite transformation is with and , the normalizing constant is , and the base density function is that of a standard normal random variable, that is,

whose th moment is given by

Moreover, in this case,

The same density approximants are obtained whether one makes use of equation (32) or (42). They are also known as (type-A) Gram-Charlier expansions. A methodology for determining the Hermite polynomial coefficients in the expansions in terms of the moments is presented in [3, Section 5.4], where the advantages and drawbacks of using such approximations are also discussed. Conditions ensuring the convergence of the approximants are available from Cramér [26].
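A corresponding sketch, again reusing the approximant function defined after Result 4.1, is:

(* Illustrative only: standard normal base density, that is, the Hermite
   (type-A Gram-Charlier) case; odd moments vanish, even moments equal (j - 1)!!. *)
hermiteApproximant[mu_List, x_] :=
  With[{n = Length[mu] - 1},
    approximant[
      Function[t, Exp[-t^2/2]/Sqrt[2 Pi]],                    (* base density *)
      Table[If[EvenQ[j], (j - 1)!!, 0], {j, 0, 2 n}],         (* moments of N(0, 1) *)
      mu, x]
  ]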

Example 7

Consider an equally weighted mixture of two Gaussian distributions with parameters and , whose density function is plotted as a solid line in Figure 11. On the basis of 30 moments, identical density approximants (represented by a dashed line) are obtained whether we make use of equation (42) or Result 4.1.

Figure 11. Exact and approximate (dashed line) PDFs. [S7, S71, or S7a in the Appendix]
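The following usage sketch reproduces the flavour of this computation with hypothetical mixture parameters (the values used in the article are not restated here, and the ones below are chosen so that the mixture's tails are lighter than those of the standard normal base density), using the hermiteApproximant sketch defined above:

(* Hypothetical mixture parameters, for illustration only. *)
mix = MixtureDistribution[{1/2, 1/2},
        {NormalDistribution[-3/4, 1/2], NormalDistribution[3/4, 1/2]}];
muList = Table[Moment[mix, j], {j, 0, 30}];    (* exact moments of the mixture *)
approx = hermiteApproximant[muList, x];
Plot[{PDF[mix, x], approx}, {x, -3, 3}, PlotStyle -> {Automatic, Dashed}]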



     