The Mathematica Journal, Volume 9, Issue 4


Skew Densities and Ensemble Inference for Financial Economics
H. D. Vinod

4. Maximum Likelihood Estimation

Next, we activate mathStatica's SuperLog function for the subsequent calculation of the log likelihood.

Now we are ready to define logLTheta, the log of the likelihood function, formed from the product of the densities over the observations.
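A minimal sketch of such a definition, assuming Azzalini's location-scale skew-normal (SN) density f(y) = (2/omega) phi(z) Phi(lambda z) with z = (y - xi)/omega; the names snLogPDF and logLTheta's argument structure, and the use of omega for the scale parameter, are illustrative choices rather than the article's own:

    phi[z_] := Exp[-z^2/2]/Sqrt[2 Pi];       (* standard normal pdf *)
    PhiCDF[z_] := (1 + Erf[z/Sqrt[2]])/2;    (* standard normal cdf *)
    (* log density of one observation under the location-scale SN model *)
    snLogPDF[y_, xi_, omega_, lambda_] :=
      Log[2/omega] + Log[phi[(y - xi)/omega]] + Log[PhiCDF[lambda (y - xi)/omega]];
    (* log likelihood: sum of the log densities over the observations *)
    logLTheta[ys_List, xi_, omega_, lambda_] :=
      Total[snLogPDF[#, xi, omega, lambda] & /@ ys];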

The ML estimates of the three parameters Xi, Lambda, and the scale parameter require differentiation of logLTheta, leading to three so-called score functions.
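As a sketch under the same assumed density as above, the per-observation score functions can be obtained with D; the full scores are these derivatives summed over the observations:

    (* score functions for a single observation y *)
    scoreXi     = D[snLogPDF[y, xi, omega, lambda], xi];
    scoreOmega  = D[snLogPDF[y, xi, omega, lambda], omega];
    scoreLambda = D[snLogPDF[y, xi, omega, lambda], lambda];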

These score equations are too complicated to solve analytically for the ML estimators by setting the scores equal to zero, so we must use numerical methods on our data.

The observed log likelihood is obtained by simply inserting the data values in logLTheta.
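Continuing the sketch, the observed log likelihood is obtained by supplying the data; the returns below are synthetic stand-ins for the fund's monthly returns, which are not reproduced here, and obsLogL is an illustrative name:

    SeedRandom[1];
    returns = RandomVariate[NormalDistribution[0.7, 4.], 120];  (* synthetic stand-in data *)
    obsLogL = logLTheta[returns, xi, omega, lambda];            (* a function of xi, omega, lambda only *)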

We then use the FindMaximum function to numerically maximize the observed log likelihood, as follows:
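A sketch of the call, with starting values chosen purely for illustration (the article's starting points are not shown):

    sol = FindMaximum[obsLogL, {{xi, 0.5}, {omega, 1.}, {lambda, -1.}}]
    (* sol[[1]] is the maximized log likelihood; sol[[2]] is the list of rules for xi, omega, lambda *)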

In the preceding output, the first component of the solution is the maximized value of the log likelihood. The second component, sol[[2]], gives the ML solution for the three parameters. Mathematica's warning message suggests that we should try other methods and starting points. We will try another search, now using Method->Gradient and slightly different starting points.
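A sketch of the second attempt, again with illustrative starting values:

    sol1 = FindMaximum[obsLogL, {{xi, 0.8}, {omega, 1.2}, {lambda, -0.5}},
      Method -> "Gradient"]  (* the article's choice; current versions may prefer "ConjugateGradient" *)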

The new output in sol1 is already an improvement. But is it the maximum? By the first-order conditions, the gradients of the observed logLTheta evaluated at sol1[[2]] should be zero.
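One way to carry out this check in plain Mathematica, assuming the obsLogL sketched above:

    (* gradient of the observed log likelihood, evaluated at the sol1 estimates *)
    grad1 = D[obsLogL, {{xi, omega, lambda}}] /. sol1[[2]]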

These values of the gradient clearly are not close enough to zero. Similarly, the second-order conditions for a maximum require that the matrix of second-order partials be negative definite. Hessian calculates this matrix, and Eigenvalues calculates its eigenvalues as follows:
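Hessian is a mathStatica function; a plain-Mathematica sketch of the same check, using the names introduced above, is:

    (* matrix of second-order partials at the sol1 estimates, and its eigenvalues *)
    hess1 = D[obsLogL, {{xi, omega, lambda}, 2}] /. sol1[[2]];
    Eigenvalues[hess1]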

Unfortunately, the last eigenvalue, although small, does not ensure negative definiteness. The issue is whether the observed log likelihood is concave near the solution. McCullough and Vinod [7, 8] recommend searching near all ML solutions to be certain. Fortunately, Mathematica makes these searches relatively easy to implement by specifying an alternative method. Hence we try the Newton estimation method.
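A sketch of the Newton search, once more with illustrative starting values:

    sol2 = FindMaximum[obsLogL, {{xi, 0.8}, {omega, 1.2}, {lambda, -0.5}},
      Method -> "Newton"]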

Is this solution (denoted by sol2) superior to the first one? The maximized value of the log likelihood has increased, so the solution has clearly improved. The solution for the location parameter Xi is not directly comparable to the sample mean 0.7387 for our mutual fund because of the scale change. The comparable value is obtained as:
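The article's exact calculation is not shown; one natural candidate for a value comparable to the sample mean is the mean implied by the fitted SN density, E[Y] = xi + omega*delta*Sqrt[2/Pi] with delta = lambda/Sqrt[1 + lambda^2], sketched below under that assumption:

    (* mean of the fitted location-scale SN density, evaluated at the sol2 estimates *)
    snMean = (xi + omega (lambda/Sqrt[1 + lambda^2]) Sqrt[2/Pi]) /. sol2[[2]]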

To determine if sol2 is good enough we study its properties as before.
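The same checks as before, now at the sol2 estimates (a sketch using the names introduced above):

    D[obsLogL, {{xi, omega, lambda}}] /. sol2[[2]]                   (* gradient: should be near zero *)
    Eigenvalues[D[obsLogL, {{xi, omega, lambda}, 2}] /. sol2[[2]]]   (* all negative if concave *)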

This solution has the desirable property that the gradients are very close to zero. The shape of the log likelihood near this second ML solution is concave, since the Hessian is negative definite. For a graphical assessment of the goodness-of-fit of the model to the data, we plot the data (solid line) and the fitted location-scale SN density (dashed line) side by side in Figure 3.

Figure 3. The data (solid line) and the fitted location-scale SN density (dashed line).
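The article does not show the plotting code; one way to produce a comparable picture, using a kernel-smoothed density of the data as the solid curve, is sketched below:

    skd = SmoothKernelDistribution[returns];                      (* smoothed density of the data *)
    snFit[y_] = Exp[snLogPDF[y, xi, omega, lambda]] /. sol2[[2]]; (* fitted SN density *)
    Plot[{PDF[skd, y], snFit[y]}, {y, Min[returns], Max[returns]},
      PlotStyle -> {Automatic, Dashed}, AxesLabel -> {"return", "density"}]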

Since this fit is visually close, we will accept sol2 as our solution. The next step is to consider the statistical inference for the Newton solution of the three parameters. It is well known that the negative of the inverse of the Hessian matrix evaluated at the solution provides the asymptotic variance-covariance matrix of the ML parameter estimates.
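A sketch of that computation with the names used above:

    hess2 = D[obsLogL, {{xi, omega, lambda}, 2}] /. sol2[[2]];
    cov = -Inverse[hess2]    (* asymptotic variance-covariance matrix of the ML estimates *)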

The standard errors of the ML estimates are the square roots of the diagonal elements of the preceding matrix. Hence we compute Student's t statistics as:
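A sketch, continuing with cov from above:

    estimates = {xi, omega, lambda} /. sol2[[2]];
    stdErrors = Sqrt[Diagonal[cov]];   (* square roots of the diagonal elements *)
    tStats = estimates/stdErrors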

Note that the t statistics are scale-free and suggest statistical significance of all three parameters at conventional confidence levels. We conclude that this mutual fund yields a statistically significant positive return even after allowing for the negative skewness in the data. The ML estimate of the skewness parameter Lambda is also of interest in financial economics in the following sense. Note that, except for the change of location and scale, the density for our fund is close to the dotted line in Figure 3. If two funds have identical mean and variance but distinct skewnesses, the investor is better off choosing the fund with the larger (more positive) skewness.

The asymptotic inference may not be satisfactory in the relatively small samples common in finance. In future work, we expect to develop a new maximum entropy algorithm (ME-alg) for such inference, along the lines of the time series inference suggested in [6].



     