Volume 9, Issue 2


Expectation Value of the Radioactive Decay Constant

Andreas Dieckmann

In radioactive decay, the probability density for the decay of a single particle at time t is given by an exponential distribution, p(t) = (1/τ) exp(−t/τ), with τ being the mean lifetime characteristic of the kind of particle under consideration. Alternatively, the probability may be expressed by using the decay constant λ = 1/τ: p(t) = λ exp(−λ t). Imagine that N particles decay at times t_i, i = 1, …, N. The probability of a given set of times can be determined from the likelihood function,

    L(λ) = ∏_{i=1}^{N} λ exp(−λ t_i) = λ^N exp(−λ Σ_i t_i),

or, in Mathematica notation (with n the number of decays),

    likelihood = Product[lambda Exp[-lambda t[i]], {i, 1, n}]

The "best" estimator τ̂ of the true τ (calculated from the measured times) is then found according to maximum likelihood (ML) theory: equate the derivative of the likelihood with respect to τ, evaluated at τ = τ̂, to zero.

Then solve for τ̂.

The result is that τ̂ = (1/N) Σ_i t_i is the arithmetic mean of the measured times t_i.
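The ML derivation can be sketched symbolically for a small concrete case (a Python/SymPy sketch; the article itself works in Mathematica, and the three measured times here are illustrative):

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
t = sp.symbols('t1 t2 t3', positive=True)  # N = 3 measured decay times

# Likelihood: product of exponential densities with mean lifetime tau
L = sp.prod([sp.exp(-ti / tau) / tau for ti in t])

# ML estimate: zero of the derivative of the log-likelihood with respect to tau
tau_hat = sp.solve(sp.diff(sp.log(L), tau), tau)[0]
print(tau_hat)  # the arithmetic mean of t1, t2, t3
```

Differentiating the log-likelihood rather than the likelihood itself gives the same zero but keeps the algebra linear in Σ t_i.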

To get the expectation value for this statistic, ⟨τ̂⟩, compute the integral

    ⟨τ̂⟩ = ∫_0^∞ ⋯ ∫_0^∞ [(1/N) Σ_i t_i] ∏_{j=1}^{N} (1/τ) exp(−t_j/τ) dt_1 ⋯ dt_N.    (2)

Since

    ∫_0^∞ exp(−a t) dt = 1/a,    (3)

we have that

    ∫_0^∞ t exp(−a t) dt = 1/a².    (4)

The factor t_i in equation (2) can be obtained by parametric differentiation of equation (3) with respect to a. Setting a = 1/τ, each term with a factor t_i contributes τ, while the remaining N − 1 factors integrate to 1.

Hence

    ⟨τ̂⟩ = (1/N) · N · τ = τ.

So the prescription of calculating the value of τ as the arithmetic mean of the measured times is a good one (the estimator is unbiased), which is, of course, well known and found in many textbooks [2].
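The steps above can be spot-checked symbolically (a SymPy sketch, not from the article):

```python
import sympy as sp

t, a, tau = sp.symbols('t a tau', positive=True)

# Equation (3): integral of exp(-a t) over t in (0, oo) is 1/a
eq3 = sp.integrate(sp.exp(-a * t), (t, 0, sp.oo))
assert eq3 == 1 / a

# Parametric differentiation with respect to a brings down a factor of t: equation (4)
eq4 = sp.integrate(t * sp.exp(-a * t), (t, 0, sp.oo))
assert eq4 == -sp.diff(eq3, a)

# With a = 1/tau, the mean decay time is tau, so <tau_hat> = tau
mean_t = sp.integrate(t * sp.exp(-t / tau) / tau, (t, 0, sp.oo))
print(mean_t)
```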

Similarly, it is easy to show that the ML estimator λ̂ of the decay constant, λ = 1/τ, is simply λ̂ = N / Σ_i t_i.

To compute the expectation value of λ̂ = N/S, where S = t_1 + ⋯ + t_N,

    ⟨λ̂⟩ = N λ^N ∫_0^∞ ⋯ ∫_0^∞ (1/S) exp(−λ S) dt_1 ⋯ dt_N,    (5)

we again use equation (3), now integrating over the parameter:

    (1/S) exp(−λ S) = ∫_λ^∞ exp(−μ S) dμ.

Computing the product of integrals is straightforward:

    ∫_0^∞ ⋯ ∫_0^∞ exp(−μ S) dt_1 ⋯ dt_N = ∏_{i=1}^{N} ∫_0^∞ exp(−μ t_i) dt_i = μ^(−N).

Integrating ∫_λ^∞ μ^(−N) dμ = λ^(1−N)/(N − 1) yields

    ⟨λ̂⟩ = N λ^N · λ^(1−N)/(N − 1) = N λ / (N − 1).
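For N = 2 this can be checked directly, using the fact that the sum S = t_1 + t_2 of two independent exponential times is gamma distributed with density λ² S exp(−λS) (a SymPy sketch, not from the article):

```python
import sympy as sp

lam, S = sp.symbols('lambda S', positive=True)

# For N = 2, S = t1 + t2 has the gamma density lam**2 * S * exp(-lam*S)
density = lam**2 * S * sp.exp(-lam * S)
assert sp.integrate(density, (S, 0, sp.oo)) == 1  # properly normalized

# <lambda_hat> = <N/S> with N = 2; should equal N*lambda/(N - 1) = 2*lambda
expect = sp.integrate((2 / S) * density, (S, 0, sp.oo))
print(expect)
```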

Similarly, to compute the expectation value of λ̂²,

    ⟨λ̂²⟩ = N² λ^N ∫_0^∞ ⋯ ∫_0^∞ (1/S²) exp(−λ S) dt_1 ⋯ dt_N,

applying the parametric integration twice yields

    (1/S²) exp(−λ S) = ∫_λ^∞ ∫_μ^∞ exp(−ν S) dν dμ.

Integrating ν^(−N) twice, ∫_λ^∞ ∫_μ^∞ ν^(−N) dν dμ = λ^(2−N)/[(N − 1)(N − 2)], yields

    ⟨λ̂²⟩ = N² λ² / [(N − 1)(N − 2)].

Now we can compute the variance of λ̂:

    Var(λ̂) = ⟨λ̂²⟩ − ⟨λ̂⟩² = N² λ² / [(N − 1)² (N − 2)].
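The variance algebra is easy to verify symbolically (a SymPy sketch, not from the article):

```python
import sympy as sp

N, lam = sp.symbols('N lambda', positive=True)

mean = N * lam / (N - 1)                             # <lambda_hat>
second_moment = N**2 * lam**2 / ((N - 1) * (N - 2))  # <lambda_hat**2>

# Var = <lambda_hat**2> - <lambda_hat>**2
var = sp.simplify(second_moment - mean**2)
print(var)  # N**2 * lambda**2 / ((N - 1)**2 * (N - 2))
```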

The N-dimensional integral in equation (5) represents the expectation value for the estimator of the radioactive decay constant, calculated from N measurements. Scaling the variables in equation (5), t_i → x_i/λ, we see that

    ⟨λ̂⟩ = N λ ∫_0^∞ ⋯ ∫_0^∞ [1/(Σ_i x_i)] exp(−Σ_i x_i) dx_1 ⋯ dx_N.

From this we obtain the following identity, which is independent of λ:

    N ∫_0^∞ ⋯ ∫_0^∞ [1/(Σ_i x_i)] exp(−Σ_i x_i) dx_1 ⋯ dx_N = N / (N − 1).

To check this result for a range of N, instead of computing the integral directly via numerical integration, create exponentially distributed random numbers t_i and calculate the numerical averages of λ̂ = N / Σ_i t_i.

Plotting these averages against N and comparing with the analytical expression N/(N − 1), we see that the agreement is excellent.
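The same Monte Carlo check can be sketched in NumPy (the article uses Mathematica; the sample sizes and seed here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0         # true decay constant; the scaling argument shows the ratio is independent of lam
groups = 100_000  # number of simulated experiments per value of N

for N in range(2, 7):
    # decay times for `groups` experiments of N decays each
    t = rng.exponential(scale=1 / lam, size=(groups, N))
    lam_hat = N / t.sum(axis=1)       # ML estimator for each experiment
    ratio = lam_hat.mean() / lam
    print(N, ratio, N / (N - 1))      # simulated average vs. analytical N/(N - 1)
```

The simulated ratios track N/(N − 1) closely, confirming that λ̂ overestimates λ by that factor.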

A general result for a wide class of related integrals involving the kth power of the sum S = Σ_i t_i is easily obtained. Since

    ∫_0^∞ t^p exp(−μ t) dt = p! / μ^(p+1),

integrating the product of N such terms yields

    ∫_0^∞ ⋯ ∫_0^∞ [∏_i t_i^(p_i)] exp(−μ S) dt_1 ⋯ dt_N = (∏_i p_i!) / μ^(N + Σ_i p_i).

Integrating both sides with respect to μ, k times, from λ to ∞, and using induction on

    ∫_λ^∞ μ^(−m) dμ = λ^(1−m) / (m − 1),

we find the following formula (using M = N + Σ_i p_i),

    ∫_0^∞ ⋯ ∫_0^∞ [∏_i t_i^(p_i) / S^k] exp(−λ S) dt_1 ⋯ dt_N = (∏_i p_i!) (M − k − 1)! / [(M − 1)! λ^(M−k)],    (13)

valid for integers p_i ≥ 0, N ≥ 1, 0 ≤ k ≤ M − 1. This result also holds for nonintegral exponents, with the factorials replaced by gamma functions, under suitable conditions on p_i, k, N, and λ. For example, with N = 1,

    ∫_0^∞ [t^p / t^k] exp(−λ t) dt = Γ(p − k + 1) / λ^(p−k+1).
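Instances of equation (13) can be spot-checked by Monte Carlo, reading the left-hand side with λ = 1 as an expectation over unit exponential variables (a NumPy sketch, not from the article). For N = 2 the formula predicts E[t₁/(t₁ + t₂)] = 1!·0!·1!/2! = 1/2 and E[t₁t₂/(t₁ + t₂)] = 1!·1!·2!/3! = 1/3:

```python
import numpy as np

rng = np.random.default_rng(1)
t1, t2 = rng.exponential(size=(2, 200_000))  # unit exponential samples, lambda = 1

# p = (1, 0), k = 1: equation (13) gives 1/2 (also clear by symmetry)
print(np.mean(t1 / (t1 + t2)))

# p = (1, 1), k = 1: equation (13) gives 1/3
print(np.mean(t1 * t2 / (t1 + t2)))
```

The first case doubles as a sanity check, since t₁/(t₁ + t₂) must average to 1/2 by the symmetry of the two decay times.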

Scaling the variables in equation (13), t_i → x_i/λ, is equivalent to putting λ = 1, that is,

    ∫_0^∞ ⋯ ∫_0^∞ [∏_i x_i^(p_i) / (Σ_i x_i)^k] exp(−Σ_i x_i) dx_1 ⋯ dx_N = (∏_i p_i!) (M − k − 1)! / (M − 1)!.    (14)

Noting that

    [1 / Γ(p + 1)] ∫_0^x t^p exp(−t) dt = Q(p + 1, 0, x),

where Q(a, z0, z1) is the generalized regularized incomplete gamma function (GammaRegularized in Mathematica), we can test equation (14) for N = 2 with random numbers x_i distributed according to the inverse of this cumulative distribution, calculating the numerical averages of 1/(x_1 + x_2)^k for a range of k.

Generating inverse values of this distribution is time consuming, so we save the computed values using dynamic programming.

Use Partition, Tr (twice), and Length to compute the averages efficiently. Here are the results for the chosen p_i and a range of k.

Plotting these data against k and comparing with the analytical expression (M − k − 1)!/(M − 1)! = Γ(M − k)/Γ(M), again we see that the agreement is excellent.
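This final check can be sketched in Python rather than Mathematica, with `scipy.special.gammaincinv` inverting the regularized incomplete gamma cumulative distribution; the choice p₁ = p₂ = 1 (so M = 4) is illustrative:

```python
import numpy as np
from math import gamma
from scipy.special import gammaincinv

rng = np.random.default_rng(2)
p1 = p2 = 1       # exponents in the density x^p * exp(-x) / Gamma(p + 1)
M = 2 + p1 + p2   # M = N + sum of the p_i, with N = 2
n = 200_000

# Inverse-CDF sampling: the CDF of x^p * exp(-x) / Gamma(p+1) is gammainc(p+1, x)
x1 = gammaincinv(p1 + 1, rng.uniform(size=n))
x2 = gammaincinv(p2 + 1, rng.uniform(size=n))

for k in range(3):
    simulated = np.mean(1.0 / (x1 + x2) ** k)
    analytic = gamma(M - k) / gamma(M)  # equation (14) prediction
    print(k, simulated, analytic)
```

In practice `numpy.random.gamma` samples this distribution directly; the inverse-CDF route is shown only because it mirrors the article's construction.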