This post continues the preceding post on maximum likelihood estimation. The preceding post focuses on calculating the MLE when there is complete data (or individual data). This post focuses on calculating the MLE in the other data scenarios: grouped data, censored data and truncated data.
Individual data refers to a data set where the exact value of every data point in the data set is completely known. Grouped data refers to a summarized data set that consists of frequency data, i.e. the counts that fall into a set of intervals.
Censored data refers to a data set where information on some of the data points is only partially known. For example, a data point exceeding a limit u is recorded as u (the data point is right censored or censored from above). A data point lower than a limit is recorded as that limit (the data point is left censored or censored from below). A handy example of a censored data set is a reliability study where the times at failure for machines are recorded during a 5-year period. In this study, the time at failure for any machine that is still operating at the end of the study is recorded as 5 even though the machine may continue to work for a number of more years.
Truncated data refers to a data set where data values in some intervals are not observed and are thus ignored. For example, in an insurance coverage with a deductible d, when considering payment data, any loss below d is not included in the calculation. This is an example of a data set that is truncated below. Any data set such that values above a certain threshold are not observed or collected is truncated above.
For censored data and truncated data, we focus on claim data with a policy limit (censored from above) or on claim data with a deductible (truncated from below) or on claim data with both a policy limit and a deductible.
Several examples (Example 3, Example 4, Example 6 and Example 7) concern the Pareto distribution. The Pareto distribution used here is also called Pareto type II distribution. For useful facts about Pareto type II, see this post in a companion blog.
Grouped Data
In this scenario, the data points are not available individually. Instead, we know the counts of the data points that fall into a set of intervals. Unlike the case of complete data, the likelihood of an observation is not the value of the density function. It is the difference of two values of the cumulative distribution function (CDF), accounting for the probability of a data point falling into an interval. The rest of the procedure is the same as before: find the likelihood function, take the log to get the log-likelihood function, then take the derivative (or partial derivatives) and set it equal to zero. The maximum likelihood estimates are the solutions of the resulting equations. This is illustrated in Example 1.
Example 1
The following claim data has been collected from a large group of insureds.
Interval          Frequency
(0, 5)            10
(5, 10)           2
                  6
                  1
                  1
Total             20
The exponential distribution with mean θ is fitted to the grouped data. Calculate the maximum likelihood estimate of the parameter θ.
Note that the density is f(x) = (1/θ) e^(−x/θ). The CDF is F(x) = 1 − e^(−x/θ).
Any observation that falls into the interval (0, 5) has likelihood F(5) − F(0) = 1 − e^(−5/θ), accounting for the probability of an observation being in that interval. The likelihood for the interval (0, 5) is then (1 − e^(−5/θ))^10 since there are 10 observations in it. Any observation that falls into the interval (5, 10) has likelihood F(10) − F(5) = e^(−5/θ) − e^(−10/θ). The likelihood for that interval is (e^(−5/θ) − e^(−10/θ))^2. Continue with the same process for the remaining intervals. The likelihood function is the product of the likelihoods of the intervals.
The likelihood function can be further simplified before obtaining the log-likelihood function.
Solving the equation obtained by setting the derivative of the loglikelihood function equal to zero gives the maximum likelihood estimate.
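The grouped-data calculation can be checked by maximizing the log-likelihood numerically. The sketch below is not from the original post: since only the first two intervals are stated explicitly above, the remaining boundaries (10, 15), (15, 20) and (20, ∞) are assumptions made purely for illustration, and the resulting estimate applies only under that assumption.

```python
import math

# Numerical MLE for the grouped exponential fit in Example 1.
# Only the first two intervals are stated in the post; the remaining
# boundaries below are ASSUMPTIONS for illustration.
intervals = [(0, 5), (5, 10), (10, 15), (15, 20), (20, math.inf)]
counts = [10, 2, 6, 1, 1]

def cdf(x, theta):
    # Exponential CDF with mean theta; F(inf) = 1
    return 1.0 if math.isinf(x) else 1 - math.exp(-x / theta)

def loglik(theta):
    # Grouped-data log-likelihood: each interval contributes
    # count * log(F(b) - F(a))
    return sum(n * math.log(cdf(b, theta) - cdf(a, theta))
               for (a, b), n in zip(intervals, counts))

# Golden-section search (the log-likelihood is unimodal in theta)
lo, hi = 0.5, 100.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if loglik(m1) < loglik(m2):
        lo = m1
    else:
        hi = m2
theta_hat = (lo + hi) / 2
print(round(theta_hat, 4))   # 7.7596 under the assumed intervals
```

Under the assumed intervals the answer also has a closed form: with p = e^(−5/θ), the log-likelihood reduces to 19 ln(1 − p) + 21 ln p, maximized at p = 21/40.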
The most obvious difference from the case of individual-data MLE is that the likelihood function is a product of differences of CDF values. Otherwise, the same process applies. For some distributions, maximum likelihood estimation is hard to carry out for grouped data because the CDF is hard to manipulate mathematically. The method of moments is likewise difficult to carry out for grouped data.
Censored Data
An example of censored data would be an insurance coverage with a policy limit u. Any loss exceeding the limit is recorded as the value u. The likelihood of this data point is then 1 − F(u), the probability of a data point exceeding u. The rest of the MLE procedure works the same as before. To contrast, if a data point is censored below at a threshold t, then the likelihood of the data point is F(t).
Example 2
Observed claims are: 5, 6, 9, 15, 23. In addition, there are two claims exceeding the policy limit of 25.
An exponential distribution with mean θ is fitted to the claim data. Calculate the maximum likelihood estimate of θ.
For the individual data points, the likelihood is the density value (1/θ) e^(−x/θ). For the censored data points, the likelihood is 1 − F(25) = e^(−25/θ), the probability of exceeding the limit. The following is the likelihood function.

L(θ) = (1/θ)^5 e^(−(5 + 6 + 9 + 15 + 23)/θ) · (e^(−25/θ))^2 = θ^(−5) e^(−108/θ)
Setting the derivative of the log-likelihood ln L(θ) = −5 ln θ − 108/θ equal to zero gives the maximum likelihood estimate 108/5 = 21.6.
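The censored exponential estimate has a well-known closed form: the sum of all recorded values (censored ones recorded at the limit) divided by the number of uncensored observations. A minimal sketch, with variable names of my own choosing:

```python
import math

# MLE for Example 2: exponential with right censoring at the limit 25.
claims = [5, 6, 9, 15, 23]   # fully observed claims
censored = [25, 25]          # two claims recorded at the policy limit

# Uncensored claims contribute the density (1/theta) e^(-x/theta);
# censored claims contribute the survival probability e^(-25/theta).
# Setting the derivative of the log-likelihood to zero gives:
theta_hat = (sum(claims) + sum(censored)) / len(claims)
print(theta_hat)   # 21.6

# Sanity check: the log-likelihood -5 ln(theta) - 108/theta peaks there.
def loglik(theta):
    return -len(claims) * math.log(theta) - (sum(claims) + sum(censored)) / theta

assert loglik(theta_hat) > max(loglik(theta_hat - 1), loglik(theta_hat + 1))
```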
Truncated Data
We center the discussion on the scenario of a coverage with a deductible. Truncation is due to the fact that payment on a claim is conditional on the loss exceeding the deductible. Suppose that the insurance coverage has a deductible d and that n claims have been observed (individual data). We assume that losses below d are not submitted, so all observations are above the deductible d. There are two ways of applying maximum likelihood estimation to such truncated claim data.
Approach 1. Work with the claim data as is, without any modification. Then the resulting maximum likelihood fitted distribution would be for claim data before applying any deductible. The mean of this fitted distribution would be the mean claim cost without a deductible. Of course, we can then estimate from this fitted distribution the claim cost of imposing a deductible.
Approach 2. This approach is called shifting since we subtract the deductible d from each observed claim x, producing the payment x − d. The resulting maximum likelihood fitted distribution would be for the claim payment reflecting a deductible of d. The mean of this fitted distribution would be the mean claim cost per payment (over all losses exceeding the deductible d). For this reason, the original mean claim cost (without a deductible) cannot be recovered from this fitted distribution. However, imposing a deductible of c on this fitted distribution would be equivalent to imposing a deductible of d + c on the original loss distribution.
Essentially, in approach 1 we fit a distribution to the truncated claim data (unmodified by the deductible). The resulting maximum likelihood fitted distribution is for the original loss distribution before any deductible is applied. In the second approach we fit a distribution to the claim payment data (after shifting the data by the deductible). The resulting maximum likelihood fitted distribution is for the claim payment distribution reflecting the deductible used in the shifting. Which approach to use depends on whether we want to fit a distribution to the claim data before the deductible or to the claim payment data (with the deductible already reflected).
To illustrate how these two approaches work, we fit the Pareto distribution to a set of claim data in both ways (Example 3 and Example 4). We round out the discussion on truncated data with an example using exponential distribution (Example 5).
Example 3
An insurance coverage has a deductible of 5. The following claims are observed: 12, 8, 14, 17, 13.
A Pareto distribution with θ = 20 and unknown shape parameter α is fitted to these data. Determine the maximum likelihood estimate of α. We wish the fitted Pareto distribution to be an estimated model for claim cost before the deductible, so we do not subtract the deductible of 5 from the data points (i.e. approach 1). We discuss several ways of using this fitted Pareto distribution to estimate claim costs.
The density function and the CDF of the Pareto distribution are:

f(x) = α θ^α / (x + θ)^(α+1),    F(x) = 1 − (θ/(x + θ))^α
Because we assume that we do not have any information about claims below 5, observing a claim is conditional on the underlying loss exceeding 5. Thus the likelihood of a claim is a conditional probability. The likelihood of a claim amount x is f(x)/[1 − F(5)]. Plugging in the Pareto information, the following is the likelihood of a claim x.

α (5 + θ)^α / (x + θ)^(α+1) = α · 25^α / (x + 20)^(α+1)
There are 5 data points. The likelihood function is then the product of these 5 likelihood values.
The usual steps produce the maximum likelihood estimate for α: setting the derivative of the log-likelihood to zero gives α = 5 / [ln(32/25) + ln(28/25) + ln(34/25) + ln(37/25) + ln(33/25)] ≈ 3.7388.
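As a quick cross-check, the estimate can be reproduced in a few lines. The sketch below takes θ = 20 as given (as in the example) and estimates only α; the variable names are mine.

```python
import math

# Approach 1 MLE for Example 3: Pareto with theta = 20 given,
# deductible d = 5, unknown alpha.
# Each claim x has conditional likelihood f(x)/(1 - F(5)) =
# alpha * (d + theta)^alpha / (x + theta)^(alpha + 1).
theta, d = 20, 5
claims = [12, 8, 14, 17, 13]

# Setting the derivative of the log-likelihood to zero yields
# alpha = n / sum(log((x_i + theta)/(d + theta))).
alpha_hat = len(claims) / sum(math.log((x + theta) / (d + theta)) for x in claims)
print(round(alpha_hat, 4))   # 3.7388
```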
The Pareto distribution with α = 3.7388 and θ = 20 is the fitted distribution for claim data in this insurance coverage. The deductible of 5 is not factored into this Pareto distribution, so this is the fitted distribution for the claim cost before applying the deductible. Thus, the mean claim cost without any deductible is θ/(α − 1) = 20/2.7388 ≈ 7.3024. Solving the equation 1 − (20/(x + 20))^α = 0.5 gives the median. Thus, the median claim cost without any deductible is 4.0739. When imposing a deductible of 5, here are the estimated claim costs:
Limited Expected value……….. E[X ∧ 5] ≈ 3.34
Claim Cost Per Loss………. E[X] − E[X ∧ 5] ≈ 3.9635
Claim Cost Per Payment……….. (E[X] − E[X ∧ 5]) / S(5) ≈ 9.13
When imposing a deductible of 10, here are the estimated claim costs based on the fitted Pareto distribution.
Limited Expected value……….. E[X ∧ 10] ≈ 4.90
Claim Cost Per Loss………. E[X] − E[X ∧ 10] ≈ 2.4056
Claim Cost Per Payment……….. (E[X] − E[X ∧ 10]) / S(10) ≈ 10.9541
The claim cost without a deductible, 7.3024, is computed over all losses. When imposing a deductible of 5, the claim cost per loss is reduced to 3.9635. When imposing a deductible of 10, the claim cost per loss is further reduced to 2.4056. Note that the claim costs per payment are conditional means (calculated over only the losses exceeding the deductible), so they are higher than the corresponding claim costs per loss.
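The claim-cost figures follow from the standard Pareto formulas E[X] = θ/(α − 1), E[X ∧ d] = E[X](1 − (θ/(θ + d))^(α−1)) and S(d) = (θ/(θ + d))^α. The sketch below uses the rounded estimate α ≈ 3.7388, so its outputs may differ from the quoted figures in the last decimal places.

```python
import math

# Claim-cost quantities from the fitted Pareto of Example 3
# (alpha is the rounded estimate, so results match the quoted
# figures only up to rounding).
alpha, theta = 3.7388, 20

def limited_ev(d):
    # E[X ^ d] for the Pareto distribution
    return theta / (alpha - 1) * (1 - (theta / (theta + d)) ** (alpha - 1))

def survival(d):
    # S(d) = P(X > d)
    return (theta / (theta + d)) ** alpha

mean = theta / (alpha - 1)    # mean claim cost without a deductible
for d in (5, 10):
    per_loss = mean - limited_ev(d)        # claim cost per loss
    per_payment = per_loss / survival(d)   # claim cost per payment
    print(d, round(per_loss, 4), round(per_payment, 4))
```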
Example 4
We now show how to compute the MLE using the second approach for truncated data. We continue to use the claim data from Example 3. We still wish to fit a Pareto distribution with θ = 20 and unknown α to the same data. This time we subtract the deductible of 5 from the claims. The resulting fitted Pareto distribution is for the distribution of claim payments under the deductible of 5.
After subtracting the deductible of 5, the data are: 7, 3, 9, 12, 8. The maximum likelihood estimation is based on this shifted data, which is a complete data set. We can use the formula shown in the preceding post: α = n / Σ ln((x_i + θ)/θ) = 5 / [ln(27/20) + ln(23/20) + ln(29/20) + ln(32/20) + ln(28/20)] ≈ 3.0904.
The Pareto distribution with parameters α = 3.0904 and θ = 20 is the fitted distribution for claim payments. The deductible of 5 is baked into this Pareto distribution. The mean of this distribution is θ/(α − 1) = 20/2.0904 ≈ 9.5675. This mean is the mean claim payment with a deductible of 5 baked in, so we cannot recover the claim cost without a deductible from this fitted distribution. This fitted Pareto distribution is modified from the original Pareto distribution describing the losses without the deductible. If we impose a deductible of 5 on this modified distribution, the result would be equivalent to imposing a deductible of 10 on the original distribution.
Limited Expected value……….. E[Y ∧ 5] ≈ 3.57
Claim Cost Per Payment……….. (E[Y] − E[Y ∧ 5]) / S(5) = (20 + 5)/(α − 1) = 25/2.0904 ≈ 11.9594
The mean claim cost of 11.9594 is equivalent to the mean claim cost per payment when imposing a deductible of 10 on the claim data before the deductible. Note that 11.9594 is in line with the corresponding figure of 10.9541 in Example 3. The two answers are conceptually equivalent but they usually do not match exactly.
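The second approach can be sketched in the same style, again taking θ = 20 as given (as in the example); variable names are mine.

```python
import math

# Approach 2 MLE for Example 4: shift by the deductible, then fit the
# payments with the complete-data Pareto formula (theta = 20 given).
theta, d = 20, 5
claims = [12, 8, 14, 17, 13]
payments = [x - d for x in claims]   # 7, 3, 9, 12, 8

alpha_hat = len(payments) / sum(math.log((y + theta) / theta) for y in payments)
print(round(alpha_hat, 4))   # 3.0904

# Imposing a further deductible of 5 on the payment distribution:
# the Pareto mean excess at 5 is (theta + 5)/(alpha - 1).
per_payment = (theta + 5) / (alpha_hat - 1)
print(round(per_payment, 4))   # 11.9594
```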
Example 5
This example deals with the same coverage and same claim data as in Example 3. This time we fit the exponential distribution with mean θ to the claim data, applying maximum likelihood estimation in the first approach (without subtracting the deductible from the claim data). Observing a claim is conditional on it exceeding the deductible 5. The likelihood of a claim x is f(x)/[1 − F(5)] = (1/θ) e^(−(x−5)/θ).
Thus the likelihood function is:

L(θ) = θ^(−5) e^(−[(12−5) + (8−5) + (14−5) + (17−5) + (13−5)]/θ) = θ^(−5) e^(−39/θ)
The maximum likelihood estimate is derived as follows: setting the derivative of ln L(θ) = −5 ln θ − 39/θ equal to zero gives θ = 39/5 = 7.8.
On careful examination, note that if we use the shifted approach (the second approach) on the exponential distribution, we get the same maximum likelihood estimate 7.8. Because the exponential distribution is memoryless, either approach for truncated data leads to the same likelihood function L(θ) = θ^(−5) e^(−39/θ). The exponential distribution is the only case where the maximum likelihood fitted distribution is both for claim data without a deductible and for claim payment with a deductible. Any other distribution would lead to two different fitted distributions under the two approaches for truncated claim data (just like the Pareto distribution in Example 3 and Example 4).
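The memoryless property can be verified directly: the conditional likelihood of approach 1 equals the complete-data likelihood of the shifted payments for every θ, so the two maximizers coincide. A small sketch (variable names are mine):

```python
import math

# For the memoryless exponential, the two truncation approaches give the
# same likelihood: f(x)/S(5) = (1/theta) e^(-(x-5)/theta), which is the
# complete-data likelihood of the shifted payment x - 5.
claims = [12, 8, 14, 17, 13]
d = 5
payments = [x - d for x in claims]

def loglik_conditional(theta):   # approach 1: conditional on exceeding d
    return sum(-math.log(theta) - (x - d) / theta for x in claims)

def loglik_shifted(theta):       # approach 2: complete data after shifting
    return sum(-math.log(theta) - y / theta for y in payments)

# Identical for every theta, hence identical MLEs:
for t in (2.0, 7.8, 15.0):
    assert abs(loglik_conditional(t) - loglik_shifted(t)) < 1e-12

theta_hat = sum(payments) / len(payments)   # MLE = mean of shifted data
print(theta_hat)   # 7.8
```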
One comment about the two approaches. If there are two approaches in handling truncated claim data, how do we know which approach to use in an exam problem? The answer depends on the goal of the problem. If the goal is to generate a fitted distribution to answer questions about the loss distribution or the claim data before applying any deductible, the first approach is used. Possible wordings: applying MLE on the original claim data, the fitted distribution is the loss distribution, or the loss distribution is fitted to a distribution.
If the goal is to generate a fitted distribution to answer questions about claim payments reflecting a certain deductible, then use approach 2 by shifting the claim data by the deductible. Possible wordings: shifting the data by some amount, a certain distribution is fitted to the claim payment data, or the claim payment data is fitted to a certain distribution. The idea is to look for such instructions in the problem.
Censoring and Truncation Combined
We can also apply maximum likelihood estimation to claim data arising from an insurance coverage with both a deductible d and a policy limit. The addition of the policy limit poses no new challenge. The deductible is already taken care of by the two approaches discussed in the preceding section. The only new piece of information we need is how to handle the censored limit. Any data point that is above the maximum covered loss u is recorded as u. Its likelihood is one of the following depending on the approach.
Approach 1……….. [1 − F(u)] / [1 − F(d)]
Approach 2……….. 1 − F(u − d)
In Approach 1, the denominator 1 − F(d) indicates that the likelihood is a conditional probability. The numerator 1 − F(u) indicates that the original data point is not known except that it is above the limit u. In Approach 2, we use the limit u to stand in for the actual data point but subtract the deductible d from it to make the claim payment u − d.
For any individual data point in the claim data (any data point above the deductible and below the limit), the likelihood has already been described in the preceding section (in one of two approaches). We now close with two more examples demonstrating combining truncation and censoring.
Example 6
An insurance coverage has a deductible of 5 and a maximum covered loss of 25. The following claims are observed:
12, 8, 14, 17, 13, 25*, 25*
The first 5 data points are individual data, the same data set found in Example 3. The last two claims, marked with an asterisk, exceed 25 and are recorded as 25. Just like Example 3, we fit the Pareto distribution with θ = 20 and unknown α to these data in order to estimate the claim cost without a deductible.
For the 2 data points recorded as 25, the following is the likelihood:

[1 − F(25)] / [1 − F(5)] = (20/45)^α / (20/25)^α = (25/45)^α
The individual data points are the same as in Example 3. We only need to multiply the above likelihood (two times) into the likelihood function in Example 3.
The usual steps produce the maximum likelihood estimate for α: α = 5 / [Σ ln((x_i + 20)/25) + 2 ln(45/25)] ≈ 1.9897, where the sum runs over the five uncensored claims.
The fitted Pareto distribution with α = 1.9897 and θ = 20 is a model for the claim cost without a deductible.
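The combined censored-and-truncated calculation can be sketched as below, again taking θ = 20 as given (as in the example); variable names are mine.

```python
import math

# Approach 1 MLE for Example 6: deductible 5, maximum covered loss 25,
# Pareto with theta = 20 given.
theta, d, u = 20, 5, 25
uncensored = [12, 8, 14, 17, 13]
n_censored = 2   # two claims recorded at 25

# Uncensored claims contribute f(x)/(1 - F(5)); each censored claim
# contributes (1 - F(25))/(1 - F(5)) = ((d + theta)/(u + theta))^alpha.
denom = (sum(math.log((x + theta) / (d + theta)) for x in uncensored)
         + n_censored * math.log((u + theta) / (d + theta)))
alpha_hat = len(uncensored) / denom
print(round(alpha_hat, 4))   # 1.9897
```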
Example 7
Use the same data set as in Example 6 but use the shifting approach (the second approach described in the preceding section). The fitted Pareto distribution will be a model for claim payments for the insurance coverage with a deductible of 5.
For the two data points of 25, the censored payment is 25 − 5 = 20 and the likelihood is 1 − F(20) = (20/40)^α = 0.5^α. The likelihood function is obtained by multiplying this likelihood (two times) with the likelihood of the individual data points.
The usual steps produce the maximum likelihood estimate for α: α = 5 / [Σ ln((y_i + 20)/20) + 2 ln(40/20)] ≈ 1.6643, where the sum runs over the five uncensored payments.
The fitted Pareto distribution with α = 1.6643 and θ = 20 is a model for the claim payment after a deductible of 5 is met.
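The shifted version of the combined calculation follows the same pattern, with θ = 20 taken as given (as in the example); variable names are mine.

```python
import math

# Approach 2 MLE for Example 7: shift everything by the deductible 5;
# the censored observations become payments censored at 25 - 5 = 20
# (Pareto with theta = 20 given).
theta = 20
payments = [7, 3, 9, 12, 8]   # uncensored payments
censored_at = 20              # two payments censored at u - d
n_censored = 2

# Uncensored payments contribute the Pareto density; censored ones
# contribute S(20) = (theta/(theta + 20))^alpha.
denom = (sum(math.log((y + theta) / theta) for y in payments)
         + n_censored * math.log((censored_at + theta) / theta))
alpha_hat = len(payments) / denom
print(round(alpha_hat, 4))   # 1.6643
```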
2018 – Dan Ma