I found the following R code to fit a Tweedie compound Poisson-Gamma distribution. I have to fit it to my 399 claim amounts. I have seen the functions ptweedie.series(q, power, mu, phi)
and dtweedie.series(y, power, mu, phi)
. However, I don't fully understand these functions, and after importing my data into R, how do I proceed? Thanks in advance.
R code for the Tweedie Compound Poisson-Gamma distribution
Asked by user3309969
First a note: importing your dataset from the comments above yielded 398 claims, not 399. One of these was four orders of magnitude larger than the median claim, so I suspect a typo. In the analysis that follows I excluded that observation, leaving 397.
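That exclusion step can be sketched as follows (X is the imported vector of claim amounts; the cutoff is illustrative, chosen only to remove the one extreme observation):

```r
# Sketch: drop the single claim several orders of magnitude above the median
# (X is the imported vector of claim amounts; the cutoff is illustrative)
X <- X[X < 1000 * median(X)]
length(X)  # 397 in the analysis described here
```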
A quick look at the Wikipedia entry for Tweedie distributions reveals that this is actually a family of exponential dispersion models distinguished by the power parameter (called xi in the R documentation). power=1 yields the Poisson distribution, power=2 the Gamma distribution, power=3 the inverse Gaussian distribution, and so on. The Tweedie distributions are also defined for non-integer power; values between 1 and 2 correspond to the compound Poisson-Gamma distributions. The parameter mu is the mean, and phi is a dispersion parameter: the variance is phi * mu^power. So the basic question, as I understand it, is: which combination of power, mu, and phi yields a distribution that best fits your claims data?
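To get a feel for how the power parameter moves through the family, the densities can be evaluated directly (a sketch assuming the tweedie package, which provides dtweedie(); the mu and phi values here are illustrative, not fitted):

```r
# Sketch: Tweedie densities for several power values at the same mu and phi
# (assumes the 'tweedie' package; mu = 2, phi = 1 are illustrative choices)
library(tweedie)

y <- seq(0.1, 10, length.out = 200)
d_cpg   <- dtweedie(y, power = 1.5, mu = 2, phi = 1)  # compound Poisson-Gamma
d_gamma <- dtweedie(y, power = 2,   mu = 2, phi = 1)  # Gamma
d_invg  <- dtweedie(y, power = 3,   mu = 2, phi = 1)  # inverse Gaussian

matplot(y, cbind(d_cpg, d_gamma, d_invg), type = "l", lty = 1,
        ylab = "density", main = "Tweedie densities for several powers")
legend("topright", legend = c("power = 1.5", "power = 2", "power = 3"),
       lty = 1, col = 1:3)
```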
One way to assess whether a distribution fits a sample is the Q-Q plot. This plots quantiles of your sample against quantiles of the test distribution; if the sample follows the test distribution, the points should fall on a straight line. In R (with X as your vector of samples), one can compare the sample quantiles against Tweedie quantiles for candidate powers. Both the Gamma and the inverse Gaussian distributions explain your data up to claims of ~40,000. The Gamma distribution underestimates the frequency of larger claims, while the inverse Gaussian distribution overestimates it. So let's try an intermediate value, power=2.5.
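The Q-Q comparison can be sketched as follows (assuming the tweedie package; mu and phi are rough moment-based guesses here, not fitted values, since fitting comes in the next step):

```r
# Sketch: Q-Q plots of the claims X against Tweedie distributions for
# several candidate powers (assumes the 'tweedie' package; mu and phi
# are crude guesses from the moments, using Var(Y) = phi * mu^power)
library(tweedie)

mu <- mean(X)
p  <- ppoints(length(X))  # plotting positions

for (power in c(2, 2.5, 3)) {
  phi     <- var(X) / mu^power
  q_theor <- qtweedie(p, power = power, mu = mu, phi = phi)
  qqplot(q_theor, X,
         main = paste("Tweedie Q-Q plot, power =", power),
         xlab = "theoretical quantiles", ylab = "sample quantiles")
  abline(0, 1)  # points near this line indicate a good fit
}
```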
So your claims data seem to follow a Tweedie distribution with power=2.5. The next step is to estimate mu and phi, given power=2.5. Maximizing the likelihood is a non-linear optimization problem in two dimensions, so we use the package nloptr. It turns out that convergence depends on having starting parameters relatively close to the optimal values, so there is a fair amount of trial and error to get nloptr(...) to converge.

Finally, we confirm that the solution does indeed fit the data well, e.g. with a Q-Q plot against the fitted distribution.
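A minimal sketch of that optimization, assuming the tweedie and nloptr packages (the starting values and bounds are illustrative and, as noted above, may need adjustment for the solver to converge):

```r
# Sketch: maximum-likelihood estimation of mu and phi at fixed power = 2.5
# (assumes the 'tweedie' and 'nloptr' packages; X is the claims vector,
# and the starting values / bounds are illustrative, not from the data)
library(tweedie)
library(nloptr)

power <- 2.5

# negative log-likelihood as a function of theta = c(mu, phi)
negloglik <- function(theta) {
  -sum(log(dtweedie(X, power = power, mu = theta[1], phi = theta[2])))
}

# derivative-free Nelder-Mead, started near moment-based guesses
mu0  <- mean(X)
phi0 <- var(X) / mu0^power
fit <- nloptr(x0     = c(mu0, phi0),
              eval_f = negloglik,
              lb     = c(1e-6, 1e-6),
              opts   = list(algorithm = "NLOPT_LN_NELDERMEAD",
                            xtol_rel = 1e-8, maxeval = 1000))
fit$solution  # estimated c(mu, phi)

# confirm the fit with a Q-Q plot against the fitted distribution
q_fit <- qtweedie(ppoints(length(X)), power = power,
                  mu = fit$solution[1], phi = fit$solution[2])
qqplot(q_fit, X, xlab = "fitted quantiles", ylab = "sample quantiles")
abline(0, 1)
```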