Making the uninsurable insurable
What the math of extreme risk can tell us about the insurance availability crisis, and what to do about it.
Are all risks insurable for the right price? While in some cases the premium required to cover a risk may be more than the insured can afford, that does not, strictly speaking, make the risk uninsurable. For example, an auto insurance policy for a teen driver with multiple prior accidents might be very expensive, but there is a market for this coverage because insurance companies believe that they can quantify the average (or expected) loss¹ and someone will buy it at that price.
But what about a truly uninsurable risk, for which no company could ever profitably offer insurance? If an insurance premium is less than the expected loss loaded for expenses and profit, then ruin of the insurance company is certain over a long enough time horizon. For a risk to be insurable, then, its expected loss must be statistically quantifiable. This condition is not always satisfied. With climate change making catastrophe risk both more common and more uncertain, the insurance industry is in danger of seeing more expected losses that are so large they cannot be quantified. This may result in a crisis of uninsurability.² To avert this looming crisis, we must first understand how and why expected losses become unquantifiable. This involves a statistical field called Extreme Value Theory, which can teach us how to think about extreme events.
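To make the ruin argument concrete, here is a minimal simulation sketch in Python (assuming numpy; the capital, loss, and volatility figures are purely illustrative). An insurer that charges less than its expected loss drifts toward insolvency no matter how large its starting capital, while a premium loaded above the expected loss makes ruin rare.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def year_of_ruin(premium, capital=1_000.0, expected_loss=100.0, horizon=10_000):
    """Simulate annual surplus; return the year ruin occurs, or None if it never does."""
    sigma = 0.5  # illustrative volatility of annual losses
    surplus = capital
    for year in range(1, horizon + 1):
        # Lognormal annual losses, calibrated so the mean equals expected_loss.
        loss = rng.lognormal(np.log(expected_loss) - 0.5 * sigma**2, sigma)
        surplus += premium - loss
        if surplus < 0:
            return year
    return None

print(year_of_ruin(premium=95.0))   # below the expected loss: ruin is only a matter of time
print(year_of_ruin(premium=110.0))  # loaded premium: typically survives the whole horizon
```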
Extreme Value Theory and insurance: A primer
Insurance is said to follow the “law of large numbers,” meaning that, while individual losses may be highly uncertain, the expected loss can be calculated. Auto insurance follows this pattern: auto collision claims are fairly predictable in aggregate over time, and the total losses that an auto insurance company with a large and stable book of business incurs over the course of a year do not vary much.
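A quick simulation shows the law of large numbers at work. The sketch below (Python with numpy; the claim frequency and severity figures are hypothetical) measures the year-to-year swing in aggregate losses as a book grows: the coefficient of variation shrinks roughly with the square root of the number of policies.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def annual_total(n_policies, frequency=0.05, mean_severity=8_000.0):
    """Aggregate collision losses for one year: 5% claim rate, $8k average claim."""
    has_claim = rng.random(n_policies) < frequency
    severities = rng.gamma(shape=2.0, scale=mean_severity / 2.0, size=n_policies)
    return (has_claim * severities).sum()

for n in (100, 10_000, 1_000_000):
    totals = np.array([annual_total(n) for _ in range(100)])
    # Relative year-to-year variability of the aggregate loss.
    print(f"{n:>9} policies: CV = {totals.std() / totals.mean():.3f}")
```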
Extreme or “tail” risk, on the other hand, deals with the rare and unexpected. Here we immediately have a data problem: these events are, by definition, hard to understand or study, because they don’t happen very often. Good examples of these kinds of events include the Northridge earthquake of 1994 and Hurricane Katrina.
But even these extreme events follow statistical laws. The statistical theory that describes them, Extreme Value Theory, was developed in the early to mid-20th century. Extreme Value Theory divides losses into light-tailed losses, whose probability falls off quickly enough that there is an effective upper limit, and heavy-tailed losses, for which there is no such limit. Non-catastrophic perils, such as auto physical damage, are light-tailed. Catastrophic perils, such as hurricane, flood, and wildfire, are heavy-tailed.
Not all heavy-tailed perils are the same, though. The mathematician Benoit Mandelbrot, famous for his pioneering work on fractals, also described several kinds of randomness. First comes “mild” randomness. This is the world of the law of large numbers, defined by averages and their variances (“variance” describes the degree of variability around the mean, or average). In other words, although there may be large variations from year to year, those variations are still quantifiable. Many catastrophic losses fall within the boundaries of mild randomness: even though there is no upper limit to the possible losses, their average and variance can be estimated.
Next comes “wild” randomness. The average still exists, but the variance becomes infinite. An insurer that sets a premium based only on average losses and expenses will eventually become insolvent because of some extreme event, however unlikely. To reduce this risk, insurers may apply a risk load based on the variance of the losses, or they may purchase reinsurance, whose price likewise depends on that variance. In most pricing models, the greater the variance, the greater the risk load required to weather rare extreme events. Here we find a conundrum: under wild randomness, where the variance of losses is infinite, a premium loaded with a variance-based risk load would also be infinite; the risk, in other words, is uninsurable at any price. Increasing an insurance company’s capital increases its expected time to ruin, but no company with finite capital, however great, could forever withstand the wild swings in losses that infinite variance produces.
This might seem to be the most extreme situation, but it isn’t. Beyond even wild randomness lies “extreme” randomness, in which not only the variance of the losses but the average loss itself is infinite.³ This is the ultimate uninsurable risk: not merely one where a rare event could ruin the insurance company, but one where the expected loss is infinite. Even a company with infinite capital would, on average, lose money insuring such a risk. Such a risk truly is uninsurable.
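The three regimes can be seen in simulation. In the sketch below (Python with numpy; the Pareto form and tail indices are illustrative, not a calibrated catastrophe model), losses follow a Pareto distribution whose tail index alpha controls the regime: alpha > 2 is mild, 1 < alpha <= 2 is wild (infinite variance), and alpha <= 1 is extreme (infinite mean). Comparing statistics on half the sample against the full sample shows which ones settle down and which keep drifting.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def pareto(alpha, n):
    """Pareto losses with P(X > x) = x**(-alpha) for x >= 1, via inverse transform."""
    return (1.0 - rng.random(n)) ** (-1.0 / alpha)

n = 1_000_000
for alpha, regime in [(3.0, "mild"), (1.5, "wild"), (0.8, "extreme")]:
    x = pareto(alpha, n)
    half, full = x[: n // 2], x
    # Mild: both statistics stabilize. Wild: the mean stabilizes but the
    # variance keeps jumping. Extreme: even the sample mean keeps growing.
    print(f"{regime:>7}: mean {half.mean():10.2f} -> {full.mean():10.2f} | "
          f"var {half.var():14.1f} -> {full.var():14.1f}")
```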
Uninsurability and insurance markets
These categorizations of randomness are not just theoretical. When large events lead insurers to reevaluate how heavy-tailed a particular line of business is, the result can be an insurance availability crisis. In the aftermath of the 1994 Northridge earthquake, insurers concluded that they had underestimated how heavy-tailed California earthquake losses are. “By January of 1995, companies representing 93 percent of the California homeowners insurance market had either restricted or stopped writing homeowners policies altogether, sending the California housing market into a tailspin.”⁴ This crisis ended only when California formed the California Earthquake Authority (CEA). After the active hurricane seasons of 2004 and 2005, many insurance companies withdrew from Florida, resulting in Citizens, the Florida homeowners insurer of last resort, becoming the largest insurer in the state. California wildfires, such as the Camp Fire, which destroyed most of Paradise, and the Tubbs Fire, which caused devastating losses in Santa Rosa, have contributed to a reduced availability of homeowners insurance in the Wildland Urban Interface, the zone where the edge of human activity meets the wilderness.
What drives the difference between light-tailed auto collision losses, heavier-tailed hurricane losses, and ultimately the extreme randomness of the heaviest-tailed catastrophe perils such as wildfire or earthquake? One cause is correlation. Unlike auto collisions, catastrophes affect many risks simultaneously. The probability that I total my car is almost entirely unrelated to the probability that you total yours. But in a wildfire-prone area where fires are becoming more common due to drought, your probability of a total loss is closely tied to your neighbor’s. A greater degree of correlation may result in a heavier tail for the aggregate losses. For example, the wildfire losses of 2017 and 2018 were both driven by urban conflagrations, in which a large number of houses in a small geographic area were destroyed in a single event.
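The effect of correlation on the tail is easy to demonstrate. The sketch below (Python with numpy; all frequencies, home counts, and values are hypothetical) compares two books of business with the same expected annual loss: one in which homes burn independently, and one in which a rare conflagration destroys many homes at once. The means match, but the tail quantiles diverge dramatically.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

n_homes, n_years, value = 10_000, 100_000, 400_000.0

# Independent world: each home has a 0.1% annual chance of a total loss.
indep = rng.binomial(n_homes, 0.001, size=n_years) * value

# Correlated world, same expected loss: in 1% of years a conflagration
# destroys 5% of the book; otherwise losses are rare and independent.
fire_year = rng.random(n_years) < 0.01
homes_lost = np.where(fire_year,
                      rng.binomial(n_homes, 0.05, size=n_years),
                      rng.binomial(n_homes, 5 / 9_900, size=n_years))
corr = homes_lost * value

for name, totals in [("independent", indep), ("correlated", corr)]:
    print(f"{name:>11}: mean ${totals.mean() / 1e6:5.1f}M, "
          f"99.9th percentile ${np.quantile(totals, 0.999) / 1e6:6.1f}M")
```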
For insurers to have the appetite to sell policies, they must be able to estimate the average and the variance of losses. In other words, the losses must not exhibit wild or extreme randomness: the tail risk must be made thinner. The good news for insurers and people seeking insurance is that there are a couple of ways to achieve this. The potentially bad news is that they require advance planning and perhaps government action.
As California wildfires demonstrate, one key to thinning tail risk is to reduce correlation. Urban conflagrations occur when fire jumps from one house to another, which happens easily when houses are closely spaced, unfortunately a common building practice in California. To make houses insurable against the wildfire peril, it’s not enough to reduce the risk to individual houses; it’s essential to reduce the risk of fires spreading between houses too, which means increased spacing and firebreaks. Local and state governments could also arm themselves against insurer exits by acting to curb correlations such as those seen in the fire that destroyed Paradise, for example through zoning codes that limit how closely together houses may be built in high-risk areas, or through requirements that homeowners maintain sufficient defensible space around their houses. Destructive events can offer a unique opportunity to rebuild in a way that reduces the probability of heavy-tailed losses and increases the likelihood that private insurers will continue to write in a particular market.
Another approach is to “cut off the tail.” In 2001, the 9/11 terrorist attacks resulted in an insured loss of about $39 billion. In the absence of government action, this unprecedented loss would have resulted in an insurance availability crisis. To avoid this, Congress passed the Terrorism Risk Insurance Act (TRIA), which provided a government reinsurance program for losses from a terrorist attack under certain conditions. Essentially, this truncated the loss distribution, cutting off the tail and turning a heavy-tailed distribution into a lighter-tailed one. It made the uninsurable insurable.
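Truncation works even on the heaviest tails. Extending the earlier Pareto sketch (again Python with numpy, with an illustrative tail index and caps), capping each loss, as a government backstop effectively does for the insurer’s share, turns a distribution with an infinite mean into one whose mean and variance are finite and readily estimated.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# "Extreme" randomness: Pareto losses with tail index 0.8, so the mean is infinite.
losses = (1.0 - rng.random(10_000_000)) ** (-1.0 / 0.8)

for cap in (1e3, 1e6):
    # A backstop paying everything above the cap truncates the insurer's share.
    retained = np.minimum(losses, cap)
    print(f"cap {cap:>11,.0f}: mean {retained.mean():10.2f}, std {retained.std():12.2f}")

# Without the cap, the sample mean never settles; it grows with the sample size.
print(f"uncapped sample mean: {losses.mean():.2f}")
```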
How to do this for other heavy-tailed perils, especially those subject to wild or extreme randomness, deserves serious consideration. Currently, some markets subject to catastrophe perils have regulations that not only fail to help manage a heavy tail but actively make covering catastrophe losses more difficult. For example, California regulations prohibit insurance companies from including the cost of reinsurance in homeowners insurance premiums, leaving insurance companies with an unenviable choice: risk their solvency by not purchasing reinsurance for extreme events, or sacrifice their profitability by incurring reinsurance expenses for which they are not permitted to charge. Changing this regulation to allow insurance companies to include this cost in rate-making could help them avoid having to choose between profitability and exposure to potentially ruinous heavy-tailed catastrophes.⁵ Another possibility is to create a government-sponsored reinsurance mechanism similar to the Florida Hurricane Catastrophe Fund. By truncating insurers’ losses to a level where the average and variance can be quantified, such a mechanism might increase the willingness to write homeowners insurance in the riskier portions of the state, helping to alleviate the insurance availability crisis and reduce the cost of homeowners insurance.
Conclusion
As an industry, we need to question the assumption that drives the math behind catastrophe models: that expected losses can always be quantified. The scientific literature points to potential problems with this assumption,⁶ even before the effects of climate change increase both expected losses and their variance. But with an understanding of heavy-tailed risk, we may be able to mitigate some of the causes of insurance availability crises.
¹ The expected loss is the estimate of the average loss over a long period of time. For example, the expected proportion of tosses on which a fair coin comes up “heads” is 50%, even though long sequences of “tails” can occur.
² This is not the only cause of a lack of insurance availability. Another example would be regulations that do not permit companies to charge actuarially sound rates.
³ The idea of infinite expected losses may seem hard to accept, because no actual loss could ever be infinite. Infinite expected losses occur when the size of a loss increases faster than its probability decreases; for example, if the probability of a loss exceeding some amount x falls off no faster than 1/x, the expected loss diverges. This does assume that there is no maximum loss, which is not literally the case: for homeowners insurance, the maximum possible loss would be the total insured value of all policies. But that is so large relative to any loss that may actually occur that, for practical purposes, we use the approximation that there is no maximum loss. The difference between these assumptions is the difference between an expected loss being infinite and an expected loss being so extraordinarily large it can’t be quantified; there is little difference between these cases in practice. A second assumption is that each year’s losses are independent, which requires that the exposure distribution not change from year to year. This would imply that cities destroyed by large urban conflagrations are fully rebuilt, which may not be the case in practice.
⁴ CEA. History of the California Earthquake Authority (CEA). Retrieved October 7, 2022, from https://www.earthquakeauthority.com/About-CEA/CEA-History.
⁵ Limited liability is another way of truncating the tail, but it is in the public interest that insurance companies be able to pay their claims. The regulation of insurance solvency requires that the probability of ruin be acceptably small.
⁶ Holmes, T.P., Huggett, R.J., & Westerling, A.L. (2008). Statistical Analysis of Large Wildfires. In: Holmes, T.P., Prestemon, J.P., & Abt, K.L. (eds.), The Economics of Forest Disturbances. Forestry Sciences, vol. 79. Springer, Dordrecht. Retrieved October 7, 2022, from https://doi.org/10.1007/978-1-4020-4370-3_4.