Chapter 3 Only.
By: Nassim Nicholas Taleb
Chapter 3 – Fat Tails and Their Effects, An Introduction
- Spotting fat tails through negative empiricism:
- As we gather information, we can rule some distributions out, but we can never positively confirm one.
- If we see a 20 sigma event, we can rule out that the distribution is thin-tailed.
- If we see no large deviation, we cannot conclude that the distribution is thin-tailed, i.e. we cannot rule out thick tails (unless we understand the generating process very well).
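A minimal sketch of this asymmetry (my own illustration, assuming a Gaussian as the thin-tailed benchmark): a single 20-sigma observation is enough to reject the Gaussian, while the absence of large deviations proves very little.

```python
# Sketch: why one 20-sigma observation is enough to reject a Gaussian model.
from scipy.stats import norm

# Two-sided probability of a deviation of at least 20 standard deviations
# under a standard normal distribution.
p_20_sigma = 2 * norm.sf(20)
print(f"P(|X| >= 20 sigma) under a Gaussian: {p_20_sigma:.2e}")
# ~5.5e-89: even with daily data for billions of years we would not expect
# to see one, so observing it tells us the Gaussian is the wrong model.
# The converse does not hold: seeing no 20-sigma event in a sample tells us
# very little, because fat-tailed processes also produce them only rarely.
```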
- Requires a lot of (historic) data.
- Fat-tailed distributions are driven by low-probability events, and those events need time to occur.
- The law of large numbers still applies, but it works slowly (see the simulation sketch after this group of points).
- The “past’s past” doesn’t resemble the “past’s future”.
- Accordingly, today’s past will not resemble today’s future.
- Leads to a lot of mistakes in interpreting historic data (naïve empiricism).
- The impact of rare fat tail events typically doesn’t show up in historic data.
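A small simulation of the slow law of large numbers (assumed parameters: a Pareto with tail index α = 1.2 against a Gaussian with the same mean):

```python
# Sketch: the law of large numbers still works under fat tails, but slowly.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
alpha = 1.2                                  # tail index: mean exists, variance does not
true_mean = alpha / (alpha - 1)              # Pareto with minimum 1 -> mean = 6

pareto = 1 + rng.pareto(alpha, n)                        # fat-tailed sample
gauss = rng.normal(loc=true_mean, scale=1.0, size=n)     # thin-tailed, same mean

for label, x in (("Pareto  ", pareto), ("Gaussian", gauss)):
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    for k in (1_000, 100_000, 1_000_000):
        err = abs(running_mean[k - 1] - true_mean) / true_mean
        print(f"{label} n={k:>9,}: relative error of the sample mean = {err:.4f}")
# The Gaussian running mean settles quickly; the Pareto one converges much more
# slowly and erratically, because it waits on rare, very large observations
# that may not have appeared yet in the historical record.
```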
- Can’t compare thick tailed phenomena with thin tailed ones.
- For instance, it doesn't make sense to compare historical deaths per day from Ebola (thick-tailed) with deaths per day from smoking (thin-tailed).
- Fat-tailed data are difficult to interpret: the mean, standard deviation and variance are not useful.
- There is a wedge between the population and sample metrics.
- Any sample is an insufficient picture of the broader phenomenon.
- Standard statistical measures work within, but not outside the sample.
- The mean is unstable: rare events determine it, and rare events take a lot of data to show up (in the fattest-tailed cases, some 98% of observations lie below the mean).
- Standard deviation and variance fail when applied outside the sample.
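A sketch of that wedge, with an assumed Pareto of tail index α = 1.05 (chosen to make the tail very fat):

```python
# Sketch: the wedge between sample and population metrics under fat tails.
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.05                              # very fat tail: the mean barely exists
true_mean = alpha / (alpha - 1)           # Pareto with minimum 1 -> mean = 21

x = 1 + rng.pareto(alpha, 100_000)        # one "historical" sample
print(f"population mean             : {true_mean:.1f}")
print(f"sample mean                 : {x.mean():.1f}")
print(f"share of obs below the mean : {np.mean(x < true_mean):.1%}")
# Analytically P(X < mean) = 1 - ((alpha - 1) / alpha) ** alpha, about 96% for
# alpha = 1.05; figures like the 98% quoted above correspond to tails even
# closer to alpha = 1. The sample mean typically sits well below the population
# mean because the rare observations that drive it have not shown up yet.
```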
- Power law distributions.
- The probability of exceeding 2x divided by the probability of exceeding x equals the probability of exceeding 4x divided by the probability of exceeding 2x, and so on: the ratio does not depend on x.
- “Scalability”.
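In symbols, for a Pareto-type survival function (a standard formulation, added here only for reference):

```latex
% Scale invariance ("scalability") of a power-law tail
\[
  P(X > x) \;=\; \Bigl(\tfrac{x}{x_{\min}}\Bigr)^{-\alpha},
  \qquad x \ge x_{\min},
\]
\[
  \frac{P(X > 2x)}{P(X > x)} \;=\; 2^{-\alpha}
  \quad \text{for every } x,
  \qquad\text{hence}\qquad
  \frac{P(X > 2x)}{P(X > x)} \;=\; \frac{P(X > 4x)}{P(X > 2x)} \;=\; \cdots
\]
```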
- Some implications of fat tails.
- Maxima: the gap between the past maximum and the expected future maximum is much larger than under thin tails (see the simulation sketch after this list).
- Ruin: more likely to come from a single extreme event than from a series of bad episodes.
- Catastrophe principle: (uncapped) insurance doesn’t work when there is a risk of catastrophe.
- Black Swans: not “more frequent” (as it is commonly misinterpreted), they are more consequential.
- Examples: almost all economic variables are thick tailed. This is the main source of failure in finance and economics.
- Dimension reduction (big data factor analysis) does not work.
- Bayesian analysis becomes difficult: it needs a reliable prior, which is not readily obtainable under fat tails.
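A small simulation of the maxima point (assumed distributions: a standard Gaussian versus a Pareto with α = 1.5):

```python
# Sketch: how the expected maximum grows with the sample size,
# thin-tailed (Gaussian) versus fat-tailed (Pareto with alpha = 1.5).
import numpy as np

rng = np.random.default_rng(2)
alpha, trials = 1.5, 200

for n in (1_000, 10_000, 100_000):
    gauss_max = np.mean([rng.normal(size=n).max() for _ in range(trials)])
    pareto_max = np.mean([(1 + rng.pareto(alpha, n)).max() for _ in range(trials)])
    print(f"n = {n:>7,}: average max, Gaussian ~ {gauss_max:5.2f}   Pareto ~ {pareto_max:10,.0f}")
# Gaussian maxima grow roughly like sqrt(2 ln n): about 3.2 -> 3.9 -> 4.4.
# Pareto maxima grow like n**(1/alpha): each tenfold increase in data multiplies
# the expected record by roughly 10**(1/1.5) ~ 4.6, so the past maximum badly
# understates the maximum still to come.
```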
- Managing risk:
- Thin tails: reduce the probability (frequency) of events. We count events and aim at reducing their counts.
- Fat tails: reduce the effect should an event take place. We do not count events; we measure and lower their impact (harm).
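A toy comparison of the two policies (the loss sizes below are a simulated assumption, purely to show the distinction):

```python
# Sketch: frequency-based versus impact-based risk management under fat tails.
import numpy as np

rng = np.random.default_rng(3)
losses = 1 + rng.pareto(1.1, 10_000)        # toy fat-tailed incident sizes

total = losses.sum()
ranked = np.sort(losses)
print(f"share of total harm from the largest 1% of incidents: {ranked[-100:].sum() / total:.0%}")

# Policy A: halve the *number* of incidents by eliminating the smallest half.
after_a = ranked[5_000:].sum()
# Policy B: keep every incident, but cap the harm any single one can do.
after_b = np.minimum(losses, 50).sum()
print(f"harm left after halving the incident count : {after_a / total:.0%}")
print(f"harm left after capping single-event harm  : {after_b / total:.0%}")
# Counting events (Policy A) barely dents the total; limiting the impact of
# the extreme events (Policy B) is what actually reduces harm.
```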
- Path dependence.
- If I iron my shirts and then wash them, I get vastly different results compared to when I wash my shirts and then iron them.
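The laundry example is about order of operations; the deeper version, picked up again in the conclusions as "time probability" versus "ensemble", is that the average over many people is not the average over time for one person. A minimal sketch, with an assumed multiplicative bet (+50% or −40% per round, equal odds):

```python
# Sketch: ensemble average versus the typical path for a multiplicative bet.
import numpy as np

rng = np.random.default_rng(4)
up, down, rounds, people = 1.5, 0.6, 20, 200_000

# Each round, wealth is multiplied by 1.5 or 0.6 with equal probability.
# Arithmetic mean per round = 1.05   -> the ensemble (cross-sectional) average grows.
# Geometric mean = sqrt(1.5*0.6) ~ 0.949 -> the typical single path shrinks.
wealth = rng.choice([up, down], size=(people, rounds)).prod(axis=1)

print(f"average wealth across people : {wealth.mean():.2f}")      # ~ 1.05**20 ~ 2.65
print(f"median person's wealth       : {np.median(wealth):.2f}")  # ~ 0.949**20 ~ 0.35
print(f"share of people losing money : {np.mean(wealth < 1):.0%}")  # ~ 75%
# Averaging over people (the ensemble) and averaging over time for one person
# give opposite conclusions; with absorbing barriers (ruin), only the time
# view matters for the individual.
```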
- Risk tolerance.
- Some analyses label people paranoid for overestimating small risks, but miss that had we had even the slightest tolerance for collective tail risks, we would not have made it through the past several million years.
- Single forecasts versus distributions of outcomes.
- The psychological literature focuses on one-single episode exposures and narrowly defined cost-benefit analyses.
- It is a mistake to reduce probability to a single number or a binary outcome.
- Make decisions based on the full distribution of outcomes, not on a point forecast.
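A toy illustration of why a hit rate or point forecast is not enough (the payoff numbers are assumptions for the sketch):

```python
# Sketch: a point forecast or hit rate hides what matters, the payoff over
# the whole distribution of outcomes. Numbers are toy assumptions.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

# Strategy A: right 99% of the time, small gains; rare but ruinous losses.
a = np.where(rng.random(n) < 0.99, 1.0, -300.0)
# Strategy B: wrong 80% of the time, small losses; rare but very large gains.
b = np.where(rng.random(n) < 0.20, 20.0, -1.0)

for name, x in (("A", a), ("B", b)):
    print(f"Strategy {name}: P(gain) = {np.mean(x > 0):.0%},  "
          f"average payoff per trial = {x.mean():+.2f}")
# A "wins" 99% of the time yet loses on average (0.99*1 - 0.01*300 = -2.01);
# B "loses" 80% of the time yet gains on average (0.20*20 - 0.80*1 = +3.20).
```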
- Focus on the effect of the variable (its outcome, the payoff), not on the variable itself.
- The properties of the underlying variable X may be thick-tailed and unpredictable.
- The properties of the effect of X may be easier to analyze.
- Exposure is more important than the naive notion of “knowledge”, that is, understanding X.
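A sketch of why the effect can be tamer than the variable (the cap at 10 is an assumed exposure, e.g. an insured or limited position):

```python
# Sketch: the variable X can be fat-tailed while the exposure f(X) is not.
import numpy as np

rng = np.random.default_rng(6)
alpha = 1.1          # X: Pareto tail, mean barely finite, variance infinite
cap = 10.0           # assumed exposure: payoff capped at 10

means_x, means_fx = [], []
for _ in range(50):                        # 50 independent "histories"
    x = 1 + rng.pareto(alpha, 10_000)
    means_x.append(x.mean())
    means_fx.append(np.minimum(x, cap).mean())

print(f"sample means of X          range from {min(means_x):.1f} to {max(means_x):.1f}")
print(f"sample means of min(X, 10) range from {min(means_fx):.2f} to {max(means_fx):.2f}")
# The raw variable looks completely different from one history to the next;
# the capped exposure is stable. Predicting X is hard, but shaping the
# exposure to X is tractable -- which is why exposure beats "knowledge".
```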
- Rationality of common people.
- People are more calibrated to consequences and properties of distributions than psychologists claim.
- Conclusions:
- Fat tails are very different from thin tails.
- Path dependent (time probability) is very different from non-path dependent (ensemble).
- If something is fat-tailed, what matters is how it reacts to random events (the harm).
- Fragile, robust, antifragile.
- More effective to focus on being insulated from the harm of random events.
- Detection heuristics are better than fabricating statistical properties.
- How to detect and measure convexity and concavity (see the sketch at the end of these notes).
- This is much, much simpler than probability.
- We want simple things that work.
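A minimal sketch of one such detection heuristic, a second finite difference of the payoff; the payoff functions below are assumptions for illustration only:

```python
# Sketch of a simple convexity/concavity detection heuristic: perturb the
# input by +/- delta and compare the average response with the base response.
def convexity_gap(f, x, delta):
    """Positive  -> locally convex  (gains from variability);
    negative  -> locally concave (harmed by variability, fragile);
    near zero -> locally linear  (robust to small perturbations)."""
    return 0.5 * (f(x + delta) + f(x - delta)) - f(x)

# Toy payoff functions, assumptions for illustration only:
payoffs = {
    "linear":  lambda x: 2 * x,
    "convex":  lambda x: x ** 2,      # e.g. an option-like payoff
    "concave": lambda x: -(x ** 2),   # e.g. a leveraged, blow-up-prone payoff
}

for name, f in payoffs.items():
    print(f"{name:8s}: gap = {convexity_gap(f, x=10.0, delta=1.0):+.2f}")
# No probability distribution was estimated: we stress the exposure directly
# instead of trying to infer tail probabilities from scarce data.
```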