In the last few days I finished reading "The Black Swan" by Nassim Nicholas Taleb. Around last January I saw Günther Palfinger mention it in my G+ stream, looked it up and bought it.
At first, the book seemed to present some interesting ideas on error statistics, and the first 20 or 30 pages give good examples of conscious knowledge we possess but fail to apply in everyday actions. Not having a trading history like the author, I found reading on until around page 100 to be a bit of a drag. Luckily I kept going, because after that Taleb finally got interesting for me.
One of the lectures I attended at university touched on black box analysis (in the context of modelling and implementing computer programs). At first, of course, the usual, expected or known input/output behavior is noted, e.g. the calculations it may perform, pattern recognition, or any other domain-specific function. But to find hints about how it’s implemented, short of inspecting the guts, which a black box won’t allow, one needs to look at error behavior, i.e. examine the outputs in response to invalid/undefined/distorted/erroneous/unusual inputs, along with the associated response times. For a simple example, read a sheet of text and start rotating it while you continue reading. For untrained people, reading speed slows down as the rotation angle increases, indicating that the brain engages in counter-rotation transformations whose complexity grows linearly with the angle.
At that point I started to develop an interest in error analysis and research around that field, e.g. the research on "error-friendliness" in technological or biological systems, or studies on human behavior that imply corollaries like:
- To enable speedy and efficient decision making, humans generally rely on heuristics.
- Because they behave heuristically, people are bound to make errors by design. So trying to eliminate or punish all human error is futile; aiming for robustness and learning from errors is much better.
- Perfectionism is anti-evolutionary; it is a dead end not worth striving for, since something "perfect" lacks flexibility, creativity and robustness, and cannot be improved upon.
Now "Black Swan" defines the notion of a high-impact, low-probability event, e.g. occurring in financial trading, people’s wealth or popularity - events from an extreme realm. That’s in contrast to normally distributed encounters like outcomes of a dice game, people’s body size or the number of someone’s relatives - encounters from a mediocre realm.
Here’s a short explanation of the mediocre realms. Rolling a regular die will never give a number higher than 6, no matter how often it’s thrown. In fact, the more it’s thrown, the more evenly its numbers are distributed and the clearer its average emerges. Measuring people’s weight or number of relatives shows a similar pattern: the more measurements there are, the more certain the average becomes. Any new encounter will have less and less impact on the average of the total as the number of measurements increases.
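To make that concrete, here’s a toy simulation of my own (not from the book): the average of die rolls converges towards 3.5, and each additional roll moves it less and less.

```python
import random

# Mediocre realm: the running average of die rolls settles down quickly;
# no single roll can throw it off.
random.seed(1)
for n in (10, 100, 1_000, 10_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"{n:>7} rolls: average = {sum(rolls) / n:.3f}")
```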
On the other hand, there are the extreme realms. In trading, wealth or popularity, a single encounter can outweigh the rest of the distribution by several orders of magnitude. Most people have an annual income of less than $100k, but the tiny fraction of society that earns more owns more than 50% of all wealth. A similar pattern exists with popularity: only very few people are popular enough to be known by hundreds of thousands or maybe millions of people, and only very, very few are so super popular that they’re known by billions. Averaging over a given set only works for so long, until a high-impact "outlier" is encountered that dominates the entire distribution. Averaging the popularity of hundreds of thousands of farmers, industrial workers or local mayors cannot account for the impact that a single Mahatma Gandhi has on the total popularity distribution.
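The contrast can be simulated as well (again my own toy example, with an assumed Pareto shape of 1.1 standing in for wealth): in Gaussian data the largest observation is negligible next to the total, while in a heavy-tailed sample a single draw can hold a sizable share of everything.

```python
import random

random.seed(2)

# Mediocre realm: Gaussian "body heights"; extreme realm: Pareto "wealth".
heights = [random.gauss(170, 10) for _ in range(100_000)]
wealth = [random.paretovariate(1.1) for _ in range(100_000)]

for name, data in (("heights", heights), ("wealth", wealth)):
    share = max(data) / sum(data)
    print(f"{name:>7}: largest single value holds {share:.4%} of the total")
```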
Taleb spends a lot of time in the book condemning the application of the Gauss distribution to fields that are prone to extreme encounters, especially economics. Rightfully so, but I would have enjoyed learning more about fields that belong to the extreme realms without being widely recognized as such. The crux of the inapplicability of the Gauss distribution to the extreme realms lies in two things:
- Small probabilities are not accurately computable from sample data, at least not accurately enough to allow for precise decision making. The reason is simple: since the probabilities of rare events are very small, there simply cannot be enough data present to match any distribution model with high confidence (the sketch after this list makes this concrete).
- Rare events that have huge impact, enough impact to outweigh the cumulative effect of all other distribution data, are fundamentally non-Gaussian. Fractal distributions may be useful to retrofit a model onto such data, but don’t allow for accurate prediction. We simply need to integrate the randomness and uncertainty of these events into our decision making process.
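Here’s a minimal sketch of the first point, with assumed numbers: trying to estimate a one-in-ten-thousand event from a sample of ten thousand observations gives estimates that swing between zero and multiples of the true value.

```python
import random

random.seed(3)
p_true = 1e-4   # assumed probability of a rare event, for illustration
n = 10_000      # a seemingly large sample

# Re-run the "study" a few times and look at the estimated probability.
for trial in range(10):
    hits = sum(random.random() < p_true for _ in range(n))
    print(f"trial {trial}: estimated p = {hits / n}")
# Most trials estimate 0.0, others 0.0001 or 0.0002 -- relative errors of
# 100% and more, far too coarse for precise decision making.
```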
Now Taleb very forcefully articulates what he thinks about economists applying mathematical tools from the mediocre realms (Gauss distribution, averaging, disguising uncertain forecasts as "risk measurements", etc.) to extreme-realm encounters like trading results, and if you look for it, you’ll find plenty of well-pointed criticism in the book. But what struck me as very interesting, and analytically a new excavation, is that our trends towards globalisation and high interconnectedness, which yield ever-growing and increasingly bigger entities (bigger corporations, bigger banks, quicker and greater popularity, etc.), are building up the potential for rare events to have higher and higher impact. E.g. an eccentric pop song can make you much more popular on the Internet these days than TV could 20 years ago, and a small number of highly interconnected banks have become so big that they "cannot be allowed to fail".
Considering that humans essentially function as heuristic rather than precise systems (and for good reasons), every human will inevitably commit mistakes and errors at some point, to a lesser or larger degree. Now, admitting that we all err once in a while, a small miscalculation has vastly different consequences depending on whether it happens while shopping for groceries, buying a family house, budgeting a 100-person company, leading a country of many millions, or operating a reserve bank handling trillions in currency.
So the increasing centralisation and growth of giant entities ensures that today’s and future miscalculations are amplified out of all proportion. In addition, use of the wrong mathematical tools ensures that miscalculations won’t be small and won’t be rare; their frequency is in fact likely to increase.
Notably, global connectedness alters the conditions for Black Swan creation, increasing both their frequency and their impact, whether positive or negative. It’s as if our modern society were trying to balance a growing upside-down pyramid of large, ever-increasing entities on top of its head. At some point it must collapse, and that’s going to hurt, a lot!
The third edition of the book closes with essays and commentary that Taleb wrote after the first edition, in response to critics and curious questions. I’m always looking to relate things to practical applications, so I’m glad I got the third edition and can provide my personal highlights to take away from Taleb’s insights:
- Avoid predicting rare events. The frequency of rare events cannot be estimated from empirical observation because of their very rareness (the calculation’s error margin becomes too big). Thus the probability of high-impact rare events cannot be computed with certainty, but because of their high impact we cannot afford to ignore them.
- Limit Gauss distribution modelling. Application of the Gauss distribution needs to be limited to modelling mediocre realms (where significant events have a high enough frequency and rare events have insignificant impact); it’s unfortunately abused far too broadly, especially in economics.
- Focus on impact, not probability. It’s not useful to focus on the probability of rare events, since that’s uncertain; it’s useful to focus on the potential impact instead. That can mean identifying hidden risks, or investing small efforts to enable potentially big gains. I.e. always consider the return-on-investment ratio of activities.
- Rare events are not alike (they’re atypical). Since neither the probability nor the exact impact of remote events is computable, relying on rare impacts of a specific size or around specific times is doomed to fail you. Consequently, beware of others making such predictions and/or relying on them.
- Strive for variety in your endeavors. Avoiding overspecialization, learning to love redundancy and broadening one’s stakes reduce the effect any single "bad" Black Swan event can have (increasing robustness), and variety might enable some positive Black Swan events as well.
The Black Swan idea sets the stage for further investigations, especially the investigation of new fields where the idea is applicable. Fortunately, Nassim Taleb continues his research work and has meanwhile published a new book, "Antifragile: Things That Gain from Disorder". It’s already lying next to me as I type, and I’m happily looking forward to reading it. ;-)
The notion of incomputable but consequential rare events or "errors" is so ubiquitous that many other fields should benefit from applying "Black Swan" or Antifragile classifications and the corresponding insights. Nassim’s idea of increasing decentralization at the state level, to combat the escalation of error potential at centralized institutions, has very concrete applications at the software project management level as well. In fact, the Open Source Software community has long benefited from decentralized development models and, through natural organization, has avoided the creation of the giant pitfalls that occur with top-down waterfall development processes.
Algorithms may be another field where the classifications could be very useful. Most computer algorithm implementations are fragile because they are highly optimized for efficiency. Identifying these can help in making implementations more robust, e.g. by adding checks for inputs and defining sensible fallback behavior in error scenarios; a small sketch of this follows below. Identifying and developing new algorithms with antifragility in mind should be most interesting, however; good examples are caches of all sorts (they adapt to request rates and serve cached bits faster), or the training of pattern recognition components, whose usefulness rises and falls with the variety and size of the input data sets.
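For the robustness point, here’s a minimal sketch of my own (the routine names are hypothetical): a fragile, optimized routine gets wrapped with input checks and a defined fallback, so bad inputs degrade gracefully instead of crashing the program.

```python
def fragile_parse(s):
    # Hypothetical optimized routine: fast, but assumes well-formed input.
    return float(s)

def robust_parse(s, fallback=0.0):
    """Wrap the fragile routine: check inputs, define fallback behavior."""
    if not isinstance(s, str) or not s.strip():
        return fallback              # invalid input: degrade gracefully
    try:
        return fragile_parse(s)
    except ValueError:
        return fallback              # malformed number: fall back, don't crash

print(robust_parse("3.14"))   # 3.14 -- fast path
print(robust_parse("oops"))   # 0.0  -- sensible fallback instead of an exception
```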
The book "Black Swan" is definitely a highly recommended read. However make sure you get the third edition that has lots of very valuable treatment added on at the end, and don’t hesitate to skip a chapter or two if you find the text too involved or side tracking every once in a while. Taleb himself gives advice in several places in the third edition about sections readers might want to skip over.
Have you read "The Black Swan" too, or heard of it? I’d love to hear whether you’ve learned something from it or think it’s all nonsense. And make sure to let me know if you’ve encountered Black Swans in contexts that Nassim Taleb has not covered!