The great Indian mystic Yogi Berra once said "In theory, there's no difference between theory and practice. In practice, there is." There is also the story about the economist who saw something working in practice and ran hurriedly back to his study to see if it worked in theory.
We have a view of experts: they know a certain amount, so when we need to make a guess into the unknown, they are staring out from a higher platform and are thus better prepared to make that guess. Nassim Nicholas Taleb states that his experience is the opposite: experts who guess beyond what they know are actually worse estimators than talented amateurs who retain an awareness that they are guessing. His thought is that the solid knowledge behind any decision can be sparse, and "experts" who study a topic move quickly into the area of guesswork and speculation. But because these guesses are held in consensus and are part of a culture, the experts come to believe this knowledge is just as solid as the basic data. This spins quickly out of control into intellectual fashions that the experts all agree on, but that grow shakier with each successive floor added to the building.
Taleb suggests a reverse rule: every yard an expert believes he knows beyond what he actually does know should be subtracted from his real knowledge, since it represents something that will have to be unlearned, which is difficult. It works something like "absolute value," if you remember that concept from high school algebra: guessing a furlong beyond one's actual knowledge is equivalent to falling a furlong short of it. Thus even a brilliant person with a lot of knowledge can throw it away and become useless by pretending to know what is actually unknown. So the virtue of humility and the discipline of skepticism become as important as knowledge itself.
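If it helps to see the reverse rule as arithmetic, here is one reading of it in a few lines of Python (my gloss, not Taleb's notation):

    def effective_knowledge(actual, claimed):
        # Knowledge claimed beyond what is actually known counts against
        # you, like an absolute-value penalty: a furlong over is as bad
        # as a furlong short.
        overreach = max(claimed - actual, 0)
        return actual - overreach

    print(effective_knowledge(actual=10, claimed=13))   # 7, not 10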
I did mention that "skepticism" has only recently come to be applied primarily to religious questions, didn't I? Until the 20th century (maybe the late 19th), it applied to being skeptical about knowledge in general and how we arrived at it.
For starters, consider Paul Krugman and Noam Chomsky. I have seen them wrong often enough that I initially discount almost everything they say.
There's a story about an experimental physicist who excitedly brought the results of his latest study to the local theorist. The theorist told him there had been no need to do the study, since the results were easily predictable, and he went on to explain exactly why the points should lie on the curve they did. The experimentalist, a bit bewildered, then realized he had presented the graph upside-down. When he corrected his mistake, the theorist said, "Of course, this is even more obvious," and went on to explain the new look of the curve.
I have read that what Paullie "The Beard" Krugman has written divides almost equally between arguments for and arguments against the same things.
As the love of money is the root of all evil, so the love of theory is the root of all folly.
In physics the rule is that you don't have a measurement until you understand the error on the measurement. By the same token, you don't understand a model until you understand the limits of the model.
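A toy illustration of that rule in Python (the numbers are invented): combining measurements only makes sense if the error bars travel with the values.

    import math

    def weighted_mean(values, sigmas):
        # Inverse-variance weighting: measurements with smaller errors
        # count for more, and the combined result carries its own error.
        weights = [1.0 / s**2 for s in sigmas]
        mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        sigma = math.sqrt(1.0 / sum(weights))
        return mean, sigma

    m, s = weighted_mean([9.79, 9.82, 9.75], [0.03, 0.05, 0.04])
    print(f"g = {m:.3f} +/- {s:.3f}")   # the error bar is part of the result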
But it's so easy to just work with the model...
(Slogans are a kind of model too)
"Working with a model" should mean letting the model crank out a result by doing a series of complicated transactions that it saves us from having to do by hand one by hand. Then you'd check that result against experimental results and see if your model expressed a good hypothesis. What bad model scientists are doing now isn't working with a model, it's treating a model as if it were a substitute for evidence, even to the point of believing its predictions are cast in stone, no matter how bad its performance has been to date. It's unutterably weird to have conversations with people who have apparently lost the ability to distinguish between checking a prediction after you've generated it, and back-fitting a curve to force it to fit some data. They think if they keep curve-fitting every time it fails, that means they've made it more accurate as a predictor.