In Part 5 of the How to Find the Needle in a Haystack series, I reviewed the performance of a number of high-profile stock-picking funds and found the numbers underwhelming.
Ironically, Jim Cramer told Mad Money viewers this weekend that the reason for poor performance is that one must adapt to change, that one must know what one's indicators measure, that “there are always new and important indicators popping up that you never cared about, and the old ones outlive their usefulness faster than ever before.” And you have to take his word for it, because he has been in the game since 1979.
This raises a rhetorical question: if the important indicators change all the time, does that mean one can only find out which ones WERE important in the rear-view mirror? Do we need forecasts of forecasts?
What about the stylized facts of financial returns? They don’t change, so why not focus on those? Unfortunately, time series analysis has been in the hands of those who didn’t follow the scientific method, namely: come up with a hypothesis based on first principles, test it, and replicate the results. No, that’s not the way econometrics has been used, but Tim Harford reports that, at last, the con has been taken out of econometrics:
In 1983, Edward Leamer published an article with contents that would become almost as celebrated as its title. “Let’s Take the Con Out of Econometrics” began with an analogy that remains useful. Imagine an agricultural researcher who tests the effectiveness of a new fertiliser by dividing land into strips and spreading the new fertiliser only on a randomly chosen selection of those strips. Because of the randomisation, any effect will presumably be thanks to the fertiliser.
Contrast this scrupulous scientist, continued Leamer, with two agricultural econometricians. One notices that crops grow under trees and, after taking careful measurements, announces that bird droppings increase crop yields; the other has noticed the same phenomenon and declares that it can, with confidence, be credited to the benign effects of shade.
This is the “identification problem” – trying to work out whether a statistical pattern is caused by what we think it has been caused by. It muddies any statistical analysis of data that have not been generated by a controlled experiment, and it particularly plagues econometricians, the statistical wing of the economics profession. But, complained Leamer, throughout the 1970s they too rarely cared, and much of their work was dubious at best. Leamer was not alone. David Hendry showed in 1980 that by using the standard methods of the day, he could demonstrate that rainfall caused inflation. Or was it that inflation caused rainfall?
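Hendry's rainfall-and-inflation demonstration is easy to reproduce in miniature. The sketch below (my own illustration, not Hendry's original method) regresses one independent random walk on another: since neither series causes the other, a valid test should flag a "significant" slope only about 5% of the time, yet with trending series the naive t-test fires far more often. Variable names like `spurious_t` are made up for the example.

```python
import numpy as np

def spurious_t(seed, n=200):
    """Regress one independent random walk on another; return the OLS t-statistic."""
    rng = np.random.default_rng(seed)
    y = np.cumsum(rng.normal(size=n))        # "inflation": a pure random walk
    x_walk = np.cumsum(rng.normal(size=n))   # "rainfall": an unrelated random walk
    x = np.column_stack([np.ones(n), x_walk])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    resid = y - x @ beta
    sigma2 = resid @ resid / (n - 2)                       # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(x.T @ x)[1, 1])    # std. error of slope
    return beta[1] / se

# If the usual t-test were valid here, |t| > 1.96 would occur in roughly
# 5% of simulations. With random walks it occurs in the vast majority.
t_stats = [spurious_t(seed) for seed in range(200)]
share = np.mean([abs(t) > 1.96 for t in t_stats])
print(f"'significant' slope found in {share:.0%} of 200 simulations")
```

The lesson is Leamer's: without an identification strategy, a confident t-statistic tells you nothing about causation, or even about a genuine relationship.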
That was then. Now Joshua Angrist of the Massachusetts Institute of Technology and Jörn-Steffen Pischke of the London School of Economics have published a new working paper arguing that econometrics has undergone a “credibility revolution”. Angrist and Pischke argue that the identification problem is now being faced head on and for many questions it is being solved. Modern econometrics works.
Given the recent financial crisis, I pause for sceptical chuckles, but academic econometrics is rarely used for forecasting. Instead, econometricians set themselves the task of figuring out past relationships. Have charter schools improved educational standards? Did abortion liberalisation reduce crime? What has been the impact of immigration on wages?
Notice that this is exactly the same way people go about trading. No wonder everything works until it doesn’t: it only worked by coincidence in the first place.
The Baltic Dry Freight Index
Curiously, Cramer talked up the importance of the Baltic Dry Index, an indicator whose usefulness I questioned last week.
I checked Bloomberg, and unless there is something wrong with my eyes, this is not a new high. But at least there is good news: U.S.-Bound Boxes Pile Up in Asia as Lines Avoid Adding Ships.
MORE: How a 10-Year Winning Streak Ended
MORE: Jordan Kotick: If The Shoe Fits…