Is market timing using widely used, publicly available data possible? One popular approach, the Fed model, has largely been discredited on a number of grounds. That is not going to stop many analysts from trying to generate market-timing models – the potential gains are simply too big. One need only look at this graph from Ticker Sense to see the attraction of market timing.

We noted yesterday a piece by Mark Hulbert in the New York Times that examines a market timing model derived from the Value Line Investment Survey. Barry Ritholtz spends some more virtual ink on the model and admires the fact that the inputs into the model are "unbiased and uncorrupted." Those are indeed admirable qualities, and the Value Line-based model is an interesting one.

Although we have mentioned it previously, analogous data is also available from Morningstar. Like Value Line, Morningstar has dozens of analysts who generate fair value estimates for hundreds (if not thousands) of companies using a consistent process. Morningstar compiles the deviation from fair value for its coverage universe in a series of graphs. As you can see, the measure dipped decisively into undervalued territory last week, which coincided nicely with the bounce we saw. We do not remember seeing any statistical work done on this measure, in part because it has a shorter history than the Value Line system, but it would be worth examining.
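The mechanics of such an aggregate valuation signal are straightforward. Here is a minimal sketch, using made-up prices and fair value estimates (Morningstar's actual aggregation method is not spelled out here, so treat the median ratio as an illustrative assumption):

```python
def market_valuation_signal(prices, fair_values):
    """Median price/fair-value ratio across a coverage universe.

    A ratio below 1.0 suggests the universe trades below analysts'
    fair value estimates (undervalued); above 1.0, overvalued.
    """
    ratios = sorted(prices[t] / fair_values[t]
                    for t in prices if t in fair_values)
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else (ratios[mid - 1] + ratios[mid]) / 2

# Hypothetical three-stock universe for illustration only
prices = {"AAA": 45.0, "BBB": 30.0, "CCC": 88.0}
fair_values = {"AAA": 50.0, "BBB": 28.0, "CCC": 100.0}

signal = market_valuation_signal(prices, fair_values)
print(round(signal, 3))  # 0.9 -> universe modestly undervalued
```

Tracking that single number through time is what produces a graph like Morningstar's, with dips below 1.0 marking the "undervalued territory" described above.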

Paul Kedrosky pointed to a forthcoming paper by Kenneth L. Fisher and Meir Statman, "Market Timing in Regressions and Reality" (pdf), that compares the market timing ability of valuation-based measures, like P/Es and dividend yields, against sentiment measures. While they find the sentiment measures superior to the valuation measures, neither was particularly useful. This may be due in part to the difficulty of parsing the two measures with any great foresight. You can access more of Prof. Statman's research here.
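The "regressions" half of the paper's title refers to a standard test: regress subsequent returns on a starting valuation measure. A minimal sketch of that test, with made-up annual data chosen purely to show the mechanics (not the paper's actual data or estimates):

```python
def ols(x, y):
    """Simple least-squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical annual observations: starting P/E and the next year's return
pe = [12.0, 15.0, 20.0, 25.0, 30.0]
fwd_return = [0.15, 0.10, 0.08, 0.02, -0.05]

a, b = ols(pe, fwd_return)
print(f"intercept={a:.3f}, slope={b:.4f}")
# A negative slope says high starting valuations preceded low returns
# in this sample.
```

The Fisher and Statman point is precisely that an in-sample relationship like this negative slope tends not to translate into usable real-time signals.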

A quick aside. One reason we have been highlighting seasonality-based research is that it is, to a degree, model-free. One does not need to estimate potentially unstable parameters to use a seasonality-based model.

The bottom line is that it is not altogether difficult to derive an interesting market timing model. Let loose enough finance students with the data and a stats package and you will come up with something. The challenge is in successfully implementing any such model. The question is: what do you do when the model hits a rough patch?

For instance, most valuation-derived models would have had you sit out the last couple of years of the Internet bubble. In retrospect that would have been the right call, but at the time the regret generated by missing the seemingly daily gains would have been excruciating. The same goes for when a model signals a buy. Those signals usually come when the market is going through all sorts of distress.

During those times of underperformance, the temptation is to ignore the signal from the model. The other temptation is to tweak, or even throw out, the existing model in favor of a new and improved one that accommodates our biases about the market at hand. If the model is a solid one, then both of these decisions would be mistakes.

So while the quest for the holy grail of investing, a foolproof market timing model, continues, it is important to realize that the model itself is only one part of the equation. Coming to grips with the psychological aspects of model-based investing is just as important.