
Perspectives of a Financial Adviser (Part 2)

I recently had an e-mail correspondence with a financial adviser about the lack of inferential statistics in the Craig Israelsen article I wrote about here. Today I will discuss more of her points.

In an e-mail, the adviser continued:

     > And yet as indicated above, even the most rigorous
     > methods of behavior modeling or predictive algorithmic
     > trading may not remain valid… in the real world of
     > traders and cheaters and wars and unpredictable storms
     > or droughts or inexplicable shortages of sugarcane.
     > The environment in which the science is applied is not
     > static nor predictable. They will allow you to “play a
     > better hand of poker” so to speak, but are not a
     > formula in the strictest sense. Finance as a science
     > is not yet, (and may never be) well-developed enough
     > to agree on basic assumptions about the environment
     > of application…

I consider this a cop-out. The disclaimer "past performance is no guarantee of future results" is cliché and factored into any valid system development methodology. Historical events will never replicate exactly in the future, but every interval of time has its own set of potentially market-moving elements. The point of applying statistics is to chop, dice, and recombine a comprehensive survey of the past in a sufficiently large number of ways to allow for study of the resulting distributions of net return and drawdown, all with the implicit understanding that this is a game of probabilities rather than certainties.
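
To make the idea concrete, below is a minimal sketch of that kind of resampling, assuming a daily return series from a backtest. The synthetic returns, the bootstrap approach, and the resample count are illustrative assumptions on my part rather than any specific methodology referenced above.

```python
# Minimal sketch: resample historical returns many times and study the
# resulting distributions of net return and maximum drawdown.
# The return series here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a historical daily-return series (e.g., from a backtest).
historical_returns = rng.normal(loc=0.0004, scale=0.01, size=2500)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1 + returns)
    peaks = np.maximum.accumulate(equity)
    return np.min(equity / peaks - 1)

n_resamples = 5000
net_returns, drawdowns = [], []
for _ in range(n_resamples):
    # "Chop, dice, and recombine": sample days with replacement (bootstrap).
    sample = rng.choice(historical_returns, size=historical_returns.size, replace=True)
    net_returns.append(np.prod(1 + sample) - 1)
    drawdowns.append(max_drawdown(sample))

# Summarize the distributions rather than a single historical outcome.
print("Net return   5th/50th/95th pct:", np.percentile(net_returns, [5, 50, 95]))
print("Max drawdown 5th/50th/95th pct:", np.percentile(drawdowns, [5, 50, 95]))
```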

And in most cases, I believe some effort to do this modeling is better than no effort at all.

     > If you’re asking, should he report the outperformance
     > if it’s not found to be statistically significant… I
     > will say that it is not commonplace in retail investing
     > (non-institutional and non-academic) to report
     > outperformance with statistical significance measures.

The lack of peer review rears its head once again.

In my opinion, a lack of statistical significance means the data cannot be used to formulate reliable conclusions (except perhaps conclusions to the contrary). Retool the study and do what it takes to achieve statistical significance so we can at least quantify the probability that the result is a fluke.
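
For illustration, here is a minimal sketch of what such a significance measure could look like: a one-sample t-test on hypothetical monthly excess returns, yielding a p-value that quantifies how readily the observed outperformance could arise by chance. The data and the choice of test are my own assumptions for demonstration, not anything taken from the study in question.

```python
# Minimal sketch: test whether mean monthly outperformance versus a benchmark
# is distinguishable from zero. The excess-return series is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Stand-in for monthly excess returns (portfolio minus benchmark) over 15 years.
excess_returns = rng.normal(loc=0.001, scale=0.02, size=180)

t_stat, p_value = stats.ttest_1samp(excess_returns, popmean=0.0)

print(f"Mean monthly outperformance: {excess_returns.mean():.4%}")
print(f"t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")
# A large p-value means the observed outperformance is consistent with luck;
# reporting it alongside the headline number is the point being argued above.
```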

This lack of peer review, along with the convention of reporting conclusions without statistical analysis, brings me back to the question asked in my last post: is this the “knowledge” that filters out to the retail public, or does the public get what the academics publish in peer-reviewed journals? If it’s the former, then my gut reaction is to be scared while at the same time reaching the epiphany that this may be why I perceive so many flaws in reasoning when I review financial material.