Option Fanatic: options, stock, futures, and system trading, backtesting, money management, and much more!

More About KD (Part 2)

Today I will conclude my complete detailing of KD (not Kevin Durant).

KD has an interesting business model that complements his personal agenda quite well. As mentioned in the third paragraph here, developing trading systems is hard. If you purchase his product, then you are eligible to participate in his program. The program collects viable trading strategies that satisfy his system criteria. If you submit a viable strategy that is subsequently profitable over the following six months, then he will give you at least one passing strategy (sometimes more), e.g. from other customers in the same or a different month, in return. With this business model, his own customers feed him viable strategies! This clearly augments his personal agenda as a full-time trader. The program also serves as a productivity multiplier for his customers: for the time it takes to develop one passing strategy, more will be returned.

I cannot recommend you train under KD for one and only one reason: I have not yet had success finding any viable strategies. I mentioned the “specter of scam” in the fourth paragraph here. In case you haven’t looked around lately, fraudulent sales pitches abound in the financial industry. I have written about CNBC’s American Greed (e.g. here and here). I’ve also posted on misconduct in the financial industry. I have to wonder whether KD is bogus, too (and he would want me to wonder, based on the 5-part mini course discussed last time). It wouldn’t be the first time a nice personality has engaged in criminal behavior.

The possibility also exists that what KD teaches doesn’t work for the vast majority of his customers. If this is true, then whether he has an ethical obligation to disclose while marketing his wares is a matter for debate.

KD rates well on one particular website that focuses on trader education programs. This site appears to review nearly 300 different education programs (and services). The founder claims to be a once-upon-a-time charlatan who did cold calling sales to swindle investors. He claims to have spent nearly three years in prison for these efforts and has since become one of the “good guys” who knows fraud, Ponzis, and optionScam.com when he sees it.

Interestingly, about the only positive review I can find on this site focuses on KD. A couple public comments linked to that review suggest KD owns the site. I can’t imagine KD would have time to do all the reviews on that site, maintain the entirety of his online presence, and trade for a living. No way.

I suppose he could be delegating responsibility to an entire team, though. Hmmm…

More about KD (Part 1)

I wanted to go into a bit more detail about the guy I discussed here.

It’s not Kevin Durant.

KD is a vendor of trader education. Beware vendors and the inherent conflicts of interest that accompany them. Red flags should fly whenever I encounter people who have something to sell. I should be knowledgeable about frauds, scams, and dubious claims, some of which have been subjects of this blog (e.g. here, here, and here).

Like some vendors, KD has his own piece to share on steering clear of snake oil salespeople. As clickbait, he offers a free 5-part mini course on due diligence for trading educators. Writing such a piece is good marketing because the third-person language (it/they) naturally sets him apart from what he is writing about, as if to say “I am not one of them.”

KD’s claim to fame is taking first place in the 2006 World Cup of Futures Trading Championships in the midst of three consecutive years of 100%+ returns. He was later profiled as a “market master” in The Universal Principles of Successful Trading by Brent Penfold (2010).

Part of enjoying success as a guru or trading educator is perfecting the online marketing piece, which KD has certainly done. I first encountered him as a guest contributor on the System Trader Success blog. He moderates an algorithmic trading contest on the Big Mike Trading website (now futures.io). He was a frequent contributor to SFO and Active Trader magazines (no longer in existence) along with Futures magazine (still running). He has written at least one print book—Building Winning Algorithmic Trading Systems—along with multiple e-books that he generously circulates. I’ve seen KD post on at least two popular investment/trading forums. He has a number of videos on YouTube. He often does webinars from his home, and you can watch him speak with kindness and humility.

As a further way to establish a presence and boost his reputation, KD often posts online responses to negative reviews. Personally, I grow more suspicious when I see responses to negative comments. On its own, I may or may not believe a negative review since many are fake (see second-to-last paragraph). When the subject of the review appears defensive, then I suspect the allegation to be more authentic. I’m guessing KD disagrees else he wouldn’t do this.

KD answers questions online, which is a shining star, and he is really good at answering questions via e-mail. I have e-mailed him numerous times. He is a prompt responder who never responds with frustration and addresses all questions.

One of KD’s common responses to my questions is “there is no right or wrong answer.” I find this to be an interesting phenomenon that I will revisit at a later time.

Trading System Development 101 (Part 3)

Choice of objective function is just the tip of the iceberg. As the last two posts have made clear, trading system development can be a very individual process.

Granularity of the variable grid is another important detail that has no right answer. A coarse (less granular) grid has fewer potential variable values whereas a fine (more granular) grid will have more. Testing from 10-30 by 10 gives me three values to test (10, 20, 30) whereas testing from 10-30 by 2 gives me 11 values to test. A more granular grid will result in more iterations (third-to-last paragraph here) and more processing time.
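As a minimal sketch of the arithmetic above, here is how step size sets the number of values tested. The function name is illustrative, not from any particular backtesting platform:

```python
# Sketch: how grid step size ("granularity") sets the number of values tested.
# Assumes a simple inclusive integer range; names are illustrative only.

def grid_values(lo, hi, step):
    """Return the inclusive list of variable values from lo to hi by step."""
    return list(range(lo, hi + 1, step))

coarse = grid_values(10, 30, 10)   # 3 values: 10, 20, 30
fine = grid_values(10, 30, 2)      # 11 values: 10, 12, ..., 30

print(len(coarse), len(fine))  # 3 11
```

Note that with two variables the counts multiply (3 × 3 = 9 iterations vs. 11 × 11 = 121), which is where the extra processing time comes from.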

If I want to differentiate between spike and high plateau regions, then I really need increased granularity.* Recall the diagram in the last blog post linked. If one [variable] value tests well, then a high plateau region means an adjacent one should too. This makes more sense for 10 or 14 relative to 12 than it does for 10 or 30 relative to 20. Ten or 14 are only 17% away from 12 whereas 10 or 30 are 50% away from 20. Qualitatively, 17% away may be “similar” while 50% away is apples and oranges, rendering exploration of the surrounding space impossible. Signal can fall right through the cracks with a coarse grid.
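The plateau-versus-spike distinction can be sketched as a neighbor check: a good value whose adjacent values also test well suggests a plateau. The results below are made-up numbers for illustration, not real backtest output:

```python
# Sketch: distinguish a "high plateau" from a "spike" by checking whether
# values adjacent to a good one also clear the performance threshold.
# All numbers are hypothetical.

def looks_like_plateau(results, value, step, threshold):
    """True if `value` and both neighbors (value +/- step) clear the threshold."""
    neighbors = (value - step, value, value + step)
    return all(results.get(v, float("-inf")) >= threshold for v in neighbors)

results = {10: 0.9, 12: 1.1, 14: 1.0,   # neighbors agree: plateau around 12
           18: -0.4, 20: 1.2, 22: -0.2} # neighbors disagree: spike at 20

print(looks_like_plateau(results, 12, 2, 0.5))  # True
print(looks_like_plateau(results, 20, 2, 0.5))  # False
```

A coarse grid (step 10) would never even generate the neighboring results needed to run this check, which is the "falling through the cracks" problem.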

Feasibility testing is where I can tweak the strategy to get a good result. This is curve-fitting on a small percentage of the total data. If this results in an overfit model, then the strategy will be subsequently rejected when I test on a larger portion of data.

I found number of trades to be a recurring challenge with feasibility testing. Basic statistics (see third-to-last paragraph here) dictate a preference for larger sample sizes when evaluating test results. I don’t feel confident in the outcome when one or two routine trade results can drastically skew it. I’m not looking for an enormous number of trades since this is just a 2-year feasibility test, but if I get too few then I’ll be concerned. What minimum number to accept—all together now!—has no right answer.
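The fragility of small samples can be shown with a quick sketch. The trade results here are illustrative numbers only, assuming one outlier loss dropped into samples of different sizes:

```python
# Sketch: why a small trade count is fragile. One outlier trade moves the
# average far more in a 10-trade sample than in a 100-trade sample.
# All numbers are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

small = [100] * 9 + [-2000]    # 10 trades, one bad outlier
large = [100] * 99 + [-2000]   # 100 trades, same outlier

print(mean([100] * 10))  # 100.0: baseline with no outlier
print(mean(small))       # -110.0: one trade flips the result negative
print(mean(large))       # 79.0: the same outlier barely dents it
```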

I have a few ideas on how to increase the number of trades. A strategy that closes or reverses direction when the opposite trigger occurs is fine as long as the triggers occur with some regularity. If only 1-2 triggers occur per year, then over two years I may get a tiny number of trades. I’ve tried implementing profit targets, stop losses, and m-day exits to increase the number of trades. Feasibility testing is the time to do this; if I get a tiny sample size of trades over the full dataset, then I must junk the strategy and move on to avoid curve fitting.
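One of the exit tweaks mentioned above, the m-day exit, can be sketched minimally. Entry days and the calendar length are stand-in values; this is not the author's actual implementation:

```python
# Sketch: a time-based ("m-day") exit that caps the holding period, so each
# rare trigger becomes a discrete, bounded trade rather than a position held
# until the opposite trigger fires. Entry days are hypothetical.

def apply_m_day_exit(entry_days, m, total_days):
    """Given entry day indices, return (entry, exit) pairs closing after m days."""
    return [(d, min(d + m, total_days - 1)) for d in entry_days]

# Suppose a slow trigger only fires on days 5 and 180 of a 252-day year.
trades = apply_m_day_exit([5, 180], m=10, total_days=252)
print(trades)  # [(5, 15), (180, 190)]
```

Without the exit, the day-5 entry might ride until day 180; with it, the position is closed after 10 days and the system is free to take the next signal.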

* — Of course, if subsequent steps in the development process don’t do this then why be concerned at all?

Trading System Development 101 (Part 2)

Today I am discussing unknowns in the feasibility testing phase.

Variable range selection can significantly affect results and may have no correct answer. If I have a short-term signal and I test a strategy over a short-term range (e.g. 10-30), then I am more likely to hit the critical value of 70% profitable than if I test over a mixed range (e.g. 10-90) or a longer-term range (e.g. 30-90).

Number of iterations can also significantly affect results and may have no correct answer. In terms of the granularity of the pass rate, success or failure of one iteration contributes 2% of the total for 50 iterations vs. 0.2% of the total for 500 iterations (target = 70%).
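The arithmetic above amounts to a one-liner, shown here as a minimal sketch:

```python
# Sketch: the weight of a single iteration in the pass-rate calculation.
# With a 70% target, one result swings the rate by 1/N.

def iteration_weight(n_iterations):
    """Fraction of the pass rate contributed by one iteration."""
    return 1 / n_iterations

print(iteration_weight(50))   # 0.02  -> each iteration is 2% of the total
print(iteration_weight(500))  # 0.002 -> each iteration is 0.2% of the total
```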

What segment of the data to use for feasibility is another important detail that may have no right answer. Doing feasibility testing on one 2-year period that represents a particular market environment may generate significantly different results than feasibility testing on another 2-year period. In trying to find a strategy for any given futures market, feasibility testing over multiple environments would be ideal.

Testing over multiple market environments first requires a listing of said environments. This is a subjective task (vulnerable to hindsight bias) that also has no right answer. Were I to pursue this, then I should also determine how often the different environments occur. This might feed back to help determine whether this is worth doing at all (e.g. is it worth testing on exceedingly rare market conditions?).

In my testing thus far, I have eschewed all this and simply chosen to rotate the 2-year feasibility period. How I do the rotation probably doesn’t matter as long as I do it to give strategies suited to different market environments a chance. If I end up testing 100 strategies on a futures market, then maybe I test 10 on each 2-year period within the full 10 years. I will have false negatives, as I discussed in Part 1, but such remains an inevitable reality of this approach.
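The rotation described above can be sketched as assigning each candidate strategy to one of the five 2-year windows in turn. Strategy IDs and the modulo scheme are my illustrative choices, not a prescription from the post:

```python
# Sketch: rotating the 2-year feasibility window across a 10-year dataset so
# candidate strategies are spread over different market environments.
# The round-robin assignment scheme is hypothetical.

def feasibility_window(strategy_id, total_years=10, window_years=2):
    """Cycle strategies through the non-overlapping 2-year windows in order."""
    n_windows = total_years // window_years              # 5 windows in 10 years
    start = (strategy_id % n_windows) * window_years
    return (start, start + window_years)                 # (start_year, end_year)

# Strategies 0-4 land on (0,2), (2,4), (4,6), (6,8), (8,10); then it repeats.
print([feasibility_window(i) for i in range(6)])
```

Testing 100 strategies this way puts roughly 10 on each window twice over, matching the 10-per-period allocation mentioned above.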

I have to be a bit lucky to get a strategy to pass feasibility testing, which brings to mind two possibilities. First, it’s okay to rely on some luck and incur some false negatives since I have an infinite number of potential strategies to test. Because feasibility failure decreases the possibility that I’m dealing with a viable strategy (see last long paragraph in Part 1), I should feel good about moving on to the next candidate and minimizing wasted time.

Alternatively, perhaps some method exists that eliminates false negatives (and any reliance on luck) by testing everything. I think this would require an enormous amount of processing power (and programming), though. I already encounter processing delays with my mediocre hardware: a 10-year backtest with 70 iterations takes up to 25 minutes. Many people backtest with hundreds (or even thousands) of iterations, which would take my computer all night to run.

To feasibility test or not to feasibility test: the two approaches differ in how time will be spent. With feasibility testing, extra time will be spent testing additional strategies since some viable strategies are inadvertently dismissed. Without feasibility testing, extra time will be spent, over a much longer period, testing strategies that would otherwise have been dismissed in feasibility.

I will continue next time.

Trading System Development 101 (Part 1)

Today is the day I start talking more specifically about the trading system development process.

Numerous articles on trading system development methodology can be found on the internet. For this reason, I won’t go into extreme detail. I do hope to delve into a few deeper theoretical issues.

As discussed here, I am trying to come up with a model that has a high SNR. The Eurostat website says:

     > Statistical tests of a model’s forecast performance are commonly conducted
     > by splitting a given data set into an in-sample [IS] period, used for the initial
     > parameter estimation and model selection, and an out-of-sample [OS] period,
     > used to evaluate forecasting performance.

I previously showed how I can use IS data to make a backtested strategy look really good. This is what I want to avoid. Ultimately, the only thing I care about is that the strategy perform well on OS data, which has yet to be seen.

I expect a model that trains on some data (IS) to test well on that data (IS). I need to find out whether this correlates with how it will test on future data (OS). To do this, I train (develop) the model on IS data and test the model on OS data.
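The IS/OS split described in the Eurostat quote can be sketched in a few lines. The 80/20 ratio and the placeholder price series are my illustrative assumptions:

```python
# Sketch: splitting a chronological price series into in-sample (IS) data for
# model development and out-of-sample (OS) data for evaluation.
# The 80/20 split fraction is an illustrative choice.

def is_os_split(prices, is_fraction=0.8):
    """Return (in_sample, out_of_sample) chronological slices."""
    cut = int(len(prices) * is_fraction)
    return prices[:cut], prices[cut:]

prices = list(range(2520))            # ~10 years of daily closes (placeholder)
is_data, os_data = is_os_split(prices)
print(len(is_data), len(os_data))     # 2016 504
```

The slice is chronological on purpose: shuffling before splitting would leak future information into the training data.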

I start with a feasibility study, which will be limited to a small percentage of the total data available. Maybe I designate two out of 10 total years of data to be used for feasibility testing. For each variable in the strategy, I then want to define ranges and test strategies across those ranges. I discussed this in the second-to-last paragraph here. I will then designate a strategy worthy of further consideration if at least 70% of the iterations are profitable (using net income as my objective function).
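The pass rule just described can be sketched directly. The iteration results below are made-up net income figures, not real backtest output:

```python
# Sketch: flag a strategy for further consideration if at least 70% of its
# grid iterations are profitable (net income > 0). Numbers are hypothetical.

def passes_feasibility(net_incomes, threshold=0.70):
    """True if the profitable fraction of iterations meets the threshold."""
    profitable = sum(1 for x in net_incomes if x > 0)
    return profitable / len(net_incomes) >= threshold

print(passes_feasibility([120, 45, -30, 80, 200, 15, -10, 60, 90, 50]))    # True (8/10)
print(passes_feasibility([120, -45, -30, 80, -200, 15, -10, 60, 90, 50]))  # False (6/10)
```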

Understand that the feasibility study includes arbitrary features that should be determined ahead of time by the individual developer. How many years should be used for feasibility testing? I said “small percentage;” there is no right answer. What percentage of iterations must test well in order to pass feasibility? I said 70%; there is no right answer. What fitness function to use? No right answer. Which segment of the data will be selected for feasibility testing? I will discuss this one in further detail later, but here’s a sneak peek: no right answer.

The whole notion of feasibility study is not without critique due to the possibility of false negatives. No strategy performs well at all times, and I will miss out on a viable strategy if the feasibility study happens to be a portion of time where the strategy lags. I’m playing the probabilities* here. A viable strategy is more likely to perform well during more periods. Given this, a viable strategy is more likely to pass feasibility. None of this is ever a guarantee.

I will continue discussion of other unknowns in the next post.

* — The best we can ever hope for when it comes to trading and investing.

Fire and Fortitude in Algorithmic Trading (Part 2)

In order to be an algorithmic trader, I think one must have a burning desire (fire) to beat the market as well as the fortitude to plod through significant failure with regard to strategy development.

I left off with mention of KD’s approach to algorithmic trading. He also says discovery of the first viable strategy may take an extra long time with subsequent viable strategies becoming easier.

I think one of the biggest challenges to trading system development is sticking with the process and maintaining the drive to continue despite serial failure. I’m here to tell you: getting knocked over the head so many times is absolutely brutal. As a pharmacist, most everything I did either accomplished something or got me one step closer. With trading system development, I need to reprogram myself to get positive reinforcement from failure. I need to regard every rejected strategy as being one step closer to viability.

I think the specter of scam combines with serial failure to create a harsh double whammy that few people can overcome. Until I find a viable strategy on my own, I have only others to trust. As failure mounts, a belief that the process is somehow flawed becomes harder to resist. Maybe the talk of viable strategies is all just a good marketing story used to pad the pockets of people like KD. How do we know he has actually found success himself? This is consistent with the comments from [11]. I have reached out to almost 15 algorithmic traders to share experiences and have gotten only two responses. I would not be surprised if most spin their wheels for a time and subsequently exit the arena.

I am not saying KD is a con artist, but like the broader financial industry, he does have a strong underlying motive to get us to believe. If nobody believes, then his business selling trading education will struggle. For the broader financial industry, when people stop believing and sell everything, investment advisors experience a severe decline in assets under management and generate less revenue. They therefore have an inherent conflict of interest when recommending clients “stay the course” and ride out market turbulence.

Fire and Fortitude in Algorithmic Trading (Part 1)

Today I want to review where I’ve been (starting here) in this discussion of algorithmic trading and then move forward.

I have three main critiques of technical analysis (TA). First, it’s not always objective (as discussed here and here). Second, it’s rarely backed by supporting evidence. Finally, it seems really difficult to find viable trading strategies.

With regard to critique #3, I feel like I want to climb to the nearest mountaintop and yell “nothing works, folks!” An infinite number of possible strategies awaits testing, though, as mentioned in [1]. If I have 40 different indicators, then I can test them on 20 different markets, throw in daily + 4h + hourly + 5m time frames, and voila: 3,200 strategies. What I have tested in my first three months is tiny in comparison.

Seldom have I experienced failure like I have in my brief stint as a quant. One by one, I saw hallmarks of popular TA fall by the wayside: momentum, moving averages, Bollinger Bands, RSI, Williams %R, trend-following, mean-reversion, etc. None of these have worked sufficiently well over a long period of time, and many are losers. I went through books in which each setup presented was accompanied by [deceptive?] claims of live-trading success on different markets. None of these worked either. Having immersed myself in the financial domain for a while, I’m not surprised to see an indicator or two not work very well. I have read books and articles and sat through countless presentations on these indicators over the years, though, and failure to find a single viable strategy is quite the shock.

My discipline and commitment have eroded over time. I started by testing one strategy per day. As I learned the process better, I increased to two, five, and eventually 10/day. I then started to run lean on strategy ideas. It’s very important to have these in strong supply, because once my time had to be devoted to finding new ideas, my daily testing decreased and I felt less productive as a result. Up until that point, I was able to achieve my daily goal, but as I watched the tally of total strategies tested continue to mount, my frustration and fatigue got real.

What I experienced is to be expected according to algorithmic trading guru KD. KD is a vendor of algorithmic trading education. He has my respect (until he doesn’t, and I’ll let you know if that happens). He says we can expect to find one viable strategy for every 100 – 200 strategies tested. He also says to break for a few weeks if you start to get frustrated and impatient.

In other words, those who undertake this trading approach will need to settle in for a very long haul…

Does Any Technical Analysis Work? (Part 6)

Today I conclude with my internet sample of opinions on whether technical analysis (TA) can actually work.

Here’s an excerpt from an article by Proinsias O’Mahony:

     > While TA remains widely used, that doesn’t mean it’s not bunkum.
     > Indeed, many technical traders would be the first to accept that
     > the field is full of charlatans. As bond expert and author Martin
     > Fridson has written: “The only thing we know for certain about
     > TA is that it’s possible to make a living publishing a newsletter
     > on the subject.”

See my post here.

     > Such newsletters are full of references to obscure Japanese
     > candlestick chart patterns, Elliott Wave theory, Fibonacci
     > numbers, and all kinds of other vague and unverified assertions.

I wrote about some of this here.

After all the opinions in these five posts, what do we have?

The four excerpts from Part 2 are optimistic and give me hope.

The two excerpts from Part 3 suggest that anything we find in books, seminars, or on the internet will not work.

The commentary from Part 4 suggests that no simple indicators will work and that bots available for purchase will not work over the long term. The latter is not new because I don’t expect any system to work forever.

[9] – [11] in Part 5 argue that no bots for sale will ever work, no .pdf guidelines or strategies based in TA will work, no simple TA-based systems will work, and that institutions advertising advanced TA-based departments with oodles of computing power and academic doctorates are really for marketing purposes only.

Today’s excerpts argue against TA like most of the others I have surveyed.

Of all this, I have to hope [1] – [4] are correct. It may not be easy, though. Although my initial impressions do not concur, at least one prolific writer in the algorithmic trading space says simple strategies are all we need.

As an aside, O’Mahony also provides some ammunition in case I ever want to sound off on fundamentals again (see second-to-last paragraph here):

     > The same can be said of fundamental analysis, however.
     > Billionaire investor Mark Cuban once scoffed that “fundamentals
     > is a word invented by sellers to find buyers… metrics created
     > to help stockbrokers sell stocks, and to give buyers reassurance
     > when buying stocks.”
     >
     > That may be cynical, but it’s a documented fact that most
     > fundamental managers underperform the markets. Most investors
     > would be better adopting a buy-and-hold approach rather than
     > painstakingly studying stockbroker notes in a futile attempt
     > to gain an edge.

Do you know if any successful traders even exist?

Next time, I will come back to discussion of my personal experience thus far.

Does Any Technical Analysis Work? (Part 5)

Today I continue with a sample of internet opinions on whether algorithmic trading with TA can actually work.

     > If someone created a bot that is profitable with good metrics,
     > they would never sell it. Ask yourself this: if you have an ATM
     > that magically refills itself every night, would you sell it for
     > petty cash? The ones selling bots are… not producing a
     > profit… [9]

I’ve heard this many times and find it to be a compelling but not ironclad argument. If I had a profitable bot and a large chunk of money, then I could certainly turn it into millions. If I had little money, then I might still benefit from selling the bot to others for point-of-sale profit while executing it myself to make a high percentage return on a small amount. Alternatively, for the sake of diversification I might never want to trade it in large size. Since I could not count on making too much money from this one system, I might as well sell it to others to get some guaranteed money while I continue work on developing others.

     > “There are a lot of people arguing they can make profit just
     > using TA, following some pattern or indicator rule.” A lot of
     > idiots, zeros, make a few dollars a month selling PDFs explaining
     > how to “make millions with simple patterns or indicator rules!!”
     > In contrast… few quants have indeed made a lot of money
     > using incredibly difficult and sophisticated software systems,
     > which have utterly no connection whatsoever to some trivial
     > pattern rules. The bots you have made and you refer to are a
     > joke—the equivalent of looking up spelling in a dictionary
     > file—whereas quants are like Siri. [10]

I’ve written much in this blog about optionScam.com. Like me, this person has also observed the financial industry to have some nefarious (see third paragraph here) contributors.

     > “I just wanted to know if those public and typical strategies
     > work or if they are just scam to fool beginners like me.” I think
     > they are a scam to fool everyone! You know how large brokerage
     > companies will have a large “TA” department, headed by major
     > experts making millions per year and dozens of PhD staff. Many
     > traders would tell you it’s an utter scam—they just add
     > those departments purely so they can say to clients “well,
     > we have this impressive TA department…” you know? [11]

I’m not a conspiracy theorist (see third paragraph here), but given that a decent amount of misconduct exists in certain segments of the financial industry, I think we need to acknowledge the possibility described. I think the only way to know with more certainty would be to work in the industry proper—maybe for multiple firms to gain perspective on tactics used by a large sample size of institutions.

I will wrap up these comments next time.