

BlueHorseshoe

Market Wizard
  • Content Count: 1399
  • Joined
  • Last visited

Everything posted by BlueHorseshoe

  1. Thanks - this type of post provides a perspective I don't have. Just to be clear, are you saying that short term participants will "squeeze" longer term participants, or vice versa? Is there any kind of integrity/consistency to the motives of long term participants upon which such strategies could be realised? BlueHorseshoe
  2. Hi Steve, The first paragraph above seems useful info - please could you clarify and expand upon it a little? Thanks, BlueHorseshoe
  3. Hi MightyMouse, Some orders get placed miles away from the last traded price and aren't expected to trade, just intended to spoof the order book, I guess. And some firms probably do control most of the order flow in some thinly traded equities. And some institutions seemingly use iceberg type algos for execution that have randomising elements exactly as TheDude describes. All of this is, of course, completely anecdotal - I have no first hand experience of how a market-maker or large institution operates. Just to progress the discussion, I think it is perfectly plausible to have a profitable strategy that has randomly generated entry orders . . . And I think that people are continuing to confuse the question of whether price is random with the question of whether price is predictable (by which I mean conforms to probabilistic models). The two need not be mutually exclusive. BlueHorseshoe
  4. Developers have acknowledged the use of random cycle generators, it seems, but I think this particular application is just informed speculation (it came from an Eric Hunsader article). The idea seems to be to try and screw other participants' read of market depth. Any suggestions as to why else certain firms would spit random orders (and also orders arranged in pretty but pointless geometric patterns) into the market? BlueHorseshoe
  5. Hello, That's not true. Consider the following time series: 2,792,34,4,7,2,567,114,9,1,8,8,441,16,93,4 What's the pattern? The pattern is that the number doubles - 2,4,8,16 . . . But you can't see the pattern because I have disrupted it with randomly generated numbers. I know what the numbers are, so I can simply discount them to see the underlying sequence, but you don't know which numbers are random and which are pattern generated. I have added noise to the time series. It has been widely reported that certain market making firms insert orders of random size, at random price, and with random cycle frequency, into the markets to add noise that they can decode but that will disrupt the way that price and volume data is perceived by other participants. Furthermore, consider that random order placement and random markets are not synonymous. BlueHorseshoe
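The noise idea in post 5 can be sketched in Python. This is a toy illustration only (the helper name and number ranges are mine, and it says nothing about how any real firm operates): the generator records where the real values sit, so it can decode its own noise while an outside observer cannot.

```python
import random

def add_noise(pattern, noise_count, seed=7):
    """Interleave random 'noise' values with a deterministic pattern.
    Returns the noisy series plus the indices of the real values,
    which only the generator knows."""
    rng = random.Random(seed)
    series = list(pattern)
    real = list(range(len(series)))          # positions of the true pattern
    for _ in range(noise_count):
        pos = rng.randrange(len(series) + 1)  # random insertion point
        series.insert(pos, rng.randint(1, 999))
        real = [i + 1 if i >= pos else i for i in real]  # shift tracked indices
    return series, real

pattern = [2, 4, 8, 16]                 # the underlying doubling sequence
series, real_idx = add_noise(pattern, 12)
recovered = [series[i] for i in real_idx]
# recovered == [2, 4, 8, 16]: the generator decodes its own noise
```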
  6. Jim Simons. Hidden Markov Models, allegedly. But that's not a great deal of use to you or me. Nor is knowing whether anyone posting here uses TA successfully (the answer is "yes" - law of large numbers!). Here is a possible alternative question - 'Day Trading vs Swing Trading - Who Expends Least Effort for Proportionally Maximal Gain?' Just a thought . . . BlueHorseshoe
  7. It's worth noting that the two components, win probability and win:loss ratio, must each be calculated over a large enough sample size to be statistically significant - this means lots of trade data, and not simply minimal trade data derived from lots of price data. Kelly is supposedly what Larry Williams used. BlueHorseshoe
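The Kelly formula mentioned in post 7 combines exactly those two components. A minimal sketch (function name is mine; the inputs only mean anything if, as the post says, they come from a statistically significant sample of trades):

```python
def kelly_fraction(win_prob, win_loss_ratio):
    """Kelly criterion: f* = p - (1 - p) / b,
    where p is win probability and b the average win:loss ratio."""
    return win_prob - (1.0 - win_prob) / win_loss_ratio

# e.g. 40% winners that are on average twice the size of losers:
f = kelly_fraction(0.40, 2.0)   # 0.40 - 0.60/2.0 = 0.10 of equity
```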
  8. Certainly, less than the entire account size can be bought. The idea is to be fully invested for the typical trade. If a trade is deemed likely to be atypical - a likely outlier in terms of dollar excursion from the entry price (due to what I am calling "volatility"), then the idea is to decrease the position size. "Fully invested" means only for the portion of total equity that is allocated to that class of instruments. So the "equity" in my pseudocode refers only to a portion of total equity (in the EL code I have sent, I have plugged in the figure of 5k to avoid the confusion of further variables). Priority of allocation within a class of instruments (such as equity indices - the ES and the NQ in your example) is based on a different selection metric altogether, so their relative volatilities would not be considered by the strategy. The purpose of the code, taking (equity/close) as a base scenario, is to try and make all trade outcomes as much like the average trade outcome as possible, by accounting for the predicted price behaviour (MAE/MFE) relative to average price behaviour. The typical trade provides the average outcome, and the average outcome is aligned with (equity/close), with everything else scaled around this. BlueHorseshoe
  9. Hi Onesmith, Good to see you around the forum again and thanks for your reply. I can't be certain whether you've understood what I'm trying to do or not - this is not Ralph Vince's Optimal F, and I'm not trying to optimise anything. Here's an explanation of my thought process and goal: The base strategy uses zero leverage and whenever it is not flat it is fully invested. So the position size for that is simply Equity/Close. The problem is that two positions with identical equity available and the same entry price can have markedly different outcomes depending on "volatility" after entry. In my formula, f is just a simple multiplicative function derived from past trade data (average-trade-length) and past price data (highest(h,average-trade-length) - lowest(l,average-trade-length)) that has on average been the best predictor of "volatility" average-trade-length periods into the future. In every case I have examined, f has been close to 1 (i.e. the best predictor of future "volatility" is current "volatility"), so b*f is practically identical to b (my intention is that f could theoretically be replaced with some kind of higher order polynomial extrapolation). c is the average "volatility" for the entire data sample (because this takes a while to compute and the sample size quickly becomes too large for the max bars to reference in TS, I have replaced it with a totally recursive LQE). Average "volatility", c, is then aligned with the base scenario (equity/close). Position sizing when b*f is higher than average should therefore be proportionally smaller, and when b*f is lower, proportionally larger, according to the following: positionsize = (equity/close) * (c/[ b*f ]); When "volatility" is lower than average, the formula calls for more units to be purchased than the equity allows without leverage; lower than average volatility is therefore simply ignored by capping position size at a maximum of equity/close.
I would expect all this to result in reduced net gains but also reduced deviation of returns and a smoother equity curve. Somehow, it isn't doing that . . . Any suggestions as to where the issue may be? Thanks, BlueHorseshoe
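For anyone following the thread, here is a rough Python rendering of the sizing rule described in post 9, with illustrative numbers (the function name and example values are mine, not from the original EasyLanguage):

```python
def position_size(equity, close, b, f, c):
    """Volatility-scaled sizing:  size = (equity / close) * (c / (b * f))
    b = recent range (highest high - lowest low over average trade length),
    f = multiplier forecasting future 'volatility' (approx. 1 in practice),
    c = long-run average of b.
    Capped at the unleveraged base case equity / close."""
    base = equity / close
    size = base * (c / (b * f))
    return min(size, base)   # lower-than-average volatility is ignored

# Above-average volatility shrinks the position below the base of 50 units...
s_high = position_size(5000, 100.0, b=12.0, f=1.0, c=8.0)  # 50 * 8/12
# ...below-average volatility would call for leverage, so it is capped.
s_low = position_size(5000, 100.0, b=4.0, f=1.0, c=8.0)    # capped at 50
```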
  10. Hi MMS, EasyLanguage is adapted from Pascal (thanks to ZDO for telling me that), and I'm sure that this is/has been used by professionals in the industry at some time . . . C++ is certainly not ubiquitous - take a look at a few profiles of regulars over at quant.stackexchange.com. Here's a list of recognized languages from an application form for a quantitative fund: Java, C, C++, PHP, Visual Basic, Perl, Python, C#, JavaScript, Ruby, Common Lisp, MATLAB. I think a lot probably depends on the suitability of the language for the particular task, and what everyone else in the environment is using. Ultimately the underlying concept value has got to be more important than the language it's coded in, right? Nobody is ever going to try and run an HFT operation using EasyLanguage (I hope!), but if you're essentially end of day trading then it should be able to do everything you need (in fact, you can probably do everything you need with paper and a calculator). Hope that helps, BlueHorseshoe
  11. Hello, Assuming that the best predictor of b at time t=-a is b * f, why isn't the following position sizing formula helping me to reduce the deviation of returns? a = average trade length; b = highest high ( a ) - lowest low ( a ); c = average b ( population size ); position size = ( equity / price ) * ( c / [ b * f ] ); Any help very much appreciated! BlueHorseshoe
  12. There's a lot of information about everything everywhere. The volume of information doesn't seem important to me - the information's utility is all that matters. The information contained in any timeframe is only as valuable as its utility permits, and that depends on the strategy being deployed. Information from one timeframe can, of course, have utility in other timeframes. It is sensible to focus on the information that provides the greatest utility for the strategy in question, wherever that information comes from. BlueHorseshoe
  13. Successfully exploiting what is visually/graphically an identical price movement on a daily chart will result in a greater net gain than exploiting the same price movement on an intraday chart, but the fixed costs will remain the same for both. Take the left hand chart you have posted above. Select an entry and exit point that would result in a profitable trade. If that chart were daily, you might have made 100 ticks per contract. If the chart were one minute, you might have made 2 ticks per contract. Both from what, graphically, is an identical price movement. If your fixed costs are 1 tick then you make 99 ticks in the first instance, and 1 in the second. The question is, can you perform the second procedure 99 times for every one time you perform the first? Or, is your risk sufficiently limited in the second for you to perform it once with 99x the position size of the first? Or 10 times with 9.9x the position size of the first etc? If you can do this then your returns will probably be less volatile (great for money management, and perhaps the main benefit of successful daytrading), and your "edge" will have more opportunity to play itself out. Expectancy x Opportunity is the key concept in understanding this. I agree though - visually, I would doubt that anyone can consistently distinguish between a daily and an intraday chart and, unless we're talking about execution-specific strategies, that there would be any merit to trading the one differently from the other. And of course, the more competent trader makes the most, as you say. BlueHorseshoe
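The fixed-cost arithmetic in post 13 is easy to make concrete. A small sketch using the post's hypothetical numbers (the function name is mine):

```python
def net_ticks(gross_ticks, cost_ticks, trades, contracts=1):
    """Net result in ticks after a fixed per-trade cost per contract."""
    return (gross_ticks - cost_ticks) * trades * contracts

# The same chart shape yields 100 ticks on a daily chart but only
# 2 ticks on a one-minute chart, with a fixed 1-tick cost either way.
daily = net_ticks(100, 1, trades=1)               # 99 ticks net
intraday = net_ticks(2, 1, trades=99)             # needs 99 trades to match
scaled = net_ticks(2, 1, trades=1, contracts=99)  # or one trade at 99x size
```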
  14. I wasn't claiming that prices are random (or that they are non-random); I was saying that it may make sense to trade as though they are always one or the other. If your statements above are true, then you could trade as though markets are always non-random, including through those brief periods when they are random. In short, I was arguing for simplicity in a strategy, rather than one which tries to distinguish between two different types of price behaviour (and all the complications that involves). BlueHorseshoe
  15. Hi ZDO, Please see my rambling reply above . . . BlueHorseshoe
  16. Hi Mitsubishi, I think of it in the following way . . . Assume that you have a "perfect" methodology for exploiting both random and non-random behaviour. Assume that I have a perfect method for exploiting only random behaviour. Price is behaving randomly, so we're both making money. As price behaviour begins to change to non-random, we both start to lose money. Eventually (unless you can also be assumed to have some sort of perfect zero-lag regime switching model - one assumption too many for me!), you will start to make money once more as you shift to your non-random methodology. You seem to have stolen a gain on me, until . . . The market begins to behave randomly again. From the very instant this happens I am perfectly placed to take advantage of it. You, meanwhile, are waiting for your model to recognise the shift. If the shift is not clean then your model gets whipsawed. My model would never get whipsawed - it's just 'right' or 'wrong'. Also, consider that even if we both make exactly the same, your switching model has expended more 'effort' than mine. So it's less efficient. Even if you have a sound method for identifying such shifts in something like real time, then other problems may arise: 1) Consistency of returns is key to successful money management/position-sizing. In an ideal world, you want all trades to be as similar as possible - the closer every trade is to your average trade, the better (this is why an outlier strategy that has higher single contract average returns than a second strategy may not perform as well as the second when both are optimally position-sized). How likely is it that your returns from each of your methodologies (random and non-random) are truly alike? 2) As per my previous post, the rules governing shifts from random to non-random may change, should they even exist in the first place. Most likely, such cycles are also random, at least some of the time . . .
Whatever the case, introducing a reliance upon regime recognition into your approach is introducing another potential failure point (I'm sure engineers must have a term for such things). Put simply, the more things you try and respond to, the more chance you have of getting something wrong. If you just position yourself to exploit one type of behaviour, then you have no chance of getting it wrong when that type of behaviour prevails. I hope that wasn't too grumpy, and hope it makes some kind of sense, even to those who disagree! BlueHorseshoe
  17. Have a look at your commissions and spread (fixed, whether you're holding for a thousandth of a second or 3 years), and you'll be hard pressed not to BlueHorseshoe
  18. I agree, they do seem to. But are these cycles you describe random, non-random, or does that depend on some further 'meta-cycle' of randomness/non-randomness? I find the idea of a strategy that discriminates between random and non-random market behaviour uncomfortable, but that's just me. I think it makes better sense to assume that the markets are always either random or non-random (a clearly false assumption), and then to find approaches that balance losses and gains favourably during those periods when this assumption is false. I'll happily explain why if anyone is interested. BlueHorseshoe
  19. Hello, It's fairly simple really: perfectly random behaviour is highly predictable. The probability of a long sequence of similar events is very small. You can bet against the continuation of such sequences as they unfold. Please do note that I am saying the explanation to your question about randomness is simple, and NOT that trading is simple! Regards, BlueHorseshoe
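The arithmetic behind "a long sequence of similar events is very small" in post 19 is just p raised to the run length. A minimal sketch (function name is mine; note that for genuinely independent events the probability of the *next* outcome is unchanged by the run, so any tradeable edge has to come from dependence in the data, not the rarity of the run itself):

```python
def run_probability(length, p=0.5):
    """Probability that `length` consecutive independent events all land
    the same specified way, each with probability p: p ** length."""
    return p ** length

p5 = run_probability(5)    # 0.5**5  = 0.03125
p10 = run_probability(10)  # 0.5**10 = 0.0009765625
```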
  20. Hi Deej, In very simple form your code needs to include the Darvas Box code, and would then look something very roughly like this: If condition1 then Buy next bar at btt {btb} stop; And in the case of a shooting star, defined as condition2, then like this: If condition2 and H[1]>=btt[1] and btt=btt[1] then Buy next bar at btt stop; Hope that helps, and if you have any questions I'll check back tomorrow. Kind regards, BlueHorseshoe
  21. Hi Shooly, You've probably answered your own question above . . . I have no experience with CL, but as far as currencies versus an index e-mini goes, then it's probably a reasonable assumption to suggest that currencies trend (or 'breakout') more than indices do. I assume this is because a currency (or frozen orange juice, or lean hogs, or crude oil etc) represents the actual price of something, whereas an index doesn't - it represents the aggregate value of a whole bunch of things which might have something in common (they might all be US stocks, say), but also have separate and unique influences upon their price. Take a look at the behaviour of a sector index such as XLV, for example, which represents stocks that have a lot in common, versus something like VTI, which is based on the value of a whole bunch of arguably dissimilar stocks. Which trends more? Try combining a whole bunch of completely unrelated instruments into one big, hypothetical index - what do you see? Lots of noise, most likely, as the index is not representative of any real, concrete thing. If you wanted to trade mean-reversion then a basket tracking such an index might be an idea, but for trend trading it would probably be poison. You might find it useful to define some measure for how breakout-prone an instrument is in your chosen timeframe (the percentage of the range of each bar that occurs outside the range of the prior bar is one simple measure that I like), and then compare each market using this metric. Finally, remember that the FX vs Futures dichotomy is misleading: a currency future will trend in a very similar manner to the relevant cash pair. Hope that helps, BlueHorseshoe
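The breakout measure suggested in post 21 can be sketched per bar like this (function name and example prices are mine; one plausible reading of "percentage of the range of each bar that occurs outside the range of the prior bar"):

```python
def breakout_fraction(high, low, prev_high, prev_low):
    """Fraction of a bar's range lying outside the prior bar's range --
    one simple measure of how breakout-prone an instrument is."""
    bar_range = high - low
    if bar_range == 0:
        return 0.0
    above = max(0.0, high - prev_high)   # range poking above the prior high
    below = max(0.0, prev_low - low)     # range poking below the prior low
    return (above + below) / bar_range

# A bar entirely inside the prior bar's range scores zero...
inside = breakout_fraction(102, 100, prev_high=103, prev_low=99)  # 0.0
# ...a bar poking 2 points above the prior high, with a 4-point range:
poke = breakout_fraction(104, 100, prev_high=102, prev_low=99)    # 2/4 = 0.5
```

Averaging this across many bars would give the per-market comparison metric the post describes.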
  22. Hi Suby, It might be worth searching through some of Jeff Swanson's articles on this site - he has posted useful and well tested ideas for regime switching models incorporating concepts like hysteresis. If you just want to know whether to be long or short only, then you can do a hell of a lot worse than a simple moving average - the problem is always arriving at a solution that works and is not curve-fitted. Regards, BlueHorseshoe
  23. Welcome to TL, Investor! I have come across more efficient methods in older books (e.g. Thomas Stridsman's 'Trading Systems that Work'), in which the data is exported wholesale from TradeStation, but have never been able to get them to work. Currently, I do almost exactly what you're doing, as I too have only ever needed end of day equity data. Still, it's clunky, and wide open to human error . . . Hopefully Tams or someone else might have some suggestions? BlueHorseshoe
  24. I got some useful ideas from that - thanks! BlueHorseshoe
