
BlueHorseshoe

Learning Vs Curve-fitting Vs Lag


Having systems that 'learn' from and adapt to changes in market behaviour seems like a great idea, but . . .

 

  • If a system is too receptive it learns too readily and curve-fits to inconsequential noise in the price data.
     
  • If a system seeks to avoid this by using large data samples to make robust, generalised inferences about market behaviour, there is a risk that its resistance to change will cause it to seriously lag any significant and vital shift in behaviour.

There is necessarily a "sweet spot" between the two that indicates the optimal learning rate for the system.
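To make the trade-off concrete, here is a minimal sketch (illustrative only) that uses an exponentially weighted moving average as a stand-in for a "learning" system; the synthetic data, the two learning rates, and the error windows are all assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-bar "return" stream: zero mean drift for 500 bars,
# then a regime shift to a mean drift of 1.0 (values made up).
returns = np.concatenate([
    rng.normal(0.0, 1.0, 500),
    rng.normal(1.0, 1.0, 500),
])

def ewma(x, alpha):
    """Online estimate of the mean drift; alpha is the learning rate."""
    est = np.empty_like(x)
    m = 0.0
    for i, xi in enumerate(x):
        m = (1 - alpha) * m + alpha * xi  # higher alpha = faster learning
        est[i] = m
    return est

for alpha in (0.01, 0.5):
    est = ewma(returns, alpha)
    # Error before the shift measures curve-fitting to noise; error in
    # the 50 bars just after the shift measures lag.
    noise_err = np.mean((est[100:500] - 0.0) ** 2)
    lag_err = np.mean((est[500:550] - 1.0) ** 2)
    print(f"alpha={alpha}: noise error {noise_err:.3f}, "
          f"post-shift lag error {lag_err:.3f}")
```

The receptive setting (alpha = 0.5) tracks the shift quickly but chases noise beforehand; the conservative setting (alpha = 0.01) is quiet beforehand but badly lags the shift.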

 

However, to find this "sweet spot" one is faced with precisely the same problem identified above: to mediate between the curve-fitted solution and the lagging solution, a new variable must be introduced, and it too must be tuned against some criterion for optimisation . . .
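And a sketch of the regress itself: if one picks alpha by trailing validation error, the validation window W becomes the new unmediated variable. Everything below (the alpha grid, the window lengths, the placement of the regime shift) is again a made-up illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = np.concatenate([rng.normal(0.0, 1.0, 900),
                          rng.normal(1.0, 1.0, 100)])  # late regime shift

def one_step_sq_errors(x, alpha):
    """Squared one-step-ahead errors of an EWMA mean estimate."""
    m, errs = 0.0, []
    for xi in x:
        errs.append((xi - m) ** 2)   # predict the next bar with the current mean
        m = (1 - alpha) * m + alpha * xi
    return np.array(errs)

alphas = (0.005, 0.02, 0.1, 0.3)
for W in (50, 200, 800):             # W is the *new* variable to be mediated
    # Score each alpha by its prediction error over the last W bars only.
    scores = {a: one_step_sq_errors(returns, a)[-W:].mean() for a in alphas}
    best = min(scores, key=scores.get)
    print(f"validation window W={W}: selected alpha = {best}")
```

Which alpha gets selected depends on the arbitrary choice of W (and on the particular sample), so W in turn needs its own optimisation criterion, and so on.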

 

I can't find anything in the machine learning literature that I've read that suggests a viable way out of this catch-22. Does anybody have any suggestions?

 

BlueHorseshoe


 

In 'plain' English:

That "sweet spot" is really not very sweet.

It is hard (that's an understatement) to find a "variable" (actually a set of variables) that will consistently "mediate", i.e. it is hard to stay close to that "sweet spot"… and, worst of all, even staying close to it produces below-average results.

 

I chose to forgo the large-sample side (your second dot) and commit to the very granular side (your first dot).

Some constructs and ‘beliefs’ underlying my gestalt:

I had to resist the concept that there is "inconsequential noise". I 'know'/'believe' there is "inconsequential noise", but in my R&D I had to act/proceed as if it didn't exist. In the end, changes in the noise turned out to be pivotal information.

 

I personally left 'signal generating' machine learning to others and specialized in 'categorization' algorithms… which had to be further specialized to weighting simultaneous categories instead of narrowing to one category from a set of discrete categories.
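A minimal sketch of that distinction, with hypothetical category names and scores (a softmax here stands in for whatever the real weighting scheme was):

```python
import numpy as np

def softmax(scores):
    z = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return z / z.sum()

# Hypothetical regime categories and raw scores from some upstream model.
categories = ["trend_up", "trend_down", "congestion", "breakout"]
scores = np.array([1.2, -0.4, 0.9, 0.1])

hard_label = categories[int(np.argmax(scores))]  # discrete: one category wins
weights = softmax(scores)                        # simultaneous: every category weighted

print("hard label:", hard_label)
for c, w in zip(categories, weights):
    print(f"  {c}: weight {w:.2f}")
```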

 

A lot of the info to be gathered from price streams, for me, turned out to be measuring micro swing scaling… what could be seen (in very loose terms) as fractional dimensions. I say very loose because the term 'fractional dimensions' captures the concept, but it is not about using the 'real' fractional dimensions that Sevcik et al. calculate.
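For illustration only, here is one crude way to put a number on path roughness over a window: the length of the price curve after mapping the window onto the unit square. This is in the spirit of, but deliberately not, Sevcik's actual procedure, and the window size and parameters are made up:

```python
import numpy as np

def path_roughness(prices):
    """Length of the price path after mapping the window onto the unit square."""
    p = np.asarray(prices, dtype=float)
    x = np.linspace(0.0, 1.0, len(p))
    span = p.max() - p.min()
    y = (p - p.min()) / span if span > 0 else np.zeros(len(p))
    return np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2).sum()

rng = np.random.default_rng(2)
smooth = np.cumsum(rng.normal(0.05, 0.01, 256))  # steadily trending path
choppy = np.cumsum(rng.normal(0.00, 1.00, 256))  # directionless, noisy path

print(f"smooth path roughness: {path_roughness(smooth):.2f}")
print(f"choppy path roughness: {path_roughness(choppy):.2f}")
```

A smooth trend hugs the diagonal of the unit square (length near sqrt(2)); a choppy path folds back on itself and comes out much longer.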

 

A lot of my progress came from just lucking into code for several excellent ‘music typing’ machine learning programs that helped me conceptualize the combinations of variations of cadence, ‘timbre’, tone, etc. for transfer over to granular price and volume data.

 

In the intraday time frames I work with, the half-life of a 'regime' of these simultaneous categories is very short. A lot of plain old testing went into projecting the probabilities of which array would appear next. Then, detecting and loosely categorizing the noise, in effect, gives me a ballpark weighting for sliding around the sizing of a portfolio of (some pretty dumb, simple) systems. Sliding the weighting around more accurately makes me money by saving me money… especially in early detection of the beginnings and ends of congestions.
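A toy sketch of that sizing idea, with entirely hypothetical systems, signals, and regime weights (the weights would come from a categorizer like the one sketched above):

```python
# Hypothetical signals from a portfolio of simple systems, one per regime.
system_signals = {
    "trend_follower": +1.0,   # suited to trending regimes
    "mean_reverter": -0.5,    # suited to congestion
    "breakout_trader": +1.0,  # suited to breakouts
}

# Simultaneous regime weights from the categorizer (cf. the earlier sketch).
regime_weights = {
    "trend_follower": 0.55,
    "mean_reverter": 0.30,
    "breakout_trader": 0.15,
}

BASE_SIZE = 100  # units at full weight, purely illustrative

for name, signal in system_signals.items():
    size = BASE_SIZE * regime_weights[name] * signal
    print(f"{name}: scaled position {size:+.1f}")
```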

 

…there is some obscure work out there about formalizing the "sweet spot". If I get some time I will see if I have anything in the archives… but I can't even think of what terms to start searching on at this point…

 

Suggestion: find your own way. It may be in focusing on your first 'dot' above. It may be in the second dot. Or it may be in finding that "sweet spot" between the dots. In my experience, the one that inspires you most will at least engender the most perseverance and creativity. Hopefully, that one also fits with your aptitudes and talents...

