Has there been any thought to building the optimization in a manner similar to scikit-learn, where you can designate a train and validation set? I think a lot of the issues with training indicator strategies come from the overfitting that occurs during optimization. If you are interested in my approach, I am trying to work on one, and I am going to see if I can incorporate it into your existing coding framework.
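To make the idea concrete, here is a minimal sketch of such a split, assuming a pandas OHLC DataFrame; the function name and the 60/20/20 proportions are just illustrative, not part of any existing API:

```python
import pandas as pd

def train_test_validation_split(data: pd.DataFrame, train=0.6, test=0.2):
    """Chronologically split OHLC data; the remainder is validation.

    Time series must be split in order (no shuffling), otherwise
    future bars leak into the training set.
    """
    n = len(data)
    i, j = int(n * train), int(n * (train + test))
    return data.iloc[:i], data.iloc[i:j], data.iloc[j:]

# Hypothetical usage with any OHLC DataFrame named `data`:
# train_df, test_df, val_df = train_test_validation_split(data)
```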
EDIT: The more I think about this idea, the less feasible it seems in the usual indicator-testing sense. Optimization as it stands is basically already doing this, because there are no weights to minimize or parameters to tune beyond the indicator parameters themselves.
My thought on this, and I would be curious what other people think, as it pertains to defeating overfitted strategies: compare, say, the training and test set results, and look for the parameters that make the ratio of the train and test returns as close to 1 as possible.
Optimum Strategy = (Return [%] from Training) / (Return [%] from Test); choose the parameters that make this value closest to 1.
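As a rough sketch of that selection, assuming backtesting.py's `Backtest` API (keyword arguments to `run()` are interpreted as strategy parameters) and the `train_df`/`test_df` slices from the split above; the strategy class `MyStrategy` and the parameter grid are placeholders:

```python
from itertools import product
from backtesting import Backtest

# `MyStrategy`, `train_df`, and `test_df` are hypothetical names carried
# over from the split sketch above; the grid is likewise illustrative.
param_grid = {'n1': range(5, 30, 5), 'n2': range(10, 70, 10)}

scores = {}
for values in product(*param_grid.values()):
    params = dict(zip(param_grid, values))
    train_ret = Backtest(train_df, MyStrategy).run(**params)['Return [%]']
    test_ret = Backtest(test_df, MyStrategy).run(**params)['Return [%]']
    # Only consider parameter sets profitable on both splits, so a ratio
    # near 1 means "consistently good" rather than "consistently bad".
    if train_ret > 0 and test_ret > 0:
        scores[values] = abs(train_ret / test_ret - 1)

best_params = dict(zip(param_grid, min(scores, key=scores.get)))
```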
Then you can test the new strategy parameters against a validation set. If you are still getting favorable returns, you can be reasonably confident your strategy is not overfitted.
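The final validation check would then just re-run the selected parameters on the held-out slice (again using the hypothetical `val_df`, `MyStrategy`, and `best_params` from the sketches above):

```python
val_stats = Backtest(val_df, MyStrategy).run(**best_params)
print(val_stats['Return [%]'])  # still favorable -> less likely overfitted
```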