Answer by Mensur Dlakic, 5 hours ago (USA):

There is nothing here that sounds like overfitting, as long as you use the same folds for each classifier. But it does seem unnecessary to do it this way: this is essentially what boosting already does, by up-weighting in the next iteration the examples that were misclassified in the previous one. I presume the reason they use different features for each classifier is to create non-overlapping expertise between the individual classifiers, which is a good idea in general. Still, gradient-boosted trees do all of that automatically, including feature selection, so you could save yourself some time, and probably end up with a better classifier, by simply going with one of the (extreme) gradient-boosted tree implementations.

As much as I like SVMs, for historical and practical reasons, in the past 10 years (at least hundreds of projects) I haven't had a single one where SVMs outperformed boosted trees, either in training speed or in classification/regression performance. Unless you have a small dataset where proper classifier calibration is essential, I can't imagine that your case would be any different.
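To make the comparison concrete, here is a minimal sketch (assuming a scikit-learn style workflow; the dataset and model choices are placeholders, not the original poster's setup) that scores an RBF SVM and a gradient-boosted tree model on identical cross-validation folds, which is the setup described above as safe from overfitting:

```python
# Sketch: compare an SVM and gradient-boosted trees on the SAME CV folds.
# X and y are synthetic placeholders; substitute your own data.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)

# Fixing the folds ensures every classifier is scored on identical splits,
# so differences in the scores reflect the models, not the splits.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Gradient-boosted trees": HistGradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc", n_jobs=-1)
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

The same fold object can be reused for an XGBoost or LightGBM model if you prefer an extreme gradient-boosting implementation over the scikit-learn one.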
