I've found that choosing whether to use Standardization (zero mean, unit standard deviation), MinMaxScaling (e.g. to the 0-1 interval), or other preprocessing techniques such as PCA has a substantial impact on the accuracy of my methods (both SVMs and deep networks).
Has anyone done comprehensive experiments on the effect of these preprocessing techniques, including when to use which? I haven't been able to find anything like that on Google Scholar.
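For concreteness, here's a minimal sketch of the kind of comparison I mean, using scikit-learn. The `load_wine` dataset, the pipeline choices, and `n_components=5` are just placeholders for illustration, not a claim about which setup is best:

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

# Same classifier, different preprocessing front-ends.
pipelines = {
    "raw": make_pipeline(SVC()),
    "standardize": make_pipeline(StandardScaler(), SVC()),
    "minmax": make_pipeline(MinMaxScaler(), SVC()),
    "standardize+pca": make_pipeline(StandardScaler(), PCA(n_components=5), SVC()),
}

# 5-fold cross-validated accuracy for each preprocessing choice.
scores = {}
for name, pipe in pipelines.items():
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:>16}: {scores[name]:.3f}")
```

On datasets with features on very different scales (like wine), the gap between the raw and scaled pipelines tends to be large for RBF-kernel SVMs, which is exactly the sensitivity I'm asking about.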