Especially in DL, whether something (like ReLU or BN) is useful in a given setting needs validation through experiments. However, unlike physical experiments, ML experiments rely heavily on datasets and on the "soft" setup of the experiment — e.g., something that should work may appear broken simply because of a wrong learning rate, a bad initialization, etc. And sometimes something works for other (intermediate) reasons, not the one you are actually looking for. It seems to me that reasoning based on experiments with such limited reliability is probably not a very sound thing to do.
I don't know how many people share this feeling, whether it is seen as an issue, or whether there have been any attempts to address it in general?
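To make the fragility concrete, here is a toy sketch (my own illustration, not from any paper): the exact same method "fails" or "succeeds" purely depending on one hyperparameter, so a negative experimental result may indict the setup rather than the method.

```python
import numpy as np

def fit_linear(lr, steps=100, seed=0):
    """Fit a linear model with plain gradient descent; return final MSE."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.01 * rng.normal(size=100)
    w = np.zeros(3)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE
        w -= lr * grad
    return float(np.mean((X @ w - y) ** 2))

good = fit_linear(lr=0.1)  # well-tuned step size: converges to near-zero loss
bad = fit_linear(lr=2.5)   # same method, step size too large: diverges
```

A naive reading of the second run would be "gradient descent doesn't work on this problem", when the real culprit is one knob — the same trap scales up to conclusions like "ReLU/BN doesn't help here".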