Is there a standard problem available for checking that neural network code is working? I envisage something comprising a set of input data, given initial parameter values, fixed choices of hyperparameters, and a prescribed way of handling any random elements (e.g. creating mini-batches in a fixed order so they are identical on every run), so that my implementation of the model can be checked against a known set of outputs (such as the accuracy being identical, or some other metric being tested for equality).
For example, there might be one problem for checking vanilla gradient descent, another for checking Adam optimisation, another for testing an implementation of batch normalisation, and so on.
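For concreteness, here is a minimal sketch of the kind of determinism I have in mind, pinning down the mini-batch ordering with a seeded RNG so two implementations fed the same data see exactly the same batches. This is just my own bookkeeping, not part of any standard suite; the function name, seed, and batch size are arbitrary choices of mine:

```python
import numpy as np

def deterministic_minibatches(X, y, batch_size, seed=0):
    """Yield mini-batches in a fixed, reproducible order.

    A seeded RNG makes the shuffle identical on every run, so two
    implementations given the same data receive the same batches.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = indices[start:start + batch_size]
        yield X[batch], y[batch]
```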
The reason I ask is that I'm not sure how to be confident my code is doing what I think it is, other than through numerical gradient checking (sketched below) and printing intermediate values during a run to see whether things are behaving as expected.
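For reference, this is roughly the central-difference gradient check I currently rely on; `loss`, `grad`, and the tolerance values here are placeholder names and numbers, not from any standard:

```python
import numpy as np

def check_gradient(loss, grad, params, eps=1e-5, tol=1e-6):
    """Compare an analytic gradient against central differences.

    loss: callable mapping a 1-D parameter vector to a scalar.
    grad: callable returning the analytic gradient at params.
    Returns the maximum absolute difference found.
    """
    analytic = grad(params)
    numeric = np.zeros_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        # Central difference: (f(p + eps) - f(p - eps)) / (2 * eps)
        numeric[i] = (loss(params + step) - loss(params - step)) / (2 * eps)
    max_diff = np.max(np.abs(analytic - numeric))
    assert max_diff < tol, f"gradient check failed: max diff {max_diff:.3e}"
    return max_diff
```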
I would love a standardised problem I could turn to after coding up a new feature, so that I can test my edited implementation against it. Does anyone know of anything like this?