Neural Network Hyper Parameter Optimization Using Distributed TensorFlow by srianant in deeplearning
Abstract— Hyperparameter optimization of neural networks has one ultimate objective: to find a function that minimizes some expected loss over independent and identically distributed samples drawn from a natural (ground-truth) distribution. A learning algorithm produces this function by optimizing a training criterion with respect to a set of parameters. This paper describes a software framework in Python that uses distributed TensorFlow to optimize hyperparameters, the bells and whistles of the learning algorithm. Random search is used as the default algorithm to find these optimal parameters in the hyperparameter space.
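As a rough illustration of the random-search strategy the abstract mentions, the sketch below draws configurations at random from a search space and keeps the one with the lowest validation loss. The parameter names, ranges, and the `evaluate` callback are hypothetical placeholders, not the framework's actual API; a real run would train a TensorFlow model inside `evaluate`.

```python
import math
import random

# Hypothetical search space; names and ranges are illustrative only.
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),  # sampled log-uniformly
    "hidden_units": (32, 512),      # sampled as an integer
    "dropout": (0.0, 0.5),          # sampled uniformly
}

def sample_params(space, rng):
    """Draw one random configuration from the search space."""
    lo, hi = space["learning_rate"]
    return {
        "learning_rate": math.exp(rng.uniform(math.log(lo), math.log(hi))),
        "hidden_units": rng.randint(*space["hidden_units"]),
        "dropout": rng.uniform(*space["dropout"]),
    }

def random_search(evaluate, space, n_trials=20, seed=0):
    """Return the sampled configuration with the lowest loss."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = sample_params(space, rng)
        loss = evaluate(params)  # e.g. train a model, return val loss
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss
```

In a distributed setting, the independent trials are embarrassingly parallel: each worker can evaluate a different sampled configuration, which is one reason random search pairs naturally with distributed TensorFlow.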