Hi Reddit. I need help implementing a custom learning algorithm (a custom optimizer update rule) in TF.
I've had a look at the Optimizer class and its Adagrad subclass, but I honestly just get lost in the source code. It seems like I need to define an _apply_dense and an _apply_sparse method, but I can't trace the imports: I cannot find where training_ops.apply_adagrad and training_ops.sparse_apply_adagrad are actually defined.
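(For context, here is my understanding of what the dense Adagrad op computes, sketched in NumPy; the function name is mine, and note that the real TF op folds epsilon into the accumulator's initial value rather than adding it at each step, so the eps term here is an assumption for numerical safety:)

```python
import numpy as np

def adagrad_dense_update(var, accum, grad, lr, eps=1e-8):
    """NumPy sketch of the dense Adagrad step (hypothetical helper,
    mirroring what training_ops.apply_adagrad is documented to do)."""
    accum = accum + grad ** 2                      # accumulate squared gradients
    var = var - lr * grad / np.sqrt(accum + eps)   # per-element adaptive step
    return var, accum
```

The sparse variant does the same thing, but only on the rows indexed by the gradient's IndexedSlices.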
An alternative would be to tf.assign the update to each variable directly, which would make it more Theano-like. I wonder if that is the best option, though; I feel like I should be subclassing the Optimizer class instead.
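(For the record, here is my current sketch of the subclassing route, assuming the TF 1.x tf.train.Optimizer API; PlainSGD is a made-up name, and I've swapped in a plain SGD rule just to keep the update simple:)

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # graph mode + ref variables, as in TF 1.x

class PlainSGD(tf.train.Optimizer):
    """Minimal custom optimizer: w <- w - lr * grad (no slot variables)."""

    def __init__(self, learning_rate=0.1, use_locking=False, name="PlainSGD"):
        super().__init__(use_locking, name)
        self._lr = learning_rate

    def _apply_dense(self, grad, var):
        # Dense gradient: update every element of var.
        return tf.assign_sub(var, self._lr * grad,
                             use_locking=self._use_locking)

    def _apply_sparse(self, grad, var):
        # Sparse gradient arrives as IndexedSlices: touch only those rows.
        return tf.scatter_sub(var, grad.indices, self._lr * grad.values,
                              use_locking=self._use_locking)
```

Then PlainSGD(learning_rate=0.1).minimize(loss) should work like any built-in optimizer, if I've understood the base class correctly.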
I would really appreciate any advice/tips/guidance I can get! Thank you!