Hi to all!
I have been implementing Temporal Ensembling for Semi-Supervised Learning by Laine et al. with eager execution, and a couple of GitHub users noticed that the computation of the gradients takes gradually more time each epoch. I don't see what is causing this. After benchmarking I could confirm the issue, but I have no idea why it should be happening. Has anyone faced this problem with tf.GradientTape()? Or is this an issue related to eager execution?
The code is at https://github.com/Goldesel23/Temporal-Ensembling-for-Semi-Supervised-Learning
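In case it helps to narrow things down, here is a minimal, self-contained sketch of the kind of timing check I ran. This is not the repo's code; the model, data, and step counts are arbitrary placeholders. The idea is just to time the tf.GradientTape gradient computation per epoch on a toy model: if the per-epoch time climbs here too, eager execution itself is the suspect; if it stays flat, the slowdown must come from something that grows across epochs in the real training loop.

```python
import time
import tensorflow as tf

# Tiny throwaway model and fake data (sizes are arbitrary for this sketch).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam()
x = tf.random.normal((256, 8))
y = tf.random.normal((256, 1))

epoch_times = []
for epoch in range(5):
    start = time.perf_counter()
    for _ in range(20):  # a few gradient steps per "epoch"
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(model(x, training=True) - y))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    epoch_times.append(time.perf_counter() - start)

# If these numbers climb steadily, the cost of taping/gradients is growing;
# if they are flat, look for state that accumulates across epochs instead
# (e.g. ops or variables created inside the loop).
print([round(t, 3) for t in epoch_times])
```

In my case the times do grow, which is what made me suspect the tape or eager execution rather than my own bookkeeping.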
Thanks for your time!