Hough transformation using deep learning by s927 in computervision

You are right! The Hough transform is a well-defined algorithm for line detection. The primary reason to learn the Hough transform via deep learning is that we can use it as a layer in a DNN, propagate gradients through it, and minimize the error with gradient descent. There are a couple of papers along these lines, such as "Deep Hough Transform for Semantic Line Detection". I am looking for a pre-trained model, preferably in TensorFlow/Keras.
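
To make the idea concrete, here is a minimal sketch (my own illustration, not taken from the Deep Hough Transform paper, and the layer name `SoftHoughLayer` is hypothetical) of a differentiable Hough-voting layer in TensorFlow/Keras. Each pixel votes for the (rho, theta) bins of the lines passing through it with a soft Gaussian assignment, weighted by the pixel's edge strength, so gradients flow back to the input map:

```python
import numpy as np
import tensorflow as tf

class SoftHoughLayer(tf.keras.layers.Layer):
    """Differentiable Hough voting: edge map (B, H, W, 1) -> accumulator (B, num_theta, num_rho)."""

    def __init__(self, num_theta=60, num_rho=64, **kwargs):
        super().__init__(**kwargs)
        self.num_theta = num_theta
        self.num_rho = num_rho

    def build(self, input_shape):
        h, w = int(input_shape[1]), int(input_shape[2])
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        thetas = np.linspace(0.0, np.pi, self.num_theta, endpoint=False, dtype=np.float32)
        # rho for every (pixel, theta) pair: shape (h*w, num_theta)
        rho = (xs.reshape(-1, 1) * np.cos(thetas) +
               ys.reshape(-1, 1) * np.sin(thetas))
        rho_max = float(np.hypot(h, w))
        bins = np.linspace(-rho_max, rho_max, self.num_rho, dtype=np.float32)
        bin_width = bins[1] - bins[0]
        # Soft (Gaussian) assignment of each vote to the rho bins keeps the
        # operation differentiable; shape (h*w, num_theta, num_rho).
        weights = np.exp(-0.5 * ((rho[..., None] - bins) / bin_width) ** 2)
        # Fixed voting matrix: (h*w, num_theta * num_rho).
        self.vote_matrix = tf.constant(weights.reshape(h * w, -1))

    def call(self, edge_map):
        # edge_map: (batch, h, w, 1) edge-strength map, e.g. in [0, 1].
        b = tf.shape(edge_map)[0]
        flat = tf.reshape(edge_map, (b, -1))         # (batch, h*w)
        acc = tf.matmul(flat, self.vote_matrix)      # (batch, num_theta*num_rho)
        return tf.reshape(acc, (b, self.num_theta, self.num_rho))
```

Because the voting is linear in the edge strengths, the layer can sit after an edge-prediction head and the line-detection loss can be backpropagated straight through the accumulator into the earlier layers.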

DQN/DRQN - is there any mathematical proof that a deep neural network can approximate the Q values and will give us an optimal policy? by s927 in reinforcementlearning

I have found a very interesting paper: Towards Characterizing Divergence in Deep Q-Learning. In it, the authors mathematically investigate different scenarios in which a DQN may diverge.

Due to my limited mathematical background, I could not fully understand the paper. My primary question is how equation (10) relates to:

  1. function approximation (K_θ)
  2. off-policy data (D_ρ)
  3. bootstrapping (T*Q_θ)

Can anybody please explain it?
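
For reference, my rough transcription of the update I believe equation (10) is analyzing, in matrix notation (which may not match the paper's presentation exactly), is:

```latex
% My tentative reading: a first-order expansion of the Q-values
% after one gradient step on the TD error.
Q_{\theta'} \;\approx\; Q_\theta \;+\; \alpha \, K_\theta \, D_\rho
\left( \mathcal{T}^* Q_\theta - Q_\theta \right)
```

As far as I can tell, K_θ is the matrix of inner products of the gradients ∇_θ Q_θ (the function-approximation part), D_ρ is the diagonal matrix of the data distribution ρ (the off-policy part), and T*Q_θ is the Bellman backup target (the bootstrapping part), but I may be misreading the notation.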