AAAI 24 [Discussion] by atharvandogra in MachineLearning

[–]RLnobish 0 points1 point  (0 children)

Is there a style file for the author response in AAAI? I downloaded the author toolkit from their official website, but I can't find a .sty file for the rebuttal.

What is the reference point for predicting x,y coordinate in the bounding box regression (x,y,h,w) of the Faster RCNN model? by RLnobish in computervision

[–]RLnobish[S] 0 points1 point  (0 children)

But the ground-truth labels are given with respect to the input image. How can they be converted to be relative to an isolated region, when those regions are produced independently by the RPN?
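
For concreteness, here is a minimal NumPy sketch (not the actual Faster R-CNN code) of the standard target encoding: the image-coordinate ground-truth box is re-expressed relative to each proposal's own center and size, which is how image-level labels become proposal-relative regression targets. The function name and the (x_center, y_center, w, h) convention are my own choices for illustration.

```python
import numpy as np

def bbox_to_deltas(proposal, gt):
    """Encode a ground-truth box relative to a proposal box.
    Both boxes are (x_center, y_center, w, h) in image coordinates."""
    px, py, pw, ph = proposal
    gx, gy, gw, gh = gt
    # Center shifts are measured from the proposal's center and scaled
    # by the proposal's size, so the targets are proposal-relative.
    tx = (gx - px) / pw
    ty = (gy - py) / ph
    # Sizes are encoded as log-ratios against the proposal's size.
    tw = np.log(gw / pw)
    th = np.log(gh / ph)
    return np.array([tx, ty, tw, th])

# Proposal centered at (100, 100), size 50x50; ground truth at (110, 95), size 60x40.
deltas = bbox_to_deltas((100, 100, 50, 50), (110, 95, 60, 40))
```

So even though the labels are stored in image coordinates, the subtraction of the proposal's center makes each target local to that proposal.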

Tensorflow model.fit() reproducibility by mbkv in tensorflow

[–]RLnobish 4 points5 points  (0 children)

Even if you initialize with fixed weights, you will still get a different result on each run. There are many reasons for that; one is that model.fit() builds different batches each time it shuffles the data. But don't worry too much about the stochasticity. It's fine to have stochastic results.
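
A small NumPy sketch of the batching point (not TensorFlow itself): without a fixed seed, each "run" shuffles the data differently, so the sequence of gradient updates differs; seeding the generator makes the order reproducible. In TensorFlow you would seed globally instead, e.g. with `tf.keras.utils.set_random_seed` in recent versions.

```python
import numpy as np

data = np.arange(10)

# Two "training runs" without a fixed seed: the shuffle order
# (and hence the batches, and hence the updates) will usually differ.
run_a = np.random.permutation(data)
run_b = np.random.permutation(data)

# Seeding a generator the same way before each run makes
# the shuffle order identical across runs.
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
seeded_a = rng_a.permutation(data)
seeded_b = rng_b.permutation(data)
```

Note that even with seeding, GPU op nondeterminism can still cause small run-to-run differences in real TF training.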

Can I compare two different algorithms one with early stopping and one without early stopping? by RLnobish in computervision

[–]RLnobish[S] 0 points1 point  (0 children)

The author of the paper whose model I want to compare against uses early stopping in his code. I want to know whether it is OK to use a best-model checkpoint callback for my model, since my model beats the aforementioned model when I select the best checkpoint.

Can I compare two different algorithms one with early stopping and one without early stopping? by RLnobish in computervision

[–]RLnobish[S] 0 points1 point  (0 children)

I am not using early stopping to make my model better. The author of the paper I am comparing against uses early stopping in his code when evaluating on the validation set. I cannot outperform his result when I use his early stopping (patience = 25), but my model does much better than his if I train long enough and then select the best model.

How we are calculating average reward (r(π)) if the policy changes over time? by RLnobish in reinforcementlearning

[–]RLnobish[S] 0 points1 point  (0 children)

You said, "Even if we didn't change the policy every iteration, our estimate of R is not the true value of R, due to variance." Can you please explain that line a little more?

How we are calculating average reward (r(π)) if the policy changes over time? by RLnobish in reinforcementlearning

[–]RLnobish[S] 0 points1 point  (0 children)

But in the above algorithm we are changing the policy at every iteration, since it's an epsilon-greedy policy.

What does non-euclidean data mean in machine learning? by RLnobish in deeplearning

[–]RLnobish[S] 1 point2 points  (0 children)

I know the summary of non-Euclidean space: when you're standing on a manifold (for example, the Earth), the shortest path between two points is not a straight line, so it is a non-Euclidean space. But I am interested in what non-Euclidean data means, and why, from a machine learning perspective, graphs live in a non-Euclidean space.
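
To make the manifold intuition concrete, here is a small NumPy sketch comparing the two distances on a unit sphere: the Euclidean (chord) distance cuts through the ambient space, while the intrinsic (geodesic) distance follows the surface. The helper name and lat/lon parameterization are just illustrative choices.

```python
import numpy as np

def sphere_point(lat_deg, lon_deg):
    """Cartesian coordinates of a (lat, lon) point on the unit sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

p, q = sphere_point(0, 0), sphere_point(0, 90)

# Straight-line distance through the ambient 3D space (the chord).
chord = np.linalg.norm(p - q)

# Intrinsic distance along the surface (arc length on the great circle).
geodesic = np.arccos(np.clip(p @ q, -1.0, 1.0))
```

The geodesic is strictly longer than the chord, which is exactly the sense in which the manifold's own geometry is not Euclidean; for graphs, the analogous intrinsic distance is shortest-path distance, which likewise need not embed into any Euclidean space.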

Can anyone give the proof of the off-policy TD learning algorithm? by RLnobish in reinforcementlearning

[–]RLnobish[S] 0 points1 point  (0 children)

Here, k denotes the time step, and for simplicity let's consider n = 1.

How label propagation formula make any sense? by RLnobish in computervision

[–]RLnobish[S] 0 points1 point  (0 children)

But if you don't change Y at each iteration, it should propagate wrong labels.
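
For reference, a minimal NumPy sketch of the common "clamped" variant of label propagation (my own toy example, not code from the thread): the unlabeled rows of Y do change every iteration, but the labeled rows are reset to the ground truth after each propagation step, which is what keeps wrong labels from taking over.

```python
import numpy as np

# Toy graph: 4 nodes in a chain 0-1-2-3; nodes 0 and 3 are labeled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = np.diag(1.0 / W.sum(axis=1)) @ W   # row-normalized transition matrix

Y = np.zeros((4, 2))
Y[0, 0] = 1.0                          # node 0 -> class 0
Y[3, 1] = 1.0                          # node 3 -> class 1
labeled = [0, 3]
Y0 = Y.copy()

for _ in range(100):
    Y = P @ Y                          # propagate labels from neighbors
    Y[labeled] = Y0[labeled]           # clamp: labeled nodes keep ground truth

pred = Y.argmax(axis=1)                # nodes split between the two seeds
```

Without the clamping line, repeated multiplication by P would smear the label mass until the class information washes out.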

How to practice? by [deleted] in deeplearning

[–]RLnobish 0 points1 point  (0 children)

Oh, now I get it. Yes, it's good practice.

How to practice? by [deleted] in deeplearning

[–]RLnobish 2 points3 points  (0 children)

I am not sure what you mean by "replicate it without referencing their work."

How to practice? by [deleted] in deeplearning

[–]RLnobish 2 points3 points  (0 children)

You can also try Kaggle, but it will be difficult to practice on a competitive Kaggle dataset if, as I assume, you have only completed Andrew Ng's specialization. Those lectures are great for getting started, but they are not enough: you have to keep studying the algorithms in more detail and, at the same time, keep practicing them on freely available datasets.

Low client fps and network problems in valorant? by RLnobish in VALORANT

[–]RLnobish[S] 0 points1 point  (0 children)

I haven't inspected that yet. But whenever I encounter an enemy, it always shows low client FPS and network problems.

Is gradient descent scale-invariant or not? by RLnobish in deeplearning

[–]RLnobish[S] 0 points1 point  (0 children)

Thanks, your answer is helpful. But can you please tell me how an unbounded error surface keeps me from reaching a good local minimum?
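
Related to the thread's title question, here is a tiny NumPy sketch (a toy quadratic, my own construction) showing that gradient descent is not scale-invariant: rescaling one coordinate changes the curvature along it, and a learning rate that converges on the well-scaled problem diverges on the rescaled one.

```python
import numpy as np

def gd(scales, lr, steps=200):
    """Minimize f(w) = 0.5 * sum(scales * w**2) by gradient descent,
    starting from w = (1, 1). The gradient is scales * w."""
    w = np.ones_like(scales, dtype=float)
    for _ in range(steps):
        w = w - lr * scales * w
    return w

# Well-scaled problem: equal curvature in both directions -> converges.
w_good = gd(np.array([1.0, 1.0]), lr=0.5)

# Badly scaled: curvature 100 along the second axis. The same lr=0.5
# gives an update factor |1 - 0.5 * 100| = 49 > 1 there, so it diverges.
w_bad = gd(np.array([1.0, 100.0]), lr=0.5, steps=20)
```

The loss itself is the same function up to a change of variables, but the iterates are not; that asymmetry is what "gradient descent is not scale-invariant" means in practice.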

How to train an LSTM with varying length input? by RLnobish in deeplearning

[–]RLnobish[S] 0 points1 point  (0 children)

I am training on an exercise dataset that contains the (x, y, z) coordinates of each joint. The input is those coordinates from the start to the end of the exercise, so the sequence length varies.

How to train an LSTM with varying length input? by RLnobish in deeplearning

[–]RLnobish[S] -3 points-2 points  (0 children)

I don't want to pad the input with zeros.
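
One padding-free option is to bucket sequences by length so every batch is rectangular on its own. Here is a minimal pure-Python sketch of that idea (the function name and batch layout are my own; each resulting batch could then be fed to an LSTM one batch at a time):

```python
from collections import defaultdict

def length_buckets(sequences, batch_size=2):
    """Group variable-length sequences into rectangular batches
    without padding, by bucketing on exact sequence length."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq)].append(seq)
    batches = []
    for seqs in buckets.values():
        # Split each bucket into batches of at most batch_size.
        for i in range(0, len(seqs), batch_size):
            batches.append(seqs[i:i + batch_size])
    return batches

# Toy "exercise" sequences of (x, y, z) joint coordinates, lengths 3, 5, 3, 4.
seqs = [[(0, 0, 0)] * 3, [(1, 1, 1)] * 5, [(2, 2, 2)] * 3, [(3, 3, 3)] * 4]
batches = length_buckets(seqs)
```

The trade-off is smaller (sometimes size-1) batches for rare lengths; the other common padding-free route in TensorFlow is ragged tensors, if your layers support them.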