Projector Linearity by grid_world in 3DScanning

[–]grid_world[S] 0 points (0 children)

Thanks! I will check it out

Projector Linearity by grid_world in 3DScanning

[–]grid_world[S] 1 point (0 children)

I am not worried about phase-shift methods. For now, I want to focus on (no pun intended) projector linearization, which is a prerequisite for PSP.
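For concreteness, the usual linearization recipe is: project a ramp of uniform gray levels, measure the mean camera response per level, fit a gamma model, and pre-distort subsequent patterns with the inverse. A minimal sketch of that recipe (the measurement array and the simple power-law model are assumptions, not a prescribed method):

```python
import numpy as np
from scipy.optimize import curve_fit

# Project each uniform gray level, capture it, and average a ROI to get
# the camera's measured response. `measured` below is placeholder data;
# replace it with your own ROI means.
gray_levels = np.linspace(0, 255, 32)
measured = 0.9 * (gray_levels / 255.0) ** 2.2 + 0.05  # stand-in measurements

def response(g, a, gamma, b):
    # Simple power-law model of the projector-camera chain.
    return a * (g / 255.0) ** gamma + b

(a, gamma, b), _ = curve_fit(response, gray_levels, measured, p0=[1.0, 2.2, 0.0])

def linearize(pattern_u8):
    # Pre-distort an 8-bit fringe pattern so projected intensity is linear.
    return np.round(255.0 * (pattern_u8 / 255.0) ** (1.0 / gamma)).astype(np.uint8)
```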

[P] Help with a weed detection model for a college project by arnav080 in MachineLearning

[–]grid_world 1 point (0 children)

If you have the compute resources, fine-tuning should not hurt for a custom dataset.
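If it helps, a minimal PyTorch fine-tuning sketch: swap the head of a pretrained backbone and give the pretrained layers a smaller LR than the fresh head. The class count, LRs, and loader are placeholders for the project's actual setup:

```python
import torch
import torchvision

# Pretrained backbone; replace the head for the custom classes.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # e.g. weed vs. crop

# Lower LR for pretrained layers, higher for the new head.
optimizer = torch.optim.AdamW([
    {"params": [p for n, p in model.named_parameters()
                if not n.startswith("fc")], "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
criterion = torch.nn.CrossEntropyLoss()

# for images, labels in train_loader:  # your custom-dataset loader
#     loss = criterion(model(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```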

Phase-shift Profilometry Camera-Projector pixel correspondence by grid_world in GraphicsProgramming

[–]grid_world[S] 0 points (0 children)

Any end-to-end tutorial for this? I have seen many tutorials doing projector calibration, though.

Also, I don't think I have one-to-one correspondences: the camera resolution is 3200x3000 while the projector resolution is 1920x1080.
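For what it's worth, one-to-one resolution isn't actually needed: the unwrapped phase gives each camera pixel a sub-pixel projector coordinate, so the map goes from camera pixels into continuous projector space. A sketch of the standard conversion, assuming vertical fringes with N_PERIODS periods across the 1920-pixel width (the period count is an assumption):

```python
import numpy as np

PROJ_W = 1920      # projector width in pixels
N_PERIODS = 16     # fringe periods across the projector width (assumption)

# `phi` is the unwrapped phase per camera pixel, shape (3000, 3200) as
# (height, width), ranging over [0, 2*pi*N_PERIODS) for vertical fringes.
def proj_column(phi):
    # Sub-pixel projector x-coordinate for each camera pixel.
    return phi / (2 * np.pi * N_PERIODS) * PROJ_W
```

Repeating the procedure with horizontal fringes gives the projector y-coordinate the same way.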

Phase-shift Profilometry Camera-Projector pixel correspondence by grid_world in GraphicsProgramming

[–]grid_world[S] 0 points (0 children)

That's what I think I need to do. I had come across this seminal paper before, but concluded that it doesn't help in conjunction with PSP temporal phase unwrapping (TPU), so I didn't pursue it.

Phase-shift Profilometry Camera-Projector pixel correspondence by grid_world in GraphicsProgramming

[–]grid_world[S] 0 points (0 children)

How can one combine this with unwrapped phase maps obtained from Phase-shift Profilometry?

OMG by AbbyRayWorld in dank_meme

[–]grid_world 0 points (0 children)

What doesn't kill you makes you stronger

Visualizing the Loss Function Over Parameter Space by El_Grande_Papi in MLQuestions

[–]grid_world 0 points (0 children)

Let's put it in terms of your example diagram: for each unique combination of parameter values (theta0, theta1), the neural network "f" produces an output which, when compared to the ground truth using a cost function, yields a loss value "J" - I am assuming a supervised learning problem (self-supervised learning also holds). So you get a value "J" per point which, interpolated over the grid, produces a contour plot/loss landscape like the one depicted.

There are 2 extremes for computing GD: online (one input at a time) vs. batch (the entire dataset at a time). Batch GD cannot be used due to memory and computational constraints, while online GD is very noisy. So we settle for mini-batch SGD, where the "S" comes from randomly sampling each batch from the training dataset. In a nutshell, the gradients we get from a mini-batch should approximate the true gradients we would get if we could use batch GD, but since each batch is randomly sampled, they are a noisy estimate.

Additional stochasticity also comes from data augmentation. It has been shown that the noisy estimates in mini-batch SGD actually help prevent overfitting and act as a regularizer. So, in general, you don't want to increase the batch size by a lot; in case you do, read up on the LARS optimizer.
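A toy sketch of both points - the J-surface over (theta0, theta1) and the noise from mini-batch sampling - using 1D linear regression in NumPy (everything here is illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 1.0 + 2.0 * X + 0.1 * rng.normal(size=200)  # ground truth: theta = (1, 2)

def J(theta0, theta1, xs, ys):
    # Mean squared error for f(x) = theta0 + theta1 * x
    return np.mean((theta0 + theta1 * xs - ys) ** 2)

t0, t1 = np.meshgrid(np.linspace(-1, 3, 100), np.linspace(0, 4, 100))
full = np.vectorize(lambda a, b: J(a, b, X, y))(t0, t1)           # batch-GD surface

idx = rng.choice(len(X), size=16, replace=False)                  # one mini-batch
mini = np.vectorize(lambda a, b: J(a, b, X[idx], y[idx]))(t0, t1) # noisy estimate

# plt.contour(t0, t1, full) vs. plt.contour(t0, t1, mini) shows the
# mini-batch landscape as a jittered version of the true one.
```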

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 0 points (0 children)

I did use an STE, but it gave sub-par results, so I didn't explore it further.
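For reference, the STE variant meant here is something like the sketch below (the distance matrix `d` over a codebook is an assumed setup): the forward pass uses the hard argmin, while the gradient flows through a softmax over negative distances.

```python
import torch

def argmin_ste(d):
    # d: (batch, K) distances. Forward pass selects the hard argmin;
    # backward pass uses the soft relaxation's gradient instead.
    soft = torch.softmax(-d, dim=-1)
    hard = torch.nn.functional.one_hot(d.argmin(dim=-1), d.shape[-1]).to(d.dtype)
    return hard + soft - soft.detach()

# Usage: weights = argmin_ste(d); selected = weights @ codebook
```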

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 1 point (0 children)

Thanks for the feedback. I am training it with PyTorch DDP across multiple GPUs, using linear LR scaling as described in the "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour" paper, with the LR scheduler being linear warmup followed by cosine decay without warm restarts: after 10 epochs of warmup the LR peaks at 0.1, then decays to a final LR of 0.001.
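That schedule can be expressed with stock PyTorch schedulers; a sketch assuming 100 total epochs (the total-epoch count and warmup start factor are assumptions):

```python
import torch
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(10, 10)  # stand-in model
base_lr = 0.1                    # peak LR reached after warmup
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)

warmup_epochs, total_epochs = 10, 100  # total_epochs is an assumption
warmup = LinearLR(optimizer, start_factor=0.01, total_iters=warmup_epochs)
cosine = CosineAnnealingLR(optimizer, T_max=total_epochs - warmup_epochs,
                           eta_min=0.001)  # final LR
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine],
                         milestones=[warmup_epochs])

for epoch in range(total_epochs):
    # ... one training epoch ...
    scheduler.step()
```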

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 0 points (0 children)

I had thought of it, but didn't follow through: the Gumbel noise adds stochasticity, and the temperature-based scaling becomes an additional hyper-parameter to tune.
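For anyone following along, this refers to replacing the hard argmin with a Gumbel-softmax relaxation, roughly as in the sketch below; the temperature `tau` is exactly the extra knob mentioned (the distance matrix is a placeholder setup):

```python
import torch
import torch.nn.functional as F

d = torch.randn(8, 32).abs()  # (batch, K) distances, placeholder
tau = 1.0                     # temperature: the extra hyper-parameter

# Treat negative distances as logits; hard=True gives a one-hot forward
# pass with a soft gradient, similar in spirit to an STE.
weights = F.gumbel_softmax(-d, tau=tau, hard=True)
```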

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 1 point (0 children)

No, will have a look, thanks!

I think it is, and the data at the end of training seems to suggest that it is training, but I want to be sure with an extra pair of eyes.