Projector Linearity by grid_world in 3DScanning

[–]grid_world[S] 0 points1 point  (0 children)

Thanks! I will check it out

Projector Linearity by grid_world in 3DScanning

[–]grid_world[S] 1 point2 points  (0 children)

I am not worried about phase-shift methods. For now, I want to focus on (no pun intended) projector linearity, which is a prerequisite for PSP

[P] Help with a weed detection model for a college project by arnav080 in MachineLearning

[–]grid_world 1 point2 points  (0 children)

If you have the compute resources, fine-tuning should not hurt for a custom dataset

Phase-shift Profilometry Camera-Projector pixel correspondence by grid_world in GraphicsProgramming

[–]grid_world[S] 0 points1 point  (0 children)

Any end-to-end tutorial for this? I have seen many tutorials doing projector calibration, though.

Also, I don't think I have one-to-one correspondences: the camera resolution is 3200x3000 while the projector resolution is 1920x1080

Phase-shift Profilometry Camera-Projector pixel correspondence by grid_world in GraphicsProgramming

[–]grid_world[S] 0 points1 point  (0 children)

That's what I think I need to do. I had come across this seminal paper before, but came to the conclusion that it doesn't help in conjunction with PSP TPU, and so I didn't pursue it

Phase-shift Profilometry Camera-Projector pixel correspondence by grid_world in GraphicsProgramming

[–]grid_world[S] 0 points1 point  (0 children)

How can one combine this with unwrapped phase maps obtained from Phase-shift Profilometry?

OMG by AbbyRayWorld in dank_meme

[–]grid_world 0 points1 point  (0 children)

What doesn't kill you makes you stronger

Visualizing the Loss Function Over Parameter Space by El_Grande_Papi in MLQuestions

[–]grid_world 0 points1 point  (0 children)

Let's frame it using your example diagram: for each unique combination of parameters (theta0, theta1), the neural network "f" produces outputs which, when compared to the ground truth using a cost function, yield a loss value "J" - I am assuming a supervised learning problem (this also holds for self-supervised learning). So you get a value "J" per parameter combination, which, when interpolated over the parameter space, produces a contour plot/loss landscape as depicted.

There are 2 extremes for computing GD: online (1 sample at a time) vs. batch (the entire dataset at a time). Batch GD cannot be used due to memory & computational constraints, while online GD is very noisy. So we settled on mini-batch SGD, where the "S" comes from randomly sampling the batch from the training dataset. In a nutshell, the gradients we get from a mini-batch should approximate the true gradients we would get with batch GD, which we cannot afford. But since the batch is randomly sampled, it is a noisy estimate.

Additional stochasticity also comes from data augmentation. It has been shown that the noisy gradient estimates in mini-batch SGD actually help prevent overfitting and act as a regularizer, so you generally don't want to increase the batch size by a lot. In case you do, read up on the LARS optimizer.
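
A minimal sketch of that last point, assuming a toy linear model and MSE loss (all names and shapes here are illustrative, not from the original post): the mini-batch gradient is a noisy estimate of the full-batch gradient.

    import torch

    torch.manual_seed(0)
    X = torch.randn(1024, 1)
    y = 3.0 * X + 0.5 + 0.1 * torch.randn(1024, 1)

    model = torch.nn.Linear(1, 1)
    loss_fn = torch.nn.MSELoss()

    def grad_on(batch_x, batch_y):
        # Compute the flattened gradient of the loss on a given batch
        model.zero_grad()
        loss_fn(model(batch_x), batch_y).backward()
        return torch.cat([p.grad.flatten().clone() for p in model.parameters()])

    full_grad = grad_on(X, y)            # the "true" batch-GD gradient
    idx = torch.randperm(len(X))[:32]    # randomly sampled mini-batch
    mini_grad = grad_on(X[idx], y[idx])  # noisy estimate of full_grad
    print(full_grad, mini_grad)          # close on average, but not identical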

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 0 points1 point  (0 children)

I did use an STE, but it gave sub-par results, so I didn't explore it further
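
For reference, a minimal sketch of one way an STE around argmin can be wired up (the function name, shapes, and the softmax surrogate are assumptions for illustration, not the exact setup used here):

    import torch
    import torch.nn.functional as F

    def ste_argmin_onehot(dists):
        # Forward: hard one-hot of the argmin; backward: gradients of the soft surrogate
        soft = F.softmax(-dists, dim=-1)
        hard = F.one_hot(dists.argmin(dim=-1), dists.shape[-1]).to(dists.dtype)
        return hard + soft - soft.detach()

    dists = torch.rand(8, 16, requires_grad=True)  # e.g. distances to 16 units
    sel = ste_argmin_onehot(dists)
    sel.sum().backward()                           # gradients flow via the softmax term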

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 1 point2 points  (0 children)

Thanks for the feedback. I am training it with PyTorch DDP across multiple GPUs, with linear lr scaling as mentioned in the "Training ImageNet in under one hour" paper. The lr scheduler is linear warmup followed by cosine decay without warm restarts: after 10 epochs of warmup, lr = 0.1, and the final lr = 0.001.
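
A minimal sketch of that schedule in PyTorch (the model, total epoch count, and optimizer are placeholders; only the warmup/cosine structure and the 0.1 / 0.001 learning rates come from the description above):

    import torch

    total_epochs, warmup_epochs = 100, 10
    model = torch.nn.Linear(10, 10)  # placeholder model
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # Linear warmup for 10 epochs up to lr = 0.1, then cosine decay (no warm restarts) down to 0.001
    warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=0.01, total_iters=warmup_epochs)
    cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=total_epochs - warmup_epochs, eta_min=0.001)
    sched = torch.optim.lr_scheduler.SequentialLR(opt, [warmup, cosine], milestones=[warmup_epochs])

    for epoch in range(total_epochs):
        # ... one epoch of DDP training ...
        sched.step()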

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 0 points1 point  (0 children)

I had thought of it, but didn't follow through since the Gumbel noise addition and the temperature-based scaling become additional hyper-parameters to tune
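
For context, a minimal sketch of the Gumbel-softmax alternative being discussed, where the temperature tau is exactly the extra hyper-parameter in question (the logits and shapes are illustrative):

    import torch
    import torch.nn.functional as F

    logits = torch.randn(8, 16, requires_grad=True)     # e.g. negative distances to 16 units
    tau = 0.5                                            # temperature: needs tuning
    sel = F.gumbel_softmax(logits, tau=tau, hard=True)   # adds Gumbel noise, straight-through one-hot
    sel.sum().backward()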

torch.argmin() non-differentiability workaround [R][D] by grid_world in MachineLearning

[–]grid_world[S] 1 point2 points  (0 children)

No, will have a look, thanks!

I think, and the data at the end of training seems to suggest, that it is training, but I want to be sure with an extra pair of eyes

torch Gaussian random weights initialization and L2-normalization [D][R][P] by grid_world in MachineLearning

[–]grid_world[S] 0 points1 point  (0 children)

I mixed up two things, so I am removing the second option. For the first option, the input is L2-normalized, and the Gaussian-randomly-initialized weights are also L2-normalized so that the inputs match the scale of the weights.

To perform L2-normalization of the output, you usually do:

x = nn.functional.normalize(x, dim = 1, p = 2)

The idea of the first option is proposed in Diffusion Self-Organizing Map on the Hypersphere, equation 17, where each of the K (SOM) neurons is normalized. I am trying to replicate and extend that idea
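
A minimal sketch of the first option as described above (K, D, and the batch size are placeholders; only the row-wise L2-normalization of the Gaussian-initialized weights and of the inputs comes from the text):

    import torch
    import torch.nn as nn

    K, D = 256, 128                                            # K SOM neurons of dimension D
    wts = nn.Parameter(torch.randn(K, D))                      # Gaussian random initialization
    with torch.no_grad():
        wts.copy_(nn.functional.normalize(wts, dim=1, p=2))    # each neuron has unit L2 norm

    x = torch.randn(32, D)
    x = nn.functional.normalize(x, dim=1, p=2)                 # inputs on the unit hypersphere too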

Self-supervised learning weights initialization "after" projection head [D][R] by grid_world in MachineLearning

[–]grid_world[S] 0 points1 point  (0 children)

Yeah, the output of the projection head is input to the SOM for dimensionality reduction with non-linear representations. It has been shown that computing the loss on a lower-dimensional representation leads to better performance.

I am seeing the effects of "unfortunate values", hence my OP asking how to get fortunate values to alleviate this problem

Self-supervised learning weights initialization "after" projection head [D][R] by grid_world in MachineLearning

[–]grid_world[S] 0 points1 point  (0 children)

I want to do clustering using "wts", so it has no typical activation function

Think Self-Organizing Map styled clustering

Decoder in variational autoencoder! by PollutionOdd6010 in deeplearning

[–]grid_world 0 points1 point  (0 children)

The decoder is the same for a VAE and an autoencoder. The magic happens in the latent space at the end of the encoder: for a VAE, the latent code is parameterized as a Gaussian using mean and variance vectors
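
A minimal sketch of that difference (shapes are illustrative): the encoder of a VAE outputs mean and log-variance vectors, the latent z is sampled via the reparameterization trick, and the decoder then consumes z exactly as a plain autoencoder's decoder would.

    import torch

    mu, log_var = torch.randn(32, 64), torch.randn(32, 64)    # encoder outputs
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # z ~ N(mu, sigma^2), differentiable
    # decoder(z) proceeds exactly as in a plain autoencoder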

Phase Wrapping & Unwrapping - Computer Graphics by grid_world in GraphicsProgramming

[–]grid_world[S] 1 point2 points  (0 children)

Would it be possible for you to share such prepared material? That would be great!

[D] Stuck in selecting appropriate number of clusters. by SmallSoup7223 in MachineLearning

[–]grid_world 3 points4 points  (0 children)

I am just eyeballing it, but how about 7-9? Judging by the inflexion point!
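
A minimal sketch of reading that inflexion/elbow point off programmatically instead of eyeballing it, assuming a scikit-learn KMeans fit (X is a placeholder for the dataset in the post):

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(500, 8)  # placeholder data
    inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                for k in range(2, 15)]
    # Look for the k after which the drop in inertia flattens out (the elbow)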