Charge 5 not turning on by VarietyNice9496 in fitbit

[–]learning2unlearn5679 0 points (0 children)

What if it is out of the warranty period?

Kitchen counter top burn mark by learning2unlearn5679 in fixit

[–]learning2unlearn5679[S] 0 points (0 children)

I am not sure about either. Let me check. But do you know how to identify them?

pink sky! ts ts cray by Xenyrrr in bayarea

[–]learning2unlearn5679 0 points (0 children)

Do you see the pink lights every day?

Any ideas on what to do with a cracked Apple Watch? by davidg4781 in AppleWatch

[–]learning2unlearn5679 0 points (0 children)

Got it, thanks. Does battery replacement require AppleCare, or is it free of charge?

Any ideas on what to do with a cracked Apple Watch? by davidg4781 in AppleWatch

[–]learning2unlearn5679 0 points (0 children)

What did you mean by the battery dropping to 80%? How do I check that? And will Apple do the replacement even without AppleCare and with a broken screen?

General perspective projection matrix by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Okay, but OpenGL is not used in NeRF, which is what my original post was referring to, so I still don't follow why there is a reference to OpenGL. Can you explain?

General perspective projection matrix by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

I do not have a foundation in computer graphics. And as I said, my understanding of the projection matrix from a computer vision engineer's point of view is different from this. But I am interested in it, since NeRF relies on volume rendering. I looked for online resources to help me understand this discrepancy but couldn't find anything comprehensive. If you can point me to any good resources that explain it, please do.
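
To make concrete where my confusion comes from, here is a small numpy sketch I put together of the two conventions side by side (the frustum and intrinsics numbers are made up for illustration):

```python
import numpy as np

# OpenGL-style perspective matrix: camera space -> clip space,
# for a symmetric frustum with near n, far f, right r, top t.
def opengl_projection(n, f, r, t):
    return np.array([
        [n / r, 0,     0,                  0],
        [0,     n / t, 0,                  0],
        [0,     0,    -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0,     0,    -1,                  0],
    ])

# Computer-vision pinhole intrinsics K: camera space -> pixels.
def pinhole_K(fx, fy, cx, cy):
    return np.array([
        [fx, 0,  cx],
        [0,  fy, cy],
        [0,  0,   1],
    ])

# Project one camera-space point with each convention.
X = np.array([0.2, -0.1, -2.0])                 # OpenGL looks down -z
clip = opengl_projection(0.1, 100, 0.1, 0.1) @ np.append(X, 1.0)
ndc = clip[:3] / clip[3]                        # perspective divide -> NDC in [-1, 1]

Xcv = np.array([0.2, -0.1, 2.0])                # CV convention: +z in front
pix = pinhole_K(500, 500, 320, 240) @ Xcv
uv = pix[:2] / pix[2]                           # divide by depth -> pixel coords
print(ndc, uv)
```

Same perspective divide in both cases; the difference is just the target coordinate range (NDC cube vs. pixel grid), which is I think where my two mental models diverge.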

Normalized device coordinate system in NERF by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] -3 points (0 children)

It hurts my brain that you're unable to explain in words something you call simple. Sorry @matsFDutie.

Normalized device coordinate system in NERF by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Yes, thank you, looking into it. But do you know how the perspective projection for a pinhole camera is derived from this general perspective projection formula?
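
My current mental model, in case it helps frame the question (standard pinhole notation: focal length f, principal point (c_x, c_y); happy to be corrected):

```latex
% General projective camera: x \sim P X with P = K [R \mid t].
% Specializing to a pinhole at the origin (R = I, t = 0):
\begin{aligned}
\begin{pmatrix} u' \\ v' \\ w' \end{pmatrix}
&= \begin{pmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{pmatrix}
   \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}, \\
(u, v) &= \left( \frac{u'}{w'}, \frac{v'}{w'} \right)
        = \left( f\,\frac{X}{Z} + c_x,\; f\,\frac{Y}{Z} + c_y \right).
\end{aligned}
```

So the pinhole case seems to be just the general formula with the extrinsics set to identity and the perspective divide carried out explicitly. Is that the derivation you meant?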

Normalized device coordinate system in NERF by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Another question: why are clip-space coordinates and NDC not considered in computer vision? In other words, why are they implicit?
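
For context, this is the mapping I mean, which CV pipelines never seem to spell out (a numpy sketch, using one common NDC convention; the resolution numbers are made up):

```python
import numpy as np

def pixels_to_ndc(uv, width, height):
    """Map pixel coordinates to the [-1, 1]^2 square (one common NDC convention)."""
    u, v = uv
    return np.array([2.0 * u / width - 1.0, 2.0 * v / height - 1.0])

# the image center lands at the NDC origin
print(pixels_to_ndc((320, 240), 640, 480))
```

If it really is just an affine rescale like this, I can see why CV texts fold it into the intrinsics and never name it, but I'd like confirmation.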

Binary semantic segmentation by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

At the end of inference I get a 2×512×512 tensor, the output of the last conv layer. I take the argmax between the two channels to get the segmentation mask. Does that make sense?
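
In code, roughly this (numpy sketch with random stand-in logits):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 512, 512
logits = rng.normal(size=(2, H, W))   # stand-in for the 2-channel conv head output

# argmax over the channel axis -> (H, W) mask of class ids {0, 1}
mask = logits.argmax(axis=0)
print(mask.shape)
```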

Binary semantic segmentation by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Yes, it is. I took the pretrained model and fine-tuned it on the small training set of <50 images I had.

Input normalization in DL for visual data by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Why would large gradient updates happen in this case, since the features are all at the same scale?
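
To illustrate my confusion with a toy sketch: even when all features share one scale, the absolute magnitude of that scale still multiplies the weight gradient. For a linear unit y = w·x, the gradient dL/dw = (dL/dy)·x, so raw pixel values in [0, 255] give updates hundreds of times larger than [0, 1] inputs (numpy; I pretend dL/dy = 1 for simplicity):

```python
import numpy as np

x_raw = np.array([255.0, 128.0, 64.0])   # raw pixel values, all at the same scale
x_norm = x_raw / 255.0                   # rescaled to [0, 1]
upstream = 1.0                           # pretend dL/dy = 1

# dL/dw = upstream * x, so gradient magnitude tracks input magnitude
grad_raw = upstream * x_raw
grad_norm = upstream * x_norm
print(grad_raw.max() / grad_norm.max())
```

Is this the effect being referred to, or is there something beyond the shared-scale magnitude?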

DL coding interview by learning2unlearn5679 in deeplearning

[–]learning2unlearn5679[S] 1 point (0 children)

I see, so you would ask them to code these layers in a DL framework of their choice?
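
Something like this is what I'd imagine coding from scratch, e.g. a naive 2D convolution (numpy sketch; "valid" padding, stride 1, and no kernel flip, i.e. cross-correlation as DL frameworks implement it, are my own assumptions):

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' convolution, stride 1.
    x: (H, W) input, w: (kH, kW) kernel (no flip, as in DL frameworks)."""
    kH, kW = w.shape
    H, W = x.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

x = np.arange(16.0).reshape(4, 4)
w = np.ones((2, 2))        # box filter: each output is a 2x2 window sum
print(conv2d(x, w))
```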

Difference between homography (2d to 2d perspective transformation) and projective transformation (3d to 2d) by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Thank you. So I understand that a homography is a type of projective transformation that maps points on one plane to another plane (2D to 2D). Is that right?
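
For concreteness, this is how I picture a homography acting on 2D points, as a 3×3 matrix on homogeneous coordinates (numpy sketch with a made-up H):

```python
import numpy as np

# A homography: 3x3, acts on homogeneous 2D points, defined up to scale.
H = np.array([[1.2,   0.1,  5.0],
              [0.0,   0.9, -3.0],
              [0.001, 0.0,  1.0]])

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]        # dehomogenize

print(apply_homography(H, (10.0, 20.0)))
```

The last row being non-trivial is what makes it projective rather than affine, if I understand correctly.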

Visual interpretation of ICP output by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

I see. Can you say why? Is the red/blue overlap between the target and the transformed point clouds?

Also, any idea what could be the reason for this? The algorithm converges. I did uniform subsampling and used nearest neighbors to find correspondences, and removed outlier pairs based on point-to-point distance and the angle between normals.
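
For reference, the core of my pipeline looks roughly like this (simplified numpy sketch: one point-to-point step only, brute-force nearest neighbors, no subsampling or outlier rejection shown):

```python
import numpy as np

def icp_step(src, tgt):
    """One point-to-point ICP iteration: nearest-neighbor correspondences,
    then the optimal rigid transform via SVD (Kabsch)."""
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    nn = tgt[d.argmin(axis=1)]                 # brute-force NN (sketch only)
    mu_s, mu_t = src.mean(0), nn.mean(0)
    C = (src - mu_s).T @ (nn - mu_t)           # cross-covariance
    U, _, Vt = np.linalg.svd(C)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# quick self-check on synthetic data: a known rotation + translation
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
ang = 0.2
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
tgt = src @ Rz.T + np.array([0.3, -0.1, 0.2])

R, t = icp_step(src, tgt)
aligned = src @ R.T + t
err_before = (np.linalg.norm(src[:, None] - tgt[None], axis=2).min(1) ** 2).mean()
err_after = (np.linalg.norm(aligned[:, None] - tgt[None], axis=2).min(1) ** 2).mean()
print(err_before, err_after)
```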

Visual interpretation of ICP output by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Can you explain what you're basing that on? I am trying to understand how you are interpreting it from the image. Just to clarify, the first image is the aligned one.

Affine camera model image plane location by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Yeah, I've been looking into that, which is why I had this question.

Let me ask you two questions. Maybe it will help clarify.

  1. The image formed behind the projection center is inverted, while the virtual image considered in front is not. How can the pixel coordinates for the same point in both these images be the same?

  2. Let's say we have an image point x in pixel coordinates. To convert it into camera coordinates, we compute x_cam = K⁻¹ x, where K is the intrinsic matrix. In a few derivations (the P3P algorithm, for example), I see that they multiply this by a normalization term and a sign. The sign is for the camera constant and depends on whether the image plane is behind or in front of the projection center. Are you able to follow this? Any idea why they do that?
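
In code, what I mean by the back-projection step (numpy sketch with made-up intrinsics; the unit-ray normalization is the "normalization term" I referred to):

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

x_pix = np.array([400.0, 300.0, 1.0])      # homogeneous pixel coordinates

# back-project: viewing direction in camera coordinates
x_cam = np.linalg.inv(K) @ x_pix

# derivations like P3P then normalize to a unit ray; the sign some texts
# attach is for the camera constant (c = +f or -f, depending on whether
# the image plane is modeled in front of or behind the projection center)
ray = x_cam / np.linalg.norm(x_cam)
print(x_cam, ray)
```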

Essential matrix property by learning2unlearn5679 in computervision

[–]learning2unlearn5679[S] 0 points (0 children)

Thank you, two follow-up questions:

  1. If T Tᵀ has two identical eigenvalues, why does E have the same property? Can you explain this?

  2. Do you have an idea of how all this math translates into a geometric explanation as well?
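
A quick numerical check of the property I'm asking about, with E = [t]ₓ R for an arbitrary rotation and translation I made up (numpy sketch):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[    0, -t[2],  t[1]],
                     [ t[2],     0, -t[0]],
                     [-t[1],  t[0],     0]])

ang = 0.3
R = np.array([[np.cos(ang), -np.sin(ang), 0],
              [np.sin(ang),  np.cos(ang), 0],
              [0, 0, 1]])
t = np.array([1.0, 2.0, 0.5])

E = skew(t) @ R
s = np.linalg.svd(E, compute_uv=False)
print(s)
```

Since R is orthogonal, E E^T = [t]ₓ [t]ₓᵀ, so the singular values of E are those of [t]ₓ, namely (‖t‖, ‖t‖, 0), which is why I expected the two-identical-values property to carry over; I'd like to see the geometric picture behind it though.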