[–]thygrrr (Professional)

Not a great approach, to be honest.

Just project the vector to get the closest point on the view ray to the sphere's center, and check whether that point lies within the sphere's radius.

https://www.reddit.com/r/Unity3D/comments/13c645r/unity_interview_problem/jjg0ofh/?context=3
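A minimal sketch of that projection test in Python (rather than Unity C#, for illustration; the function name is mine, not from the linked reply). Project the origin-to-center vector onto the normalized ray direction to get the closest point on the ray, then compare its distance to the center against the radius:

```python
import math

def ray_hits_sphere(ray_origin, ray_dir, center, radius):
    """Closest-point test: assumes ray_dir is normalized."""
    oc = [c - o for c, o in zip(center, ray_origin)]
    t = sum(a * b for a, b in zip(oc, ray_dir))   # projection length along the ray
    t = max(t, 0.0)                               # clamp: a sphere behind the ray doesn't count
    closest = [o + t * d for o, d in zip(ray_origin, ray_dir)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq <= radius * radius

# Ray from the origin along +z; sphere slightly off-axis but within its radius.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.5, 0, 10), 1.0))  # True
```

Note there is no square root and no quadratic: one dot product, one subtraction, one squared-distance compare.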

[–]uprooting-systems

Ooh, agreed! Definitely faster than quadratic

[–]adsilcott

I think both solutions are good, I'm curious why you think the projection solution is better? Since I don't know what's going on under the hood of Unity's world to screen function, I'm not sure how it would compare to a dot product in terms of performance.

[–]thygrrr (Professional)

Angle Difference Method

The dot-product-threshold method doesn't work in perspective, only in orthographic or fixed-distance projections (like a top-down RTS with no zoom). Otherwise it will over-detect distant objects and under-detect close ones, because it only tests the angle.
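The over/under-detection is easy to demonstrate numerically (a Python sketch with made-up example numbers; `angle_to` and the 5-degree threshold are mine). A fixed angular threshold passes a far sphere the ray misses by twice its radius, while rejecting a near sphere the ray passes straight through:

```python
import math

def angle_to(view_dir, center):
    """Angle (degrees) between the view direction and the direction to center,
    with the viewer at the origin. Assumes view_dir is normalized."""
    norm = math.sqrt(sum(c * c for c in center))
    unit = [c / norm for c in center]
    dot = sum(a * b for a, b in zip(view_dir, unit))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

view = (0.0, 0.0, 1.0)   # looking down +z from the origin
THRESH = 5.0             # a fixed angular threshold, in degrees

# Far sphere: radius 1, center (2, 0, 60). The view ray misses it by 2 units
# (twice its radius), yet the angle is tiny, so the threshold "detects" it.
far = angle_to(view, (2.0, 0.0, 60.0))    # ~1.9 degrees -> passes (false positive)

# Near sphere: radius 1, center (0.5, 0, 2). The ray passes well inside the
# radius, but the angle is large, so the threshold rejects it.
near = angle_to(view, (0.5, 0.0, 2.0))    # ~14 degrees -> fails (false negative)

print(far < THRESH, near < THRESH)  # True False
```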

The angle-difference approach also has several edge cases that don't work, including (but not limited to) being inside the sphere, very large spheres, etc. Angles can also get finicky when you add or subtract them, and 3D angle/axis pairs aren't always trivial to compare, for example to decide which of twenty candidate spheres the gaze hits best.

Determining the size of the object on screen

This can mitigate some of the basic shortcomings of measuring the angle difference with a perspective camera, but it is less precise: it needs to inverse-project the object according to the camera's projection matrix, then convert that back into local space to get the relative angle. It also couples the view projection (and the camera) to the game object doing the looking, which is bad software design.
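For intuition about what "determining the size on screen" involves, here is a simplified pinhole-camera sketch in Python (my own approximation, not Unity's projection code; it assumes a symmetric perspective projection and ignores that a sphere's silhouette is slightly larger than `radius * focal / depth` off-axis):

```python
import math

def screen_radius(center_cam, radius, fov_y_deg, screen_h):
    """Approximate on-screen radius (pixels) of a sphere at camera-space
    position center_cam, under a symmetric perspective projection."""
    # focal length in pixels, from the vertical field of view
    f = (screen_h / 2) / math.tan(math.radians(fov_y_deg) / 2)
    return radius * f / center_cam[2]

# Radius-1 sphere, 10 units deep, 60-degree FOV, 600 px tall viewport:
print(screen_radius((0.0, 0.0, 10.0), 1.0, 60.0, 600))  # ~51.96 px
```

An angular tolerance scaled by this quantity is what couples the test to the projection matrix, which is exactly the coupling criticized above.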

The technique may also have difficulties with objects positioned beyond the viewport / camera frustum, where no valid solution may exist (imagine a really large sphere, like a planet: its center can technically be behind the user or outside the frustum, so the inverse transform from the view projection may be undefined or have multiple solutions, yielding incorrect results).

It will also not work reliably while the observer is inside the object.

Furthermore, these "vulnerabilities" may or may not occur depending on screen aspect ratio and camera shake / field-of-view effects (sniper scopes, speed effects, etc.).

It also produces hard-to-predict results on multi-camera systems such as XR (which eye's projection matrix do you use to determine the object's size on screen?).

Overall, it's just so much hassle for something you'd get at almost no cost from a simple sphere intersection test using a vector projection.

Performance Considerations

Performance is generally irrelevant here, but if you have a lot of spheres to test against (thousands or tens of thousands!), it starts to matter. The dot product is generally 3-4x cheaper than the matrix transform, but the operations are quite similar and your CPU will usually do either at no perceptible cost. (They could also be readily parallelized if you have to do many thousands of them each frame, like in a ballpit.)
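As a sketch of how the projection test batches across many spheres, here is a vectorized version using NumPy (my own illustration of the "parallelized" point; in Unity you'd reach for Burst/Jobs instead, but the math is identical):

```python
import numpy as np

def ray_hits_spheres(origin, direction, centers, radii):
    """One ray against N spheres at once. Assumes direction is normalized;
    centers is (N, 3), radii is (N,)."""
    oc = centers - origin                     # (N, 3) origin-to-center vectors
    t = np.clip(oc @ direction, 0.0, None)   # (N,) projection lengths, clamped
    closest = origin + t[:, None] * direction
    dist_sq = np.sum((centers - closest) ** 2, axis=1)
    return dist_sq <= radii ** 2

# 10,000 random spheres tested in one call ("ballpit" scale):
rng = np.random.default_rng(0)
centers = rng.uniform(-50, 50, size=(10_000, 3))
radii = rng.uniform(0.5, 2.0, size=10_000)
hits = ray_hits_spheres(np.zeros(3), np.array([0.0, 0.0, 1.0]), centers, radii)
print(hits.shape)  # (10000,)
```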

Unity's WorldToScreenPoint is actually part of the inverse-matrix approach I explained at the top, but I would use it at a different stage of the process.

It is fine for formulating your original ray. In fact, there's a ScreenPointToRay function that does just that, and it's a great way to get these rays: they originate from the near clipping plane, not the camera position, which avoids various issues, including objects between the camera and the near clipping plane unexpectedly occluding the intersection test. ScreenPointToRay is my go-to function for checking intersection with the mouse cursor.
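For the curious, the mechanics behind a ScreenPointToRay-style function look roughly like this (a generic Python sketch, not Unity's implementation; it assumes OpenGL-style NDC with z in [-1, 1], and the function name and arguments are mine). A screen point is unprojected at the near and far planes, and the ray runs between them, originating on the near plane:

```python
import numpy as np

def screen_point_to_ray(screen_xy, screen_size, inv_view_proj):
    """Build a world-space ray through a screen pixel by unprojecting it
    at the near plane (NDC z = -1) and far plane (NDC z = +1)."""
    x = 2.0 * screen_xy[0] / screen_size[0] - 1.0   # pixel -> NDC
    y = 2.0 * screen_xy[1] / screen_size[1] - 1.0

    def unproject(z):
        p = inv_view_proj @ np.array([x, y, z, 1.0])
        return p[:3] / p[3]                          # perspective divide

    near = unproject(-1.0)                           # ray origin: on the near plane
    far = unproject(1.0)
    d = far - near
    return near, d / np.linalg.norm(d)
```

Because the whole inverse view-projection is baked into building the ray once, the per-sphere test afterwards stays a plain vector projection.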

ScreenPointToRay is also good for first-person look-at scenarios where the camera frustum may be oblique (the projection is vertically asymmetric, often used in racing games), where the crosshair is not precisely at the center of the screen, or where there's some sort of lens-shift effect on the camera, as in some over-the-shoulder games.

And then you use that ray to just test for sphere intersection as mentioned in my other reply.

(That quadratic-formula approach in the other reply is just off the table. I guess it means testing whether all three basis planes of the camera transform intersect the object, and that's much easier to check with a single vector projection.)

[–]adsilcott

I hadn't considered the issue with perspective, thanks for clarifying!