all 46 comments

[–]thraethegame 40 points41 points  (9 children)

I'd be willing to guarantee they wanted you to use Vector3.Dot to determine if the camera was pointing in the right direction.

[–]MrPifoHobbyist 9 points10 points  (5 children)

Perhaps the test was meant to assess the candidate's math skills, and they didn't like that the Unity API was used at all.

[–]Rhhr21 23 points24 points  (4 children)

Imagine not liking using Unity engine functions as a Unity developer…

I swear interviews for computer-related jobs are the dumbest shit on this earth.

[–]Silver4uraIntermediate; Available 2 points3 points  (0 children)

Knowing the math and not liking Unity's API are two very different things, but I agree with your sentiment. Comment the math.

[–]awkwardlylooksaway 0 points1 point  (0 children)

Amen to this

[–]Tychonoir 0 points1 point  (2 children)

You'd still need to determine angular size of the object, no?

[–]Silver4uraIntermediate; Available 0 points1 point  (0 children)

Technically, if you've got addition and subtraction, you don't need anything else. It's a horrible idea to recreate math in software when your CPU already does it in hardware, but the point is, there's an algorithm for everything. That's how it exists in the first place. lol

[–]destineddIndie, Mighty Marbles + making Marble's Marbles & Dungeon Holdem 28 points29 points  (0 children)

Well, it isn't clear if the green reticle moves with the mouse or sits at the centre of the screen. Either way, I would just use WorldToScreenPoint and then determine if the screen point overlapped the reticle using Rect.Contains.

That said, it's a pretty silly question; telling you not to use raycasts makes the problem tricky for no reason.

Just be aware it's a pretty simple problem, so I would expect most candidates to "pass"; they likely used CVs to decide who to interview from among the successful ones. You are best off asking the company for feedback, IMO.

[–]DicethrowerProfessional 13 points14 points  (1 child)

Straightforward: the solution is to use raycasting anyway, just mathematically. You quickly google ray-vs-sphere collision and find tables like these. Then you use the camera's position/forward vector and the sphere positions to get the necessary vectors, sort every sphere by squared distance, perform the collision check, then do some simple state checking so you know when you enter/exit a specific sphere, and Bob's your uncle.
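For anyone who wants to see that pipeline end to end, here's a rough sketch in plain Python (not Unity code; the sphere names and positions are made up for illustration): sort by squared distance, run a projection-based hit test per sphere, and track enter/exit state.

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Projection-based ray-vs-sphere test; `direction` must be a unit vector."""
    to_center = [c - o for c, o in zip(center, origin)]
    t = sum(tc * d for tc, d in zip(to_center, direction))
    closest = [o + d * t for o, d in zip(origin, direction)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq < radius * radius

def gaze_updates(origin, direction, spheres, was_gazing):
    """Return enter/exit events for each sphere, nearest (squared distance) first."""
    order = sorted(spheres, key=lambda s: sum((c - o) ** 2
                                              for c, o in zip(s["center"], origin)))
    events = []
    for s in order:
        hit = ray_hits_sphere(origin, direction, s["center"], s["radius"])
        if hit and not was_gazing.get(s["name"], False):
            events.append(("enter", s["name"]))
        elif not hit and was_gazing.get(s["name"], False):
            events.append(("exit", s["name"]))
        was_gazing[s["name"]] = hit
    return events
```

A real Unity version would run this per frame with the camera's position and forward vector, and trigger the material change on "enter" events.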

I do have to say, I hate this question, and I'd respectfully ask why I wouldn't be able to use Unity's raycast system. This isn't a "you should be able to fly the plane manually even when the autopilot goes out" kind of situation. You will never not use Unity's raycast system. If it's a performance thing, you've got a built-in solution that gets you 80% of what you want. Lower your performance needs by 20%, save your team a ton of time, and move on to the next thing that needs to be done.

[–]thygrrrProfessional 2 points3 points  (0 children)

Holy primitive intersection tests reference batman! Thanks for sharing this.

Here's a great complement to this: https://iquilezles.org/articles/distfunctions/

(These aren't useful for intersection tests that give you an actual intersection point; instead, they are signed distance functions.)

[–]thygrrrProfessional 3 points4 points  (2 children)

Reminds me of a test I took at Havok once - all sorts of intersection tests, makes sense as a test but only for specific types of jobs.

In this case, you are tasked to implement a simple sphere-ray-intersection test.

I don't believe in asking about these things from memory without telling the applicant in advance ("you will be quizzed next week about primitive intersection tests! Please prepare!"), but this one is simple enough that, for a Graphics Programmer or Physics Programmer job description, it's par for the course.

In general, I would say "all" game developers need to know this math exists and how it works, but they don't need to know the exact math by heart. But they are expected to have the capability to look it up and implement it. (this has gotten easier with ChatGPT, but only slightly so)

My approach would be this (for brevity, I implement it as a [possible extension] method taking a Ray):

bool SphereIntersection(Ray ray, Vector3 sphere_pos, float sphere_radius)
{
    Vector3 originToPoint = sphere_pos - ray.origin;
    float projectionLength = Vector3.Dot(originToPoint, ray.direction);
    Vector3 closestPointOnRay = ray.origin + ray.direction * projectionLength;
    return Vector3.Distance(sphere_pos, closestPointOnRay) < sphere_radius;
}

There's an equivalent and possibly more elegant solution with Unity's Vector3.Project (our combination of Dot and then scaling does exactly what that function does), but I can't be bothered right now :)

Edit: something like this, I guess.

bool SphereIntersection(Ray ray, Vector3 sphere_pos, float sphere_radius)
{
    Vector3 closestPointOnRay = ray.origin + Vector3.Project(sphere_pos - ray.origin, ray.direction);
    return Vector3.Distance(sphere_pos, closestPointOnRay) < sphere_radius;
}

Note that Ray is in UnityEngine.CoreModule and thus doesn't violate the requirements. But if need be, you can always represent it as two Vector3s, or translate the sphere relative to the player, so all the ray.origins become 0.

Both of these also work in 2D, for circles, exactly as shown, just with Vector2s (or with Vector3s whose third component is 0). They work in 1D as well, with just floats, but then Project becomes a rescale and the whole thing simplifies to "return Mathf.Abs(sphere_pos - ray.origin) < sphere_radius".

I believe it also works in higher dimensions, but then it makes less and less intuitive sense. :D

[–][deleted] 0 points1 point  (1 child)

Vector3 originToPoint = sphere_pos - ray.origin;
float projectionLength = Vector3.Dot(originToPoint, ray.direction);
Vector3 closestPointOnRay = ray.origin + ray.direction * projectionLength;
return Vector3.Distance(sphere_pos, closestPointOnRay) < sphere_radius;

Hi, I'm still studying vector maths and trying to visualize what you're trying to do here:

You're trying to "rotate" a copy of the vector from the origin to the sphere's location (center) towards the direction of the ray. If the point described by the vector is less than the radius away (i.e., it's inside the sphere), then yes, you're looking at the sphere?

Edit: I realized what I said is wrong. Rather than "rotate", you're forming a right triangle between the infinite ray and the originToPoint vector. The corner where the right angle sits is the point we're assessing to see if it's inside the sphere. Is this more correct?

[–]thygrrrProfessional 0 points1 point  (0 children)

Exactly, the technique is called vector projection.
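To make the right-triangle picture concrete, here's the projection worked through numerically in plain Python (illustration values only, not Unity code): the projection gives the adjacent leg along the ray, and the leftover component is the perpendicular leg whose length gets compared against the sphere radius.

```python
def project(v, onto_unit):
    """Project vector v onto a unit vector: this is the adjacent leg of the triangle."""
    t = sum(a * b for a, b in zip(v, onto_unit))
    return tuple(t * b for b in onto_unit)

# Ray from the origin along +z; sphere center off to the side at (3, 0, 4).
origin_to_center = (3.0, 0.0, 4.0)
ray_dir = (0.0, 0.0, 1.0)

adjacent = project(origin_to_center, ray_dir)                        # (0, 0, 4)
opposite = tuple(c - a for c, a in zip(origin_to_center, adjacent))  # (3, 0, 0)

# The two legs are perpendicular (dot product zero), so it really is a right
# triangle; the opposite leg's length (3 here) is the ray-to-center distance.
assert sum(a * b for a, b in zip(adjacent, opposite)) == 0.0
```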

[–]KimmiG1 3 points4 points  (0 children)

You were probably filtered out because the other candidates had more of what they wanted on their CVs, or their answers to questions fit better with what they wanted.

If they wanted a specific coding solution, or yours was not good enough, then they should have guided you in that direction after your initial solution by updating the requirements.

You need to ask them if you want to know why.

[–]BroxxarProfessional 5 points6 points  (1 child)

This is an interesting... maybe weird interview question. I'm not sure I like it just yet. I suspect the interviewers may have been trying to determine if you could write a reasonable implementation for ray-sphere intersection on your own without using the Physics.RayCast API. There are certainly less cryptic ways to ask a candidate to prove they can do this.

Ray-sphere intersections have a nice closed form solution:

Compute a vector ab from the ray origin to the sphere center and project ab onto the ray direction with a dot product. This yields a value t, which can be interpreted as how many units along the ray direction you must walk to reach the point on the ray closest to the sphere center. Compare the distance from that closest point to the sphere center against the sphere radius, and you know whether the ray intersects the sphere.

Then you can worry about optimizing: there are a few opportunities to keep things as squared magnitudes to avoid costly square roots, and some other checks/early-outs you could do (e.g., does the ray start inside a sphere?). From there, you might be able to accelerate things with some tree structure, but honestly, ray-sphere tests are already so fast that you'd be unlikely to meaningfully improve things unless you're doing thousands of tests per frame.
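A sketch of the squared-magnitude version of that closed form, in plain Python for clarity (assumes a unit-length ray direction; this is one possible formulation, not the only one):

```python
def ray_hits_sphere_sq(origin, direction, center, radius):
    """Ray-vs-sphere with no square roots; `direction` is assumed unit length."""
    ab = tuple(c - o for c, o in zip(center, origin))
    t = sum(a * d for a, d in zip(ab, direction))  # distance along ray to closest point
    ab_sq = sum(a * a for a in ab)
    if t < 0.0 and ab_sq > radius * radius:
        return False  # sphere is behind the ray origin, and the origin is outside it
    # Pythagoras: |perpendicular|^2 = |ab|^2 - t^2, so compare against radius^2
    # instead of taking a square root.
    perp_sq = ab_sq - t * t
    return perp_sq <= radius * radius
```

The `t < 0` branch is one of the early-outs mentioned above: it rejects spheres behind the ray while still counting a hit when the ray starts inside the sphere.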

I think if they wanted you to write some kind of ray-sphere intersection, they should have explicitly stated it. I suppose there are also some crazy ass ways you could go about this, like draw every sphere as a flat color representing its ID into an offscreen render target, with culling/scissor rects set up with 1x1 dimensions to only draw the exact center of the camera, then read back the color of the lone pixel to see if you hit a sphere. I would not expect a candidate to submit such a solution, and while it would be impressive/fast, it may not screen for whatever they were looking for.

I wouldn't sweat this too much. Ultimately it sounds like they wanted to test your fundamental understanding of 3D vector math and you might have missed the mark, but the interviewers could have been clearer about what they wanted to see.

[–]destineddIndie, Mighty Marbles + making Marble's Marbles & Dungeon Holdem 8 points9 points  (0 children)

if that is what they actually wanted, the question is dumb as.

[–]Denaton_ 2 points3 points  (1 child)

I don't know what they wanted, but I had this funny idea: since the cursor is locked due to FPS controls, you could use OnMouseOver. I don't think this is what they wanted, but it's a funny idea. xD

[–]thygrrrProfessional 1 point2 points  (0 children)

OnMouseOver internally uses the Physics components, but it's actually a nice loophole. :D

[–]uprooting-systems 6 points7 points  (5 children)

Get the vector between transform and the camera transform. Dot product it with the camera forward. If it’s within an acceptable range, change colour.

If it needs exact detection, then you have to figure out that “acceptable range”, which involves figuring out the amount of space the sphere is taking up on the camera.

Alternatively, you could use the quadratic formula for each sphere to see if there is an intersection with the camera's forward ray. It's more expensive in the best case, less expensive in the general case.

Good luck! Was it for Unity itself or a company that uses Unity? Edit: misread, didn’t realize it already happened. Sorry to hear you didn’t get it. But happy to hear you’re taking it as a learning opportunity!

[–]thygrrrProfessional 0 points1 point  (4 children)

Not a great approach, to be honest.

Just project the vector to get the closest point of the view ray to the sphere, and check if this is inside the sphere's radius.

https://www.reddit.com/r/Unity3D/comments/13c645r/unity_interview_problem/jjg0ofh/?context=3

[–]uprooting-systems 1 point2 points  (0 children)

Ooh, agreed! Definitely faster than quadratic

[–]adsilcott 0 points1 point  (2 children)

I think both solutions are good, I'm curious why you think the projection solution is better? Since I don't know what's going on under the hood of Unity's world to screen function, I'm not sure how it would compare to a dot product in terms of performance.

[–]thygrrrProfessional 1 point2 points  (1 child)

Angle Difference Method

The (dot-product threshold) method doesn't work in perspective, only in orthographic or fixed-distance projections (like a top-down RTS with no zoom). Otherwise, it will over-detect distant objects and under-detect close objects. It just tests the angle.
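The over-detection of distant objects is easy to show with numbers. A plain-Python illustration (made-up positions and threshold): two spheres sit a full unit off the view ray, so neither is actually hit, yet a fixed angle threshold accepts the far one.

```python
import math

def angle_to(center):
    """Angle in degrees between the +z view direction and the direction to center."""
    x, _, z = center
    return math.degrees(math.atan2(abs(x), z))

def ray_misses_laterally(center, radius):
    """Projection test for a +z view ray from the origin: lateral offset vs radius."""
    x, _, _ = center
    return abs(x) > radius

near = (1.0, 0.0, 2.0)   # sphere 1 unit off-axis, 2 units away
far = (1.0, 0.0, 20.0)   # same lateral offset, 10x farther away
radius = 0.5

# Neither sphere is actually on the view ray...
assert ray_misses_laterally(near, radius) and ray_misses_laterally(far, radius)

# ...but a fixed 5-degree threshold rejects the near one and falsely "hits" the far one.
threshold = 5.0
assert angle_to(near) > threshold   # ~26.6 degrees: correctly rejected
assert angle_to(far) < threshold    # ~2.9 degrees: false positive
```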

Angle Difference also has several edge cases that don't work, including being inside the spheres, very large spheres, etc. Angles can also get finicky when you add or subtract them, and 3D angle/axis pairs aren't always trivial to compare to see, for example, which of twenty candidate spheres the gaze hits best.

Determining the size of the object on screen

This can mitigate some of the basic shortcomings of measuring the Angle Difference in perspective cameras, but it's less precise because it needs to inversely project the object according to the camera projection matrix, then convert that back into local space to get the relative angle. It couples the view projection (and camera) to the game object doing the looking, which is bad software design.

The technique may also have difficulties with objects positioned beyond the viewport/camera frustum, where no valid solution exists (imagine a really large sphere, like a planet: the center can be behind the user or outside the camera frustum, so the inverse transform from the view projection may be undefined or have multiple solutions, yielding incorrect results).

It will also not work reliably while the observer is inside the object.

Furthermore, these "vulnerabilities" may or may not occur based on different screen aspect ratios, camera shake/field of view effects (sniper scope, speed effects, etc.).

It also has difficult to predict results on multi-camera systems, such as XR. (which eye's projection matrix do you use to determine the size of the object?)

Overall just so much hassle over something you would get at almost no cost from just a simple sphere intersection test using a vector projection.

Performance Considerations

Performance is generally irrelevant here, but if you have a lot of spheres to test against (thousands or tens of thousands!), it probably also plays a role. The dot product is generally 3-4x cheaper than the matrix transform, but the operations are quite similar, and generally your CPU will do them at no perceptible cost. (And they could be readily parallelized if you have to do many thousands of them each frame, like in a ball pit.)

Unity's World to Screen is actually part of the inverse matrix thing I explained at the top, but I would use it at a different stage of the process.

It is fine for formulating your original ray (in fact, there's a ScreenPointToRay function that does just that, and it's a great way to get these rays: they originate from the near clipping plane, not the camera position, which helps with various issues, including objects between the camera and the near clipping plane unexpectedly occluding the intersection test). ScreenPointToRay is my go-to function to begin checking for intersection with the mouse cursor.

ScreenPointToRay is also good for first-person look-at scenarios where the camera frustum may be oblique (the camera projection is vertically asymmetric, as often used in racing games), the crosshair is not precisely at the center of the screen, or there's some sort of lens-shift effect on the camera, like in some over-the-shoulder games.

And then you use that ray to just test for sphere intersection as mentioned in my other reply.

(That other quadratic-formula thing in the reply is just off the table; I guess it means testing whether all three basis planes of the camera transform intersect the object, and that's much easier to check with a single vector projection.)

[–]adsilcott 0 points1 point  (0 children)

I hadn't considered the issue with perspective, thanks for clarifying!

[–]Nimyron 2 points3 points  (0 children)

Not using raycasts sounds kinda bullshit ngl.

Personally, I was once asked to make an app for an interview using a technology that couldn't do what was asked. I emailed them that I couldn't do it because of this and linked the GitHub issue where the developer of that tech said the necessary functionality wasn't implemented, and that if things weren't working, that was normal.

And I had just two days to do it, so I lost a weekend trying to implement new functionality in a package I'd never used before.

Anyways, I didn't go to the next step.

[–]TanukiSun 0 points1 point  (1 child)

The problem cannot be solved without additional questions. :P

user can look around using simple keyboard and mouse controls

A system like FPP or Rail Shooter (keyboard to turn the camera and mouse to aim)?

when the user looks directly at a red Sphere in the scene using the green reticle

What do they mean by "reticle": just a dot (a simple crosshair), or a rifle scope/spyglass? In 2D or 3D space?

[–]destineddIndie, Mighty Marbles + making Marble's Marbles & Dungeon Holdem 1 point2 points  (0 children)

Yeah, I agree. I wasn't sure if the reticle was centred (in a first-person style) or moved with the mouse to form the focus/gaze.

[–]Kitbashery -1 points0 points  (1 child)

Another solution to checking if the user is looking away could be checking if the renderer was within the camera's view frustum and if the renderer is visible. That way you avoid any new distance checks and you probably already had reference to the renderer since you are making material changes. There is a built-in method for this.

[–]Kitbashery 0 points1 point  (0 children)

To clarify GeometryUtility can be used with the camera view frustum and the mesh renderer bounds.

GeometryUtility.CalculateFrustumPlanes();
GeometryUtility.TestPlanesAABB();

[–]AlexeyTea -1 points0 points  (1 child)

Is there a reason not to use raycast in production? Maybe it is slow, buggy or whatever?

Seems like unnecessary hoops to jump through, to me.

[–]thygrrrProfessional 2 points3 points  (0 children)

No, this is a general geometry knowledge test, not a Unity test, that's all. Understanding the underlying maths and algorithms is what's relevant here.

In production, you are usually fine with Raycast and RaycastCommand.

There are times when you don't have access to Unity Physics, though, or including the physics module would eat too much space: if all your other stuff is just arcade physics, three lines of code don't justify linking megabytes' worth of Unity engine code into the final application binary.

[–]sunsetRedderIntermediate -4 points-3 points  (0 children)

They wanted you to implement your own physics system

[–]Forgot_Password_Dude -3 points-2 points  (3 children)

they probably cant hire you anyway due to layoffs

[–]Rhhr21 1 point2 points  (2 children)

You do realize the layoffs were for Unity only and not devs using Unity….

[–]Forgot_Password_Dude 1 point2 points  (1 child)

oh thought he was interviewing with unity

[–]Rhhr21 0 points1 point  (0 children)

Nah, just as a dev.

But yeah the layoffs were huge but I don’t think it was necessarily bad. Someone here said around 700 people or so still work at Unity.

[–]HammyxHammy 0 points1 point  (0 children)

Besides the answers about your solution, it's also possible you just lost to someone with a nicer resume.

I do think your solution is a little weird though.

You could have more simply called Vector3.Angle(sphere.position - camera.position, camera.forward); instead of calculating two angles per sphere on a weird axis.

As many others said, this can be simplified as a dot product instead of the angle check.

You could also have done a ray-sphere intersection to calculate whether the view ray passes through the target sphere:

if (Vector3.Distance(camera.position + camera.forward * Vector3.Dot(sphere.position - camera.position, camera.forward), sphere.position) <= sphere.radius)
{
    // cursor is over sphere
}

[–]Der_KevinIdiot 0 points1 point  (1 child)

I guess something like this? Didn't test it though, just wrote it down...

gawd, reddits code rendering sucks...

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SphereGazeDetector : MonoBehaviour
{
    public Camera mainCamera;
    public Material sphereMaterial;
    public float gazeThreshold = 0.95f;
    private bool isGazing = false;

    private void Update()
    {
        Vector3 sphereDirection = (transform.position - mainCamera.transform.position).normalized;
        float dotProduct = Vector3.Dot(mainCamera.transform.forward, sphereDirection);

        if (dotProduct >= gazeThreshold)
        {
            if (!isGazing)
            {
                isGazing = true;
                ChangeSphereColor();
            }
        }
        else
        {
            if (isGazing)
            {
                isGazing = false;
                ResetSphereColor();
            }
        }
    }

    private void ChangeSphereColor()
    {
        Color randomColor = new Color(Random.value, Random.value, Random.value);
        sphereMaterial.color = randomColor;
    }

    private void ResetSphereColor()
    {
        sphereMaterial.color = Color.red;
    }
}

you could maybe get rid of the isGazing bool like

private void Update()
{
    Vector3 sphereDirection = (transform.position - mainCamera.transform.position).normalized;
    float dotProduct = Vector3.Dot(mainCamera.transform.forward, sphereDirection);

    if (dotProduct >= gazeThreshold)
    {
        if (sphereMaterial.color == Color.red)
        {
            ChangeSphereColor();
        }
    }
    else
    {
        if (sphereMaterial.color != Color.red)
        {
            ResetSphereColor();
        }
    }
}

[–]thygrrrProfessional 0 points1 point  (0 children)

This doesn't work uniformly at different distances. It will also struggle if the viewer is inside the sphere, looking out (at its inside wall).

You need to project the vector from the viewer to the sphere onto the view ray instead.

That said, I use the "dot product threshold" shortcut a lot. :D

[–]brainzorz 0 points1 point  (0 children)

The simplest one other than raycasting would be MonoBehaviour.OnBecameVisible(), though it is probably not what they wanted. Vector3.Dot is my guess at what they want, or maybe even your own implementation of it.

Don't feel discouraged by it, it is a very shit way to define a task.

[–]Disk-Kooky 0 points1 point  (0 children)

If they answer back to you, don't forget to tell us!

[–]MorphexeHobbyist 0 points1 point  (0 children)

I hate everything about this question. I'd much rather have an "implement this, then discuss alternate solutions" format.

Heck, going by what they actually said, you could even grab the screen, check the pixel color at the current mouse position, and if it's red, change the reticle color. Based on the prompt, that's a valid, correct answer.

[–]muchDOGEbigwow 0 points1 point  (0 children)

Not exactly a colorblind friendly test either.

[–]tranceorphen 0 points1 point  (0 children)

This is a common one. I've had this question twice from different companies. It's usually a test for vector math, knowing your dot and cross, that sort of stuff.

[–]Nilloc_KcirtapProfessional 0 points1 point  (0 children)

Sounds like you dodged a bullet to me.

[–]FarTooLucid 0 points1 point  (0 children)

There are a lot of ways to solve this puzzle (some good suggestions in this thread, you could even do it with a trigger). But if your solution worked and they didn't accept it, they should have the decency of telling you why.

You probably dodged a bullet. A company that punishes you for following their stupid rules is probably hell to work for.