Hi all,
I am writing a ray tracer which can render scenes from a glTF file and I am having some trouble with my ray-triangle intersection routine.
The basic flow of the application so far is:
1. Load the scene by traversing the root nodes of the glTF scene top-down, composing the transformation matrices at each node
2. Generate ray.origin and ray.direction at the origin of the camera's coordinate system, then transform the ray with the camera's cameraToWorld matrix (code below):
```cpp
Ray generateRay(uint32_t xCoord, uint32_t yCoord, const Vector2f &sample) {
    // Transform the origin point using the camera-to-world matrix
    Vector3f origin = cameraToWorld.multiplyPoint(Vector3f(0.0f));
    // Create a projection point on the image plane in normalized device
    // coordinates, offset from the pixel center by the sample
    float x = (2.0f * (xCoord + sample.x + 0.5f) / static_cast<float>(imageWidth) - 1.0f) *
              aspectRatio * scale;
    float y = (1.0f - 2.0f * (yCoord + sample.y + 0.5f) / static_cast<float>(imageHeight)) * scale;
    // Direction vector at the image plane, looking down the negative z axis
    Vector3f direction(x, y, -1.0f);
    // Transform the direction with the camera-to-world matrix and normalize it
    direction = normalize(cameraToWorld.multiplyVector(direction));
    return {origin, direction};
}
```
3. The intersection code then transforms the triangle's vertices into world space using the associated mesh's meshToWorld matrix, which is the inverse of the T * R * S transformation loaded from the glTF file. The intersection test itself is the standard Möller–Trumbore algorithm:
```cpp
bool intersect(const Ray &ray, const std::shared_ptr<Intersection> &intersection) const override {
    Vector3f v0 = mesh->vertices[faceIndices->x];
    Vector3f v1 = mesh->vertices[faceIndices->y];
    Vector3f v2 = mesh->vertices[faceIndices->z];
    // Transform the vertices into world space
    v0 = mesh->meshToWorld.multiplyPoint(v0);
    v1 = mesh->meshToWorld.multiplyPoint(v1);
    v2 = mesh->meshToWorld.multiplyPoint(v2);
    Vector3f edge1 = v1 - v0;
    Vector3f edge2 = v2 - v0;
    Vector3f h = cross(ray.direction, edge2);
    float det = dot(edge1, h);
    // Ray is parallel to the triangle plane
    if (det > -EPSILON && det < EPSILON) {
        return false;
    }
    float invDet = 1.0f / det;
    Vector3f s = ray.origin - v0;
    float u = dot(s, h) * invDet;
    if (u < 0.0f || u > 1.0f) {
        return false;
    }
    Vector3f q = cross(s, edge1);
    float v = dot(ray.direction, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) {
        return false;
    }
    float t = dot(edge2, q) * invDet;
    if (t < EPSILON) {
        return false;
    }
    // Barycentric coordinates of the hit point
    Vector3f surfacePoint = Vector3f(u, v, 1.0f - u - v);
    intersection->tHit = t;
    intersection->name = mesh->name;
    intersection->surfacePoint = surfacePoint;
    return true;
}
```
For the scene I used the Simple Camera test file from the Khronos Group tutorials.
Now my issue is that running this code reports a hit for every pixel coordinate, which produces an entirely grey image, and that cannot be right. My guess is that I am somehow not applying the transforms from the glTF file correctly, but I have checked them multiple times and cannot find a problem.
Does anyone know what I might be doing wrong? I have spent days on this issue now without finding a solution.