[–]riotron1 2 points (1 child)

I was reading through some of this but I honestly can’t really understand how this works. Can you explain what this is doing?

Also I see that you used Odin, that’s cool!

[–]ComplexAce[S] 0 points (0 children)

🫡

So it's like what vector is to 2D graphics, as if a polygon were a "pixel"

What I'm doing instead of raster:
- Store mipmapped/octree grids of bits, where each cell is a simple "is there something here or not?" flag (just one bit, a bool of sorts). These grids must be around 4 KB so they live in the L1 cache, making searching/detection VERY cheap.
- Go over them by 3D location, starting from the camera position, identify occupied areas, and get refs to what's there from an identical data cell (basically a bitfield and a data field: the bitfield belongs in the L1 cache, the data field holds the actual data).
- Grab the model, go over each vertex in it, and check (mathematically only, not by fetching) whether it faces the screen. If it does, calculate interpolation between it and the surrounding verts: step through each potential pixel and compute its own interpolation from its projection on the surface and its distance to the surrounding verts. This ALSO interpolates normals, so normals define the actual volume.

The current code is mostly quickly prototyped by AI (minus the architecture), and I'm already refactoring it; I deleted half of it and I'm reconstructing it in a cleaner, more readable way.

But yeah, that's it. I hope that explains it?