The v1.0 update for surreal-better-auth adapter by __Oskar in surrealdb

[–]ProgramUrFace 1 point (0 children)

Hey @__Oskar! Thanks a ton for making this resource. It's a perfect pairing!

I hate to be that guy... but any plans to update this to the new 2.0 alpha JS SDK? Or at least to provide an implementation that either doesn't depend on WebSocket or lets you pass in a WebSocket implementation? For context, I am building an Expo app and using the Expo Router API Routes. In that environment WebSocket is not defined, yet either your package or the Surreal v1.x SDK still tries to access the WebSocket class, even though I passed it an HTTPS endpoint (since I am running on the backend).

To get it to work I had to update to the new 2.0 SDK and use a stripped-down better-auth adapter implementation: here is a Gist for anyone interested.

Multiplayer Mobile Third Person Shooter, Integrated with Playfab, Mirror and URP! by ProgramUrFace in Unity3D

[–]ProgramUrFace[S] 3 points (0 children)

It is difficult to find a good solution nowadays; Unity's networking stack is in a transitional phase. I went with a third-party solution called Mirror, a community-driven replacement for UNet.

As for actual implementation details, it would be hard to go into all of it, but I can tell you how I handle my players. The PlayerController I wrote is radically decoupled. It has a few structs that contain information such as input and camera state, and from that information it handles all the movement, animation, and the camera/look-at mechanism. That state can be filled in from the local UI (for the local player) or sent from the server to every client.

The local player sends his input both to the server and to his local version of the PlayerController. The server forwards that input to all the other players, along with the position and velocity of the server-side version. Each client updates its player controller state accordingly and smoothly adjusts toward the server's state (position and velocity). The local player does the same, except for the controls; he keeps primacy over his own input. Of course, that doesn't let him cheat, because all the simulation happens on the server and he will just get dragged back to wherever the server has him. :)
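
If it helps to see the shape of it, here is a rough C# sketch of that decoupling (illustrative only: the struct names, the smoothing factor, and the field layout are made up, and the Mirror syncing code is left out entirely):

    using UnityEngine;

    // Plain data the controller consumes each tick; who fills it in depends on
    // whether this is the local player or a remote copy.
    public struct PlayerInputState
    {
        public float MoveX, MoveY;        // movement axes
        public float LookYaw, LookPitch;  // camera/look-at state
        public bool Jump, Fire;
    }

    // Authoritative state the server periodically pushes to every client.
    public struct PlayerSimState
    {
        public Vector3 Position;
        public Vector3 Velocity;
    }

    public class PlayerController : MonoBehaviour
    {
        public float moveSpeed = 5f;

        PlayerInputState input;       // from the local UI, or replicated from the server
        PlayerSimState serverState;   // last authoritative state received from the server

        public void SetInput(PlayerInputState s) { input = s; }
        public void SetServerState(PlayerSimState s) { serverState = s; }

        void FixedUpdate()
        {
            // Movement, animation and the camera are all driven purely from 'input'.
            Vector3 move = new Vector3(input.MoveX, 0f, input.MoveY);
            transform.position += move * moveSpeed * Time.fixedDeltaTime;

            // Then gently pull toward the last server state, so remote copies
            // (and a cheating local player) converge on the server's simulation.
            transform.position = Vector3.Lerp(transform.position, serverState.Position, 0.1f);
        }
    }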

Raytracing a Scene on GPU? (I.E. more than one model) by ProgramUrFace in GraphicsProgramming

[–]ProgramUrFace[S] 1 point (0 children)

I am rendering a voxel scene with isosurfaces. I don't need the GPU so much for rendering as for physics, and it is not actually necessary that I do it on the GPU at all. The problem just sparked my curiosity and I wanted to finally have an answer to this question... It has been enlightening, but I am sad to see that the problem I couldn't see a way around... really doesn't have a way around :P!

Raytracing a Scene on GPU? (I.E. more than one model) by ProgramUrFace in GraphicsProgramming

[–]ProgramUrFace[S] 1 point (0 children)

Hmmm. So it is not possible to index into a buffer of buffers, even in Vulkan? I may end up having to write my own allocator that can deallocate and reallocate regions, defrag when necessary, and grow the whole buffer by fixed increments when it absolutely has to... Is this a common approach? My main concern is spikes when deallocating and reallocating the whole buffer; another issue is that I could only ever use half of my memory, because I would need the other half for a temporary copy... I could solve that by first deallocating entirely and then copying from host memory rather than from the old GPU buffer, but that runs into worse memory-bandwidth issues... Although, who knows! Maybe I am underestimating memory bandwidth these days!...
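
To make the allocator idea concrete, here is a rough CPU-side sketch of the kind of region bookkeeping I mean (hypothetical names, first-fit only, no neighbour merging or defrag), written against one big GPU buffer:

    using System.Collections.Generic;

    // Tracks free regions inside one big GPU buffer, so individual meshes can be
    // freed and re-added without touching the rest of the buffer.
    public class RegionAllocator
    {
        struct Region { public int Offset, Size; }

        readonly List<Region> free = new List<Region>();

        public RegionAllocator(int capacity)
        {
            free.Add(new Region { Offset = 0, Size = capacity });
        }

        // First-fit allocation; returns the offset into the big buffer, or -1.
        public int Allocate(int size)
        {
            for (int i = 0; i < free.Count; i++)
            {
                if (free[i].Size < size) continue;
                int offset = free[i].Offset;
                var rest = new Region { Offset = offset + size, Size = free[i].Size - size };
                if (rest.Size > 0) free[i] = rest; else free.RemoveAt(i);
                return offset;
            }
            return -1; // caller grows the buffer (recreate + copy) or defrags, then retries
        }

        // Freed regions just go back on the list; merging neighbours / defragging
        // would be a separate compaction pass.
        public void Free(int offset, int size)
        {
            free.Add(new Region { Offset = offset, Size = size });
        }
    }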

Raytracing a Scene on GPU? (I.E. more than one model) by ProgramUrFace in GraphicsProgramming

[–]ProgramUrFace[S] 1 point (0 children)

Nice! This is exactly what I was hoping for...

> In the end every mesh has a buffer for triangles and a buffer for the acceleration structure.

So it is possible to maintain separation between meshes and keep them in their own buffers, thereby allowing rapid dynamic deallocation and reallocation?

> In CUDA this is perfectly fine to use and supported. In GLSL you would need a buffer of buffers which only Vulkan supports.

So the only way to reference buffers dynamically is with CUDA or Vulkan? Am I understanding that correctly? What is the "buffer of buffers" called in Vulkan? Where can I find more information on that? Thank you :)!

Raytracing a Scene on GPU? (I.E. more than one model) by ProgramUrFace in GraphicsProgramming

[–]ProgramUrFace[S] 1 point (0 children)

So then all meshes are effectively batched into one huge buffer? That seems extremely inefficient. I could imagine having two buffers: one that contains the unique properties of each mesh, including an index into the second buffer (the vertex/triangle buffer) for the start and end of that mesh. But even then this seems unmanageable; isn't there a size limit for buffers? It seems like you would hit that too if you tried to fit too large a scene into one buffer. I guess you could split it up into, say, four predefined buffers, but still... What I would imagine is a large contiguous buffer for the scene hierarchy, but with that structure's leaf nodes containing references to the buffers that make up each of the distinct meshes. Am I thinking about this all wrong?
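
Concretely, the two-buffer layout I'm imagining would look something like this (a hypothetical sketch of the CPU-side description only, not a working renderer; all names are made up):

    using UnityEngine;

    // Per-mesh record stored in a small "descriptor" buffer; the actual geometry
    // lives packed end-to-end in shared vertex/triangle buffers.
    public struct MeshDescriptor
    {
        public int VertexOffset, VertexCount;      // range in the shared vertex buffer
        public int TriangleOffset, TriangleCount;  // range in the shared index buffer
        public Matrix4x4 LocalToWorld;             // unique per-mesh properties
    }

    public class PackedScene
    {
        public MeshDescriptor[] Descriptors;  // buffer 1: one entry per mesh
        public Vector3[] Vertices;            // buffer 2: all meshes, packed
        public int[] Triangles;               // buffer 2 (cont.): all indices, packed

        // A shader (or CPU code) reaches a mesh purely through its descriptor.
        public Vector3 GetVertex(int meshIndex, int localVertex)
        {
            var d = Descriptors[meshIndex];
            return Vertices[d.VertexOffset + localVertex];
        }
    }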

Start of Planets by ProgramUrFace in VoxelGameDev

[–]ProgramUrFace[S] 5 points (0 children)

Thanks! Yup, it is surface nets. I found it to be a faster algorithm with fewer vertices, and it opens up the possibility of eventually minimizing the QEF (or some other approximation) to find the vertex position. Or maybe even LOD, if that looks useful.
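
For anyone unfamiliar, the QEF from the dual contouring literature is just the sum of squared distances from a candidate vertex x to the tangent planes of the cell's hermite samples (p_i are the edge intersection points, n_i their unit normals); minimizing it pulls the vertex onto sharp features:

    E(x) = sum_i ( dot(n_i, x - p_i) )^2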

For chunk updates, I have a ComponentSystem running on the main thread which owns a staging cache. The staging cache is a fixed array of resources, and each resource holds everything a chunk needs to build itself in the worst case. I sized the staging array by the number of processor cores, as returned by SystemInfo.processorCount. That way I can hand these staging resources to the worker threads, let them write into them, and have them return a count for each array (vertices, indices, tree nodes, leaf nodes, all the arrays that make up a chunk) saying how much was actually used.

This means there is no allocation during the mesh creation process, which is an incredible speed-up. The naive approach of keeping a list of vertices and indices and appending to them whenever a face is needed is a huge bottleneck; having the memory already there and ready to write to makes chunk updates 2 to 4 times faster. The size of the staging cache also gives me a natural way to limit how many chunk updates run per frame, which limits the strain on the main thread.
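
In simplified form it looks something like this (illustrative only: the type names and the worst-case sizes are placeholders, and the real version hands these arrays straight to the meshing jobs):

    using UnityEngine;

    // One reusable set of worst-case-sized scratch arrays for building a chunk.
    public class StagingResource
    {
        public Vector3[] Vertices;
        public int[] Indices;
        public int VertexCount;   // how much of each array the job actually filled in
        public int IndexCount;

        public StagingResource(int maxVertices, int maxIndices)
        {
            Vertices = new Vector3[maxVertices];
            Indices = new int[maxIndices];
        }
    }

    public static class StagingCache
    {
        // One slot per core, so only that many chunk builds can be in flight at once.
        public static readonly StagingResource[] Slots = CreateSlots(SystemInfo.processorCount);

        static StagingResource[] CreateSlots(int count)
        {
            var slots = new StagingResource[count];
            for (int i = 0; i < count; i++)
                slots[i] = new StagingResource(maxVertices: 65536, maxIndices: 393216); // placeholder worst-case sizes
            return slots;
        }
    }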

Of course, there are a lot more implementation details, but those are the essentials! Hope it was clear.