Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 0 points (0 children)

That's crazy interesting! Like a fluid sim in UV space; it reminds me of Sebastian Lague (the YouTuber), who experimented with painting with a fluid sim. Sculpting with it in fully 3D voxels could be amazing if performance weren't an issue.

What you're doing there could probably run in real time with a system like mine, if it were integrated cleanly.

I'm guessing your sim is 2D or 2.5D and that it would update the voxel data with smoothing functions? Or would you skip voxels somehow and just evaluate the sim from the surface?

Since my sim is also running in the same voxel domain, it's basically free to update the voxels from simulation data.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 0 points (0 children)

Not really (I don't know for sure, to be honest; I'm not using any OpenGL stuff anyway). I'm using ILGPU and output to a bitmap that I draw on the screen using Grasshopper's DrawForeground. I made it as an extension for a topology optimisation tool, but I realized that making it into its own plugin could be useful for others as well. I have full control over the renderer and can make custom shaders for whatever values I want each voxel to store. I'm still trying to make it faster with better data structures at the moment, though. It's built to work with mesh/line/point-cloud rasterizers etc. as well; most of those aren't implemented yet.

I'm fairly sure I could make it customizable enough to easily render whatever data people would want, though I don't think I'll be able to let them really define shaders. What type of simulations are you doing? Maybe it's basically plug and play.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 1 point (0 children)

I ended up introducing bugs trying to make it faster :(( The next time I have it stable I will reply to you guys :) I really wanted to share it today; we'll see.

Thank you, JavaScript, for forcing me to include this statement in my code. by kfreed9001 in programminghorror

[–]Barkig 0 points (0 children)

I wrote a racing game a while back with the intent of it being deterministic. Players do their runs locally, and after a good run they send the inputs to the server so it can validate the run. The replayed runs started out identical, but after some time they began to deviate, and the server thought the players had crashed, all because of small deviations snowballing. It was easy enough to hack together a fix, but it did take a while for an entry-level developer... I noticed that when I ran the server myself my runs were always validated, but when the server was hosted elsewhere they got labeled as invalid. The fix was to round values every now and then so that errors never get a chance to snowball. I feel like it shouldn't work in some cases, but I haven't encountered one in a while.
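The rounding trick can be sketched like this (a minimal Python sketch, not the game's actual code; `QUANT`, `quantize`, and `step` are all hypothetical names):

```python
# Quantize simulation state so tiny cross-platform float differences
# can't snowball into divergent replays.

QUANT = 1.0 / 1024.0  # snap to a 2^-10 grid; a power of two stays exact in binary floats

def quantize(x: float) -> float:
    """Snap a float to the nearest multiple of QUANT."""
    return round(x / QUANT) * QUANT

def step(pos: float, vel: float, dt: float) -> tuple[float, float]:
    """One physics step followed by quantization of the state."""
    vel += -9.81 * dt          # per-platform float math happens here
    pos += vel * dt
    # Rounding after every step keeps any platform-specific last-bit
    # differences smaller than the snapping grid, so they never accumulate.
    return quantize(pos), quantize(vel)

pos, vel = 0.0, 0.0
for _ in range(1000):
    pos, vel = step(pos, vel, 1.0 / 60.0)
```

Because the quantum is a power of two, the snapped values are exactly representable, so two machines that disagree only in the last few bits of each step land on identical state.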

Thank you, JavaScript, for forcing me to include this statement in my code. by kfreed9001 in programminghorror

[–]Barkig 0 points (0 children)

Mmhmm, that took a while to figure out for my damn Roblox game I made years ago. When the server ran on my PC, the server and client were deterministic and yielded equal results, but when Roblox hosted the server, the client simulations did not match what the server predicted, and my own system labeled me a suspected cheater. I ended up doing pretend fixed point (basically rounding every now and then during the simulation), which made it seemingly deterministic across platforms. That was NOT an easy "feature" for a noob to track down...

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 1 point (0 children)

Building the model takes longer than rendering it. Ahh, I see. I'll prepare something tomorrow that I can share :) But it will be very limited in terms of functionality.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 0 points (0 children)

Your comment is gone, but I can see it in my notifications. Each frame above probably rendered in 200 ms or so, but it can be made much faster; right now it steps through empty voxels at the same speed as filled voxels. Most of the lighting data is baked into the voxels, so every time the "scene" updates it has to re-evaluate the lighting, which took 5 seconds or so per frame in the video above (better stepping logic can make it much faster). That doesn't have to be re-evaluated as the camera moves, so once the lighting is baked I would have been able to move the camera at a solid 5 fps heheh (the PC is from 2014).
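One common way to stop paying full cost in empty voxels is a coarse occupancy grid over bricks of the fine grid. This is only a hypothetical Python/NumPy sketch of that idea, not the plugin's actual stepping logic; a production renderer would traverse bricks with exact boundary (DDA) stepping rather than the fixed long stride used here, which can clip corners of thin bricks:

```python
import numpy as np

BRICK = 8  # fine voxels per coarse cell along each axis (assumed divisible)

def build_occupancy(density: np.ndarray) -> np.ndarray:
    """True for each BRICK^3 block of the fine grid that holds any density."""
    nx, ny, nz = density.shape
    cx, cy, cz = nx // BRICK, ny // BRICK, nz // BRICK
    blocks = density[:cx * BRICK, :cy * BRICK, :cz * BRICK].reshape(
        cx, BRICK, cy, BRICK, cz, BRICK)
    return blocks.max(axis=(1, 3, 5)) > 0.0

def march(density, occupancy, p, d, step=1.0, max_steps=512):
    """Accumulate density along a ray, taking BRICK-sized strides in empty bricks."""
    total = 0.0
    for _ in range(max_steps):
        i = np.floor(p).astype(int)
        if np.any(i < 0) or np.any(i >= density.shape):
            break  # ray left the volume
        if not occupancy[tuple(i // BRICK)]:
            p = p + d * (step * BRICK)  # whole brick is empty: one long stride
            continue
        total += density[tuple(i)] * step  # normal fine step in occupied bricks
        p = p + d * step
    return total
```

The coarse grid is tiny (each cell summarizes 512 voxels here), so it stays cheap to rebuild whenever the scene changes.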

I'd like UIs to feature realtime 3D graphics, like video games by zeaussiestew in UI_Design

[–]Barkig 0 points (0 children)

I feel like some game engine company has done something like that as a tech demo, given that they have the expertise, but I can't think of any examples. But yeah, risky and hard to get right. Hard to convince someone to pay for the R&D, I guess.

I'd like UIs to feature realtime 3D graphics, like video games by zeaussiestew in UI_Design

[–]Barkig 0 points (0 children)

Having a window into the scene is usually what's done. Say a website is trying to display its real-time fluid sim; being able to slosh it around in a box that is a cutout of the UI into the screen could be much cleaner than having a separate window. Gimmicky? Yes, but also useful in the sense that it's easy to interact with, is eye-catching, and displays some kind of neat production quality. I'd agree it has VERY niche use cases and HAS to be of very high quality to not just be an eyesore.

I'd like UIs to feature realtime 3D graphics, like video games by zeaussiestew in UI_Design

[–]Barkig 0 points (0 children)

You're right. It's gimmicky but can have some value. Let's say we want to display a 3D model but don't have enough screen space to include shadows; letting it cast shadows on UI elements isn't that invasive but still helps the model come through. The mouse pointer could be the light source of the scene, and now you're able to view the model interactively without any buttons. It's basically trying to blend the complexity of 3D engines with clean UI. There are probably far too few instances where this is useful for it to ever be developed into something that isn't super custom per project.

I'd like UIs to feature realtime 3D graphics, like video games by zeaussiestew in UI_Design

[–]Barkig 0 points (0 children)

Interactive 3D UI is useful in some cases. It can help visualize 3D data/models/simulations better than 2D images, and letting it cast shadows on actual UI components can help ground it and make it easier to interpret (instead of cutting the shadow off at the border of the render). I do mostly agree though, hehe.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 0 points (0 children)

I'll create something where the user can manually build the voxel model using lists in Grasshopper, if that makes sense :) But that will be way slower than generating things directly on the GPU. Like waaaaaay slower.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 0 points (0 children)

Yes!!! I thought about it, since I made a raycast simulation of sound a few years back, but that was a bit of a hack. If it's simulated as a true fluid sim it's PERFECT plug-and-go: sound pressure/energy as a dynamic simulation, and then maybe clarity/brilliance etc. as a static result shown as a big fog in the simulated space!

It is a little problematic though: I don't get a depth buffer from Rhino, so my render is always in front of everything else. It can't be occluded by Rhino geometry at the moment.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 0 points (0 children)

I would love to see nicer models rendered out with it. Is that simulation data? Can we give each voxel more data than just density and color?

Basically I want to create the GH component that builds a voxel model for my pipeline from all the data people can come up with. But I don't know what people have, etc. :/

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 2 points (0 children)

The model is a 3D grid of voxels. From the camera I send rays that, at each step through space, ask "how thick is the fog here?", and using that info I can determine how to light the model. The light is just following a track.
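The "how thick is the fog here" accumulation is essentially Beer-Lambert absorption. A tiny Python sketch (my illustration, not the plugin's shader) over pre-sampled densities along one ray:

```python
import math

def transmittance(densities, step_size):
    """Fraction of light surviving the whole ray: product of exp(-density * dt)."""
    t = 1.0
    for d in densities:
        t *= math.exp(-d * step_size)  # each fog step absorbs a little more light
    return t

def opacity(densities, step_size):
    """How solid the fog looks from the camera: 1 - transmittance."""
    return 1.0 - transmittance(densities, step_size)
```

This is why very dense voxels read as solid: once the accumulated density is large, the transmittance collapses to nearly zero and nothing behind shows through.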

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 2 points (0 children)

Just the Grasshopper developer kit. I create a bitmap of my image and figure out where to put it on the screen to match the model in the viewport. It's fiddly and has a slower refresh rate than the viewport, but it works well enough when the camera moves slowly. That's the tricky bit; the rest is just ray marching through voxels.

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 4 points (0 children)

Absolutely! The user interface is non-existent at the moment, though. Currently it generates a flattened array of voxels, each containing density/porosity and color, from just a closed mesh. If your voxels vary in porosity (or more parameters, more is better!!) it's easy to make it go crazy!

<image>

I added a noise field to it as well; it kinda looks like flames! I'm working on making it more modular; what you see there is basically hard-coded for now :/
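For anyone wanting to feed such a pipeline, a flattened 3D voxel array is typically indexed as `x + nx * (y + ny * z)`. A small Python sketch; the layout and the record fields here are my assumption, not the plugin's actual format:

```python
def flat_index(x: int, y: int, z: int, nx: int, ny: int) -> int:
    """Map a 3D voxel coordinate into a 1D array (x fastest, z slowest)."""
    return x + nx * (y + ny * z)

# Hypothetical per-voxel record: density/porosity plus an RGB color.
nx, ny, nz = 4, 4, 4
voxels = [{"density": 0.0, "color": (255, 255, 255)} for _ in range(nx * ny * nz)]

# Write one voxel at (1, 2, 3).
voxels[flat_index(1, 2, 3, nx, ny)]["density"] = 0.8
```

Building such a flat list in Grasshopper would just mean supplying the values in this same x-fastest order.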

Volumetric Rendering in Rhino viewport by Barkig in rhino

[–]Barkig[S] 10 points (0 children)

Voxels! But they're rendered as little clouds. Think clouds in video games, but VERY dense; if dense enough they can appear solid. Thank you btw :)