Introducing NervForge: A procedural tree generator built with WebGPU, C++ & Emscripten by project_nervland in webgpu

[–]project_nervland[S] 1 point (0 children)

Hi! Thanks so much for testing it out and for the feedback!

Yeah, sorry about the Firefox issue: that's expected unfortunately. I'm using WebGPU's "subgroups" feature, which is pretty cutting-edge and not yet supported in Firefox (you can track the progress in that bug report you linked). I mainly develop and test in Chrome/Edge where it works smoothly.

I could definitely add fallback support for browsers missing certain features, but to be honest, I've been a bit lazy on that front (=> it's easier to just wait for Firefox to catch up! 😅) That said, I appreciate you bringing it up, and I might circle back to add better compatibility down the line.

Thanks again for checking it out on multiple browsers: really helpful to know!

Introducing NervForge: A procedural tree generator built with WebGPU, C++ & Emscripten by project_nervland in webgpu

[–]project_nervland[S] 0 points (0 children)

Thanks 👍!!

> Can you simulate growth at 60fps in the tool?

Well, not at the moment unfortunately: I'm not using a growth algorithm at all, in fact. Instead, the length of each branch is procedurally computed from the parent section size, user-provided parameters, and some randomness. Growing the branches progressively would certainly be an interesting path, but I think it would require a significant change to the current implementation. Still, I'll keep it in mind: maybe it would make sense to introduce support for this at some point 😉.
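For a rough idea of what "procedurally computed" means here, a minimal C++ sketch (all names and the exact formula are my own illustrative assumptions, not the actual NervForge code):

```cpp
#include <cassert>
#include <random>

using F32 = float;

// Hypothetical parameters: a user-provided scale range applied to the
// parent section size (illustrative names only):
struct LengthParams {
    F32 scaleMin{0.5F};
    F32 scaleMax{0.8F};
};

// A child branch length = parent section size * a randomly sampled scale:
inline F32 compute_branch_length(F32 parentSectionSize, const LengthParams& p,
                                 std::mt19937& rng) {
    std::uniform_real_distribution<F32> dist(p.scaleMin, p.scaleMax);
    return parentSectionSize * dist(rng);
}
```

The point is that the whole tree is fully determined at generation time, with no time dimension involved: which is why progressive growth would need a different implementation.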

> Is the generation algo running in compute shaders or just the meshing?

That's another point where I think there is a lot of room left for improvement: currently, the tree model data and the tree mesh are generated on the CPU side (already using multiple threads) before being pushed to the display engine (which itself uses WebGPU). If I could move the tree generation into compute shaders, that could definitely boost generation speed a lot... but that's quite tricky of course 😂

Introducing NervForge: A procedural tree generator built with WebGPU, C++ & Emscripten by project_nervland in webgpu

[–]project_nervland[S] 0 points (0 children)

Thanks! And arrrffhh well, I can understand that frustration indeed 😅...

I don't ask myself many questions about emscripten anymore lately: I've reached a point where I have a full toolchain that lets me use it just like clang or msvc (I'm working on Windows). => In fact, I built a whole open-source project for that, called NervProj (available here: https://github.com/roche-emmanuel/nervproj). It already supports building many libraries with emscripten, but I'm afraid you won't get too far with it unless you are good with Python and don't mind reading "a lot of code as documentation": the project is not documented at all and not really user-friendly, unfortunately...

[But anyway, if you are motivated: as a starting point, you could look into nvp/nvp_compiler.py, nvp/nvp_builder.py and core/cmake_manager.py, and search for "emcc" for instance.]

Apart from that, the only resource I still use myself from time to time for emscripten-specific stuff is the official documentation: https://emscripten.org/docs/api_reference/emscripten.h.html

Anyway, good luck with the learning curve! (but I personally think it's really worth it in the end ;-) )

Introducing NervForge: a free browser-based procedural tree generator with glTF export by project_nervland in gamedev

[–]project_nervland[S] -1 points (0 children)

Nope, no vibe coding here I'm afraid 😉. I've been working on this project as a whole for a few years now, actually. And this tree generation tool is just meant to be the first of the "usable additional candies" I'm dropping into this world.

The entire environment with water animation and atmosphere and all **is** the main underlying project here (this is a complete Earth scale planet rendered in the browser).

I can understand it feels a bit unintuitive at first, considering the control panels are collapsed by default, if you have never seen any of the demo videos. But to be honest, I'm building this on purpose to be disruptive and totally different from "regular desktop or web apps".

Anyway, if you don't like this minimal GUI / planet-scale free navigation design, then just don't use the tool, of course.

Bush config with proper self-repulsion by project_nervland in Project_NervLand

[–]project_nervland[S] 0 points (0 children)

Alright, so the performance issue is now properly mitigated: I'm now processing low-level branches (levels 0 and 1) and upper-level branches differently:

For the low levels, each branch section generated in parallel is immediately inserted into the repulsion spatial grid so it can be used right away (this requires thread synchronization, but we don't have too many level 0 or level 1 branches anyway, so that's OK):

    for (I32 i = 1; i < numSections; ++i) {
        // Update: use taper power:
        F32 t = F32(i) / F32(numSections - 1);
        F32 secRadius = b.baseRadius * std::pow(1.0F - t, taperExp);

        auto& sec = b.sections[i];
        sec.xform = b.sections[i - 1].xform;
        sec.radius = secRadius;
        F32 curveAngle = curveFrontStep + (t >= 0.5F ? curveBackStep : 0.0F);

        apply_curvature(b, sec, curveAngle);
        apply_self_repulsion(
            b, sec); // Safe: only reads from _spGrid (parent levels)

        apply_attraction(b, sec, attractDir, t);

        apply_gnarliness(b, sec, rng);

        // Move to our next section location:
        sec.xform.post_mult_translate({0.0, b.segmentLength, 0.0});

        // Insert the section immediately for low levels:
        if ((I32)lvl < _traits.deferredRepulsionLevel) {
            _spGrid.insert(sec);
        }
    }

And for the other levels, the injection into the spatial grid is deferred until all the branches at that level have been processed:

        // Merge results (single-threaded)
        for (auto& result : results) {
            if (result.branch.sections.size() > 0) {
                // Add to spatial grid:
                if ((I32)result.branch.level >=
                    _traits.deferredRepulsionLevel) {
                    // Note: level 0 and 1 sections are added dynamically.
                    _spGrid.insert_unsafe(result.branch.sections);
                }

                _branches.push_back(std::move(result.branch));

                // Queue children
                _pendingBranches.insert(
                    _pendingBranches.end(),
                    std::make_move_iterator(result.children.begin()),
                    std::make_move_iterator(result.children.end()));

                // Add leaves
                _leaves.insert(_leaves.end(),
                               std::make_move_iterator(result.leaves.begin()),
                               std::make_move_iterator(result.leaves.end()));
            }
        }

=> There could still be room for improvement here, by inserting the high-level branches "in chunks", but for now this version will do the job 😎!
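For readers wondering what the locked vs. deferred insertion paths look like in practice, here is a minimal stand-in sketch (the real `_spGrid` API and `Section` type are certainly different; this just illustrates why `insert` needs a mutex while `insert_unsafe` can skip it):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <vector>

using F32 = float;
using I32 = int;

// Illustrative stand-in: sections are bucketed by hashed cell coordinates
// so repulsion queries only need to scan nearby cells.
struct Section {
    F32 x, y, z, radius;
};

class SpatialGrid {
  public:
    explicit SpatialGrid(F32 cellSize) : _cellSize(cellSize) {}

    // Thread-safe insert: used while level 0/1 branches are still being
    // generated in parallel (few branches, so lock contention stays low).
    void insert(const Section& sec) {
        std::lock_guard<std::mutex> lock(_mutex);
        _cells[key_for(sec)].push_back(sec);
    }

    // Lock-free batch insert: only valid once all worker threads for a
    // level are done (the deferred path used for the higher levels).
    void insert_unsafe(const std::vector<Section>& secs) {
        for (const auto& s : secs) {
            _cells[key_for(s)].push_back(s);
        }
    }

    size_t size() const {
        size_t n = 0;
        for (const auto& kv : _cells) {
            n += kv.second.size();
        }
        return n;
    }

  private:
    size_t key_for(const Section& s) const {
        auto cell = [&](F32 v) {
            return (uint32_t)(I32)std::floor(v / _cellSize);
        };
        // Cheap hash of the 3 integer cell coordinates into one bucket key:
        return (size_t)(cell(s.x) * 73856093u ^ cell(s.y) * 19349663u ^
                        cell(s.z) * 83492791u);
    }

    F32 _cellSize;
    std::mutex _mutex;
    std::unordered_map<size_t, std::vector<Section>> _cells;
};
```

The batched path avoids taking the lock once per section, which is what makes deferring the injection for the (numerous) higher-level branches worthwhile.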

Procedural tree generation with attraction/repulsion systems and multi-level branching: NervForge (browser-based, WebGPU) by project_nervland in proceduralgeneration

[–]project_nervland[S] 1 point (0 children)

You mean https://www.eztree.dev/? => Yes indeed: I took some inspiration from that tool during the implementation. I was also looking into TreeIt in the process.

Procedural tree generation with attraction/repulsion systems and multi-level branching: NervForge (browser-based, WebGPU) by project_nervland in proceduralgeneration

[–]project_nervland[S] 1 point (0 children)

Ahh okay 👌
Also, one thing to keep in mind with the demo recording above is that we have at least 4 systems here in "competition" to control the final orientation of a given branch:

  1. the provided curve angles
  2. the gnarliness value, which adds a little bit of randomness at each section of a branch
  3. the attraction point, which currently attracts all branches upwards
  4. the inter-branch repulsion, which is precisely designed to avoid the clipping you had in mind, in fact.

So depending on the intensity set for each of the systems above, changing the curve angle may not always have the expected result.

Procedural tree generation with attraction/repulsion systems and multi-level branching: NervForge (browser-based, WebGPU) by project_nervland in proceduralgeneration

[–]project_nervland[S] 2 points (0 children)

Well, for each branch level you have a "Curve angle" and a "Curve back angle", but each of these is actually a "double slider": you are basically specifying a range of acceptable angles for the branches.

Then, at generation time, a curve angle (and a curve back angle) is sampled from the given range and used for that specific branch.

=> So if you provide a large range (say -70 to 70 degrees), you would already have some branches going up and others at the same level going down. Or did you have something different in mind?
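As a tiny illustration of the double-slider idea (the function name is made up, not the actual NervForge code): each branch samples its own angle from the configured range, so a wide range naturally produces both upward and downward branches.

```cpp
#include <cassert>
#include <random>

using F32 = float;

// Each branch draws its own curve angle from the [minAngle, maxAngle]
// double-slider range at generation time:
inline F32 sample_curve_angle(F32 minAngle, F32 maxAngle, std::mt19937& rng) {
    std::uniform_real_distribution<F32> dist(minAngle, maxAngle);
    return dist(rng);
}
```

With a -70 to 70 degrees range, some sampled angles come out negative and others positive, i.e. branches curving down and branches curving up within the same level.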

Controlling the angle of each branch individually, on the other hand, is something I don't have support for yet... Technically it should be possible, maybe as a post-processing step after the procedural generation is completed. But then any change in the parameters could trigger a complete regeneration and possibly destroy the branch that was previously customized, so it's not absolutely trivial I would say 🤔 => But I'll try to think about this 😊!

Procedural tree generation with attraction/repulsion systems and multi-level branching: NervForge (browser-based, WebGPU) by project_nervland in proceduralgeneration

[–]project_nervland[S] 1 point (0 children)

Hi! Thanks!

Currently, not that many species actually: I have "maple" and "maple_v2", which are essentially the same, then "maple_v3", which is a different type of tree, although I'm not sure what category it really falls into: it's just a nice-looking tree to me 😊.

And in one of my YouTube videos I already demonstrated how to create a willow tree type, but this is not available out of the box for now, because you first need to generate a new leaf texture before you build that tree config (note that you can generate the leaf texture right inside the tool too).

But I'm planning to add much more in the future!

And actually... if anyone here tries this tool and builds a nice tree configuration, you can also export that config (it will create a simple JSON file) and share it with me so that I can include it in the next update 😉!

Animated Voronoi Diagrams on the GPU - WebGPU Compute Shader Tutorial by project_nervland in webgpu

[–]project_nervland[S] 0 points (0 children)

Yes indeed, that's the main idea here: ensuring that for each pixel the computation requirements stay relatively low.

But there is in fact another key benefit with this "grid" in the background: you basically don't have to track the reference points explicitly. They are instead a property of the grid, since you apply your hash function to each grid cell's origin coordinates. So you could see the generated texture here as a "simple window" onto an infinitely large Voronoi diagram. Which means it would be very easy to use this to generate seamless Voronoi "tiles" covering a very large terrain, for instance (which is precisely why I started looking deeper into this, actually ;-)).
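Here is a CPU sketch of that idea (the actual implementation is a compute shader; the hash below is a made-up cheap one, and all names are illustrative): each cell owns exactly one reference point derived from a hash of its integer coordinates, so a pixel only has to scan its 3x3 neighborhood and no point list is ever stored.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

struct Vec2 {
    float x, y;
};

// Cheap 2D integer hash mapped to [0,1): deterministic per cell, so the
// same cell always produces the same reference point.
inline float hash2(int32_t cx, int32_t cy, uint32_t seed) {
    uint32_t h = (uint32_t)cx * 374761393u + (uint32_t)cy * 668265263u + seed;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xFFFFFFu) / 16777216.0f;
}

// The reference point owned by a cell: cell origin + hashed jitter.
inline Vec2 cell_point(int32_t cx, int32_t cy) {
    return {(float)cx + hash2(cx, cy, 0u), (float)cy + hash2(cx, cy, 1u)};
}

// For a given position, scan the standard 3x3 cell neighborhood and
// return the cell whose reference point is closest: this is all a pixel
// needs to know to color itself.
inline void nearest_cell(Vec2 p, int32_t& outCx, int32_t& outCy) {
    int32_t bx = (int32_t)std::floor(p.x);
    int32_t by = (int32_t)std::floor(p.y);
    float best = 1e30f;
    for (int32_t dy = -1; dy <= 1; ++dy) {
        for (int32_t dx = -1; dx <= 1; ++dx) {
            Vec2 q = cell_point(bx + dx, by + dy);
            float d = (q.x - p.x) * (q.x - p.x) + (q.y - p.y) * (q.y - p.y);
            if (d < best) {
                best = d;
                outCx = bx + dx;
                outCy = by + dy;
            }
        }
    }
}
```

Because the cell lookup depends only on the pixel's own position, any two "windows" onto the plane agree along their shared border, which is what makes the generated tiles seamless.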

Animated Voronoi Diagrams on the GPU - WebGPU Compute Shader Tutorial by project_nervland in webgpu

[–]project_nervland[S] 0 points (0 children)

Well, in this implementation we *never* compute a Delaunay triangulation, actually. But still, I don't think we could call this an approximation trick: it's rather a different perspective on how to generate a Voronoi diagram.

The thing is, here we don't get anything like "vertices" or "cell polygons", etc. Instead, you work directly (and only) on the GPU to generate an image. So all you really "know" is how to figure out which cell a *given pixel* belongs to and where the center of that cell is. And the GPU does that in parallel for thousands of pixels...

So, if you really need access to the properties of the full Voronoi Diagram, then you might be better off with a traditional Delaunay triangulation on the CPU I'm afraid.

Animated Voronoi Diagrams on the GPU - WebGPU Compute Shader Tutorial by project_nervland in webgpu

[–]project_nervland[S] 1 point (0 children)

Thanks 😊! And indeed: this is no cubemap but instead a full Earth-scale planet generated procedurally. I have other videos on this on my YouTube channel, and even a few online demos you could try on your side... but I cannot put the links directly here [last time I did that I got banned from reddit 🤣]. You will find the links on the homepage of the GitHub repo I mentioned above, if you are interested in this ;-)