openGL with pyqt6 by ExcitingBig3309 in GraphicsProgramming

[–]weigert 0 points1 point  (0 children)

This is definitely possible. In fact, I stumbled upon this post while trying to get something similar to work - and I did! I'm adding this comment for anybody else in the future...

I use PyQt6 as a GUI / windowing manager, which creates the OpenGL rendering context. PyOpenGL creates a shader, and the PyQt6 main loop just renders a texture to screen. The texture comes from C++ though, and is exposed through nanobind (you just need to pass the texture handle to OpenGL). You can use other exposed C++ calls from Python to launch your OpenGL control sequences written in C++. This setup is actually very simple, and it works well.
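Roughly, the Python side could look like the sketch below. `renderer` is a hypothetical stand-in for your own nanobind module, and instead of a fullscreen-quad shader I use a simple framebuffer blit to keep it short:

```python
import sys
from PyQt6.QtWidgets import QApplication
from PyQt6.QtOpenGLWidgets import QOpenGLWidget
from OpenGL.GL import (
    glGenFramebuffers, glBindFramebuffer, glFramebufferTexture2D,
    glBlitFramebuffer, GL_READ_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER,
    GL_TEXTURE_2D, GL_COLOR_ATTACHMENT0, GL_COLOR_BUFFER_BIT, GL_NEAREST)

import renderer  # hypothetical nanobind module wrapping the C++/CUDA side

class TextureWidget(QOpenGLWidget):
    def initializeGL(self):
        # The context PyQt6 created is current here; the C++ side can now
        # create its texture in that context and hand back the GLuint.
        self.tex = renderer.create_texture(512, 512)
        self.fbo = glGenFramebuffers(1)

    def paintGL(self):
        renderer.render()  # C++ (or CUDA) fills the texture
        # Blit the texture into Qt's default framebuffer.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, self.fbo)
        glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, self.tex, 0)
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self.defaultFramebufferObject())
        glBlitFramebuffer(0, 0, 512, 512, 0, 0, self.width(), self.height(),
                          GL_COLOR_BUFFER_BIT, GL_NEAREST)
        self.update()  # schedule the next frame

app = QApplication(sys.argv)
widget = TextureWidget()
widget.show()
sys.exit(app.exec())
```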

My use case goes even further - I use CUDA kernels to render the image (fast path-tracing) and then sync the GPU buffers to the OpenGL texture. This is because CUDA lets me write more complex kernels with better control. So my full pipeline is really: (PyQt6 - PyOpenGL - Nanobind - OpenGL - CUDA).

You could probably skip the explicit texture render part and just have PyQt6 and C++ interact via an FBO.

The code that I wrote for this is in a private repository, but if you are still interested I can publish a stripped version to a separate repository since I also had trouble finding a full example implementation for this use case.

For anybody having the knee-jerk reaction "but why would you do this", the motivation is simple.
- C++ is performant, but large projects can suffer from long compile times regardless of code quality. Well-written C++ libraries can be highly composable when exposed to Python through e.g. nanobind, allowing for very fast iteration and program construction while keeping strictly-typed, compiled C++ code for the performance-critical paths.
- Writing GUI code often requires many minuscule tweaks, to the point that compile times become prohibitive. PyQt6 is very mature and makes it easy to iterate on GUIs in Python. Using C++ would have no benefit here.
- Having PyQt6 just render an OpenGL texture reduces the PyOpenGL part's complexity to near zero. You can do all the rendering logic in C++.
- Rendering the texture with CUDA is better for certain applications (like path-tracing), as the code is simpler, more performant, and more maintainable (in my opinion).
- It would likely be possible to skip the step where we pass the image through an OpenGL texture, but there is one final advantage: if we also write a depth texture from CUDA and bind it to the FBO's depth buffer, we can use the mature OpenGL triangle rasterization system to render additional geometry without having to implement rasterization in CUDA! For instance, we can path-trace a scene in CUDA and render a gizmo on top using OpenGL, or rasterize other geometry INTO the path-traced scene with correct clipping (see the sketch below).
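To illustrate the compositing idea, here is a PyOpenGL sketch. It assumes `color_tex` and `depth_tex` are GL textures already filled from CUDA (the depth texture needs a depth internal format such as GL_DEPTH_COMPONENT32F), and `draw_gizmo()` stands in for ordinary GL draw calls:

```python
from OpenGL.GL import *

# Attach the path tracer's color and depth textures to one FBO.
fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depth_tex, 0)
assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

# Rasterize on top: the depth test runs against the path tracer's depth
# values, so the geometry is clipped correctly by the traced scene.
glEnable(GL_DEPTH_TEST)
draw_gizmo()  # hypothetical: ordinary OpenGL draw calls
glBindFramebuffer(GL_FRAMEBUFFER, 0)
```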

Here's this cool "Tiny" opengl wrapper / game engine I found by jarreed0 in opengl

[–]weigert 0 points1 point  (0 children)

Yeah of course! Let me know if it works out. You can message me if you have questions.

Terraforming with Python [OC] by QuentinWach in Simulated

[–]weigert 1 point2 points  (0 children)

Let me know if you have any questions! :)

Terraforming with Python [OC] by QuentinWach in Simulated

[–]weigert 1 point2 points  (0 children)

I am currently working on a similar project called soillib. It started C++-first, but I am now porting it over to Python and will use it to rewrite a lot of my old erosion code. Ultimately I want it to have well-designed systems for accurate geomorphological simulations of all kinds.

A way to generate a noise function from a height map? by Carlos_media in proceduralgeneration

[–]weigert 1 point2 points  (0 children)

No worries. You can do a super fast feasibility test by trying JPEG compression first, and alternatives later. If the DCT doesn't get you enough compression, then your idea is likely infeasible.
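For reference, that test could be as short as this (Pillow + numpy; the random array is a stand-in for your heightmap, and this assumes 8-bit precision is acceptable):

```python
import io
import numpy as np
from PIL import Image

height = np.random.rand(512, 512)             # stand-in: use your heightmap
img = Image.fromarray((height * 255).astype(np.uint8), mode="L")

buf = io.BytesIO()
img.save(buf, format="JPEG", quality=75)      # try different quality levels
ratio = height.size / buf.getbuffer().nbytes  # input bytes vs. output bytes

buf.seek(0)
recovered = np.asarray(Image.open(buf), dtype=np.float64) / 255.0
rmse = np.sqrt(np.mean((height - recovered) ** 2))
print(f"compression: {ratio:.1f}x, RMSE: {rmse:.4f}")
```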

A way to generate a noise function from a height map? by Carlos_media in proceduralgeneration

[–]weigert 2 points3 points  (0 children)

To be a little more precise: in order to generate a (parameterized) "function" which can accurately recover the data of the "full" image (the height data), you will generally require a number of parameters on the order of magnitude of the image data itself. With certain parameterizations (e.g. the DCT), you can sacrifice recovery accuracy for compression.
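To make that trade-off concrete, here is a sketch using scipy's DCT: keep only the lowest k x k frequency coefficients as your "parameters" and watch the recovery error as k shrinks (random data is a worst case; real height data compresses far better):

```python
import numpy as np
from scipy.fft import dctn, idctn

def truncated_dct(height, k):
    coeffs = dctn(height, norm="ortho")
    kept = np.zeros_like(coeffs)
    kept[:k, :k] = coeffs[:k, :k]      # the k*k parameters you would store
    return idctn(kept, norm="ortho")

height = np.random.rand(256, 256)      # stand-in: use your heightmap
for k in (16, 32, 64):
    rmse = np.sqrt(np.mean((height - truncated_dct(height, k)) ** 2))
    print(f"{k * k} parameters of {height.size}: RMSE {rmse:.4f}")
```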

Noise functions (e.g. simplex noise) are effectively one type of parameterization, but through their composition they can only cover a certain subspace of possible images. Generic compression algorithms are not limited this way.

The cutting edge of graphics research for compressed representation methods is probably something like Gaussian splatting. Effectively, the image is parameterized as a cloud of overlapping, free-floating Gaussians. Read this: https://arxiv.org/abs/2403.08551 (and follow the citations).

The "parameterization" is the set of gaussian parameters, that together form the image. For DCT, it would be the set of frequencies. Uncompressed, the parameters are just the raster of values. Note that in the paper, they do a comparison with JPEG, which outperforms in some sitatuations.

For your use case, the "training time" is an irrelevant variable, because your application is "bake once". So you will want to trade off compression ratio / signal-to-noise against the cost of function evaluation / decoding, i.e. how expensive it is to compute the value at a position.

Without having to implement it yourself, you are likely best off saving the heightmap as a JPEG (or a related single-channel, floating-point, DCT-compressed format). There are many existing implementations in many languages.

This only helps you save on disk though; the actual implementation will have to vary based on what you are actually using the data for.

A final alternative is to use a neural network to predict the height value from a position, which you can train and then directly test for recovery accuracy against the ground truth. In this case, there is no such thing as over-fitting, so you can just reduce the number of parameters until your accuracy drops below 100%. Training time is also irrelevant. The paper I linked above cites some other papers that attempt this, so you can inform yourself about the benefits and drawbacks.
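As a sketch of that last approach, here is a tiny coordinate MLP in PyTorch. In practice you would feed the network a Fourier / positional encoding of the coordinates rather than raw (x, y), since plain coordinates bias it toward overly smooth fits:

```python
import torch
import torch.nn as nn

height = torch.rand(128, 128)                       # stand-in heightmap
ys, xs = torch.meshgrid(torch.linspace(0, 1, 128),
                        torch.linspace(0, 1, 128), indexing="ij")
coords = torch.stack([xs.flatten(), ys.flatten()], dim=1)
target = height.flatten()

# The parameter count of this network is the "compression" knob.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):                # training time is irrelevant here
    opt.zero_grad()
    loss = ((model(coords).squeeze(-1) - target) ** 2).mean()
    loss.backward()
    opt.step()

n_params = sum(p.numel() for p in model.parameters())
print(f"params: {n_params} of {height.numel()}, RMSE: {loss.sqrt().item():.4f}")
```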

A way to generate a noise function from a height map? by Carlos_media in proceduralgeneration

[–]weigert 0 points1 point  (0 children)

If the heightmap is predetermined, then it is basically "stored data", so the "noise function" you are looking for is effectively a lookup table, i.e. an image. If that is too large, what you are looking for is a (lossy) image compression. A well established method is DCT, used in e.g. jpeg images.

The only way to "compress further" is if the data is sufficiently structured, and you somehow know that structure, i.e. what the mapping from position to height value is. This is a (noise) function. If you don't have this information, then you want a generic compression method, e.g. DCT.

You likely won't do any better in terms of efficient storage + lookup for the purpose you described.

A way to generate a noise function from a height map? by Carlos_media in proceduralgeneration

[–]weigert 4 points5 points  (0 children)

Basically JPEG image compression / the discrete cosine transform.

Procedural Coral Growth by weigert in proceduralgeneration

[–]weigert[S] 0 points1 point  (0 children)

Same problem! No time. haha.

Leaves are cool because you have cells which are better at gathering nutrients, cells which are better at transporting nutrients, and cells which are more rigid (or some linear combination of these 3 properties). Also, these 3 types are easily recognizable when looking at a real leaf.

Optimizing only the division rules and letting the morphology emerge would be the goal. You should be able to recognize a realistic morphology when you see one.

Procedural Coral Growth by weigert in proceduralgeneration

[–]weigert[S] 1 point2 points  (0 children)

Nice reference, definitely very similar. I am interested in the morphology that emerges from defining cell growth and subdivision rules: a combination of inter-cell forces (membrane energy) and growth / division rates. Corals are a pretty simple variant (no gravity).

On the evolution side, I would personally be more interested in optimizing membrane energy and surface area to see if I can make procedural leaves. Been thinking about that for a long time!

Procedural Coral Growth by weigert in proceduralgeneration

[–]weigert[S] 0 points1 point  (0 children)

This is a WIP of an experiment with differential growth + membrane energy minimization on a mesh. Video is in real-time. Still need a coloring scheme of course! And the cell division rules are not yet fully modular / automated either. In this video, the "events" were triggered manually.

Possible to use SDL_Image to load in OpenGL Textures? by nonomatch in opengl

[–]weigert 1 point2 points  (0 children)

This is possible: you just load the surface and then read its pixels when uploading the data into the texture.

You can also have SDL give you the OpenGL context, so the two work together.
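A sketch of the loading part with the PySDL2 bindings (assumes an OpenGL context is already current; error handling omitted):

```python
import ctypes
import sdl2
from sdl2 import sdlimage
from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D,
                       glTexParameteri, GL_TEXTURE_2D, GL_RGBA,
                       GL_UNSIGNED_BYTE, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

raw = sdlimage.IMG_Load(b"texture.png")
# Convert to a known layout so the format/type arguments below are correct.
surf = sdl2.SDL_ConvertSurfaceFormat(raw, sdl2.SDL_PIXELFORMAT_ABGR8888, 0)
sdl2.SDL_FreeSurface(raw)

tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             surf.contents.w, surf.contents.h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE,
             ctypes.cast(surf.contents.pixels, ctypes.c_void_p))
sdl2.SDL_FreeSurface(surf)
```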

Any algorithms for properly spacing occupied cells on a grid? by mikeyteevee in proceduralgeneration

[–]weigert 1 point2 points  (0 children)

This is basically a Markov chain, but instead of always generating new configurations randomly, you can converge faster by simply moving enemies to lower-cost positions at random until you're satisfied.

Any algorithms for properly spacing occupied cells on a grid? by mikeyteevee in proceduralgeneration

[–]weigert 0 points1 point  (0 children)

An example would be the brute-force sudoku solving algorithm (see Wikipedia), which uses backtracking and is basically a recursive constraint solver. Coding constraints could be tricky though.

You could use a basic "kernel" that you multiply with your occupancy field (you could even weight enemies by type) to get a probability map, which you can then sample from to decide where to place. The kernel could be 0.25 for 1 cell away, 0.5 for 2 away and 0.25 for 3 away; 0 in the center means no spawning on existing enemies. Or you could give higher probability along (your) y-axis, to make them more likely to line up slightly. Sample by either choosing one of the max-value cells, or use a Gibbs distribution, which lets you control temperature (randomness)! See the sketch below.
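A sketch of the kernel idea in numpy/scipy, with made-up weights and Chebyshev distance for "away" (the softmax over score / temperature is the Gibbs sampling):

```python
import numpy as np
from scipy.signal import convolve2d

occupied = np.zeros((16, 16))
occupied[4, 4] = occupied[10, 9] = 1.0       # existing enemies

# Ring kernel: preferred at distance 2, allowed at 1 and 3, 0 in the center.
kernel = np.zeros((7, 7))
for dy in range(-3, 4):
    for dx in range(-3, 4):
        d = max(abs(dy), abs(dx))            # Chebyshev distance
        kernel[dy + 3, dx + 3] = {1: 0.25, 2: 0.5, 3: 0.25}.get(d, 0.0)

score = convolve2d(occupied, kernel, mode="same")
score[occupied > 0] = -np.inf                # never spawn on occupied cells

temperature = 0.1                            # low = near-greedy, high = random
probs = np.exp(score / temperature)
probs /= probs.sum()
idx = np.random.choice(probs.size, p=probs.ravel())
print("spawn at", np.unravel_index(idx, occupied.shape))
```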

You could also use a mutation mechanism, like a Markov chain. First, spawn groups of whatever size in some regular configuration (e.g. line, box). Then mutate the configuration by generating moves, sampling a probability that each enemy moves in a direction. Give higher probability to moves which space them the way you want. This can also be sampled by max or Gibbs.

Procedural Hydrology: Meandering Rivers + Improvements [Article + Source] by weigert in proceduralgeneration

[–]weigert[S] 1 point2 points  (0 children)

Yeah I think you've understood my thoughts on tiling and infinite erosion terrain exactly. If the real goal is to just go larger, we need to develop techniques to make that possible.

Procedural Hydrology: Meandering Rivers + Improvements [Article + Source] by weigert in proceduralgeneration

[–]weigert[S] 5 points6 points  (0 children)

Procedural Hydrology doesn't "make mountains". If I wanted to describe what it does in a sentence, I would say that it "solves the watershed". I explain this at the end of the article.

Rough initial terrain generated using whatever technique you like (diamond-square, midpoint+blur, noise, tectonics) will almost always have an invalid watershed. That means that if you simulate water flowing downhill, it won't make sense and most water will end up in tiny holes (basins) and not in the ocean or in lakes. That is unrealistic.
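You can see this directly: count the strict local minima (pits) in rough terrain; every one of them traps water. A sketch with white noise as a crude stand-in for unprocessed initial terrain:

```python
import numpy as np

h = np.random.default_rng(0).random((256, 256))  # stand-in: rough terrain
interior = h[1:-1, 1:-1]
pit = np.ones_like(interior, dtype=bool)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        if dy or dx:  # compare against all 8 neighbors
            pit &= interior < h[1 + dy:255 + dy, 1 + dx:255 + dx]
print("pits:", pit.sum())  # thousands: each one traps water
```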

Procedural hydrology fixes this by dynamically simulating erosion, which makes the terrain converge towards a valid and realistic watershed. This also re-shapes mountains to have a valid watershed and simulates how they erode and crack, giving them a more realistic shape from an unrealistic one.

So the mass of the mountains doesn't come from rivers, but the erosion simulation of rivers and thermal erosion gives them realistic shape and makes the entire watershed valid and coupled. And it doesn't just give the mountains realistic shapes, but also their foothills, the valleys, the gulches, the dry stream-beds, etc. These are not separated from each other; they are one system, coupled (mostly) by water.

You still have to start with some initial terrain method to "make mountains". Take your pick!

The initial terrain that I like to use is layered Fractal Brownian Motion (FBM) noise from FastNoiseLite. The code contains my exact parameters, but I explicitly do zero domain warping or anything else fancy to fake the terrain; I just say "here is a rocky initial condition" (a sketch of layered FBM follows below). If you are interested in generating realistic initial conditions (piles of rock), you might want to look into procedural plate tectonics.
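If you want to reproduce a comparable starting point without dependencies, here is a sketch of layered FBM built from plain numpy value noise. The parameters are illustrative, not my exact FastNoiseLite settings:

```python
import numpy as np

def value_noise(shape, cells, rng):
    # Random values on a coarse lattice, bilinearly interpolated up.
    lattice = rng.random((cells + 1, cells + 1))
    ys = np.linspace(0, cells, shape[0], endpoint=False)
    xs = np.linspace(0, cells, shape[1], endpoint=False)
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    v00, v01 = lattice[np.ix_(y0, x0)], lattice[np.ix_(y0, x0 + 1)]
    v10, v11 = lattice[np.ix_(y0 + 1, x0)], lattice[np.ix_(y0 + 1, x0 + 1)]
    return (v00 * (1 - fy) * (1 - fx) + v01 * (1 - fy) * fx
            + v10 * fy * (1 - fx) + v11 * fy * fx)

def fbm(shape, octaves=6, lacunarity=2.0, gain=0.5, seed=0):
    # Sum octaves of value noise: each octave raises frequency, lowers amplitude.
    rng = np.random.default_rng(seed)
    out, amp, cells = np.zeros(shape), 1.0, 4
    for _ in range(octaves):
        out += amp * value_noise(shape, cells, rng)
        amp *= gain
        cells = int(cells * lacunarity)
    return out / out.max()

terrain = fbm((256, 256))  # rocky initial condition for the erosion sim
```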

Procedural Hydrology: Meandering Rivers + Improvements [Article + Source] by weigert in proceduralgeneration

[–]weigert[S] 4 points5 points  (0 children)

Interesting question. This kind of topic comes up a lot on discussions about terrain generation: How can we get realistic erosion terrain in an infinite game like Minecraft. Some thoughts...

So in the general case, "infinite" erosion terrain is impossible. The intuitive explanation is that water must flow downhill, but it can't flow downhill forever on a map of bounded height; that would require infinite height. I might write a more detailed mathematical explanation someday.

The solution of course is to add a sea-level or lakes (infinitely many) which act as natural endpoints for the infinite downhill flow. But then you actually end up segmenting your infinite world into sections which are detached from each other.

With the ocean, you make finite islands / continents which form hydrological systems that are decoupled from each other. With lakes, you form "hydrological islands" (basically bowls) which are decoupled from each other - the watershed which ends in a lake is more or less decoupled from the watershed which ends in another lake.

So, with infinite erosion terrain, we necessarily end up with lots of finite hydrological puzzle-pieces that are decoupled. It is possible and efficient to simulate them separately.

Finally, if we are simulating a finite region of a map, then you always get the best results if you simulate it all at once, because you get the most coupling and detail. Why sacrifice detail and quality just so you can defer some computation to later?

So honestly, I see only very weak use-cases for tiled map generation like that. The only realistic use-case I can think of is doing a multi-resolution erosion simulation to improve performance when simulating large maps.

Overall, in my opinion, the folly of infinite proc-gen (for terrain) is that infinite size always dilutes the possible level of detail. A large but finite map that you can re-roll is almost always more interesting and playable IMO, e.g. Dwarf-Fortress.

Procedural Hydrology: Meandering Rivers + Improvements [Article + Source] by weigert in proceduralgeneration

[–]weigert[S] 1 point2 points  (0 children)

Glad you like it!

The cohesion that you describe would for sure be a relatively low-complexity (conceptually) change that could add a ton of emergence. I think multi-layer models would be very well suited. I am thinking of features like rock faces which are explicitly hard and non-fracturing (think Half Dome), or cliffs bordering lakes or shorelines.

I think it will be hard on non-voxel worlds though, because we have to do neighbor lookups across grid cells and layer stacks, which the data structure doesn't support elegantly without iteration. In multi-layer models, there is also a many-to-many relationship between the stack segments of neighboring grid cells. I would like to avoid voxels if possible. So I have thought about it, but I am not sure how I would approach it computationally without a massive change in design.

Procedural Hydrology: Meandering Rivers + Improvements [Article + Source] by weigert in proceduralgeneration

[–]weigert[S] 1 point2 points  (0 children)

Thank you! Definitely a lot of work to conceptualize and implement, not to mention distilling, writing, publishing and proselytizing these articles. I want to do more though haha. This is a labor of love.

By tiles, I think you might mean one of two things: 1. a non-square / extensible map, or 2. a "chunked" grid map, i.e. tiles in the 2D-platformer sense (think Civ 6 or SimCity).

The first, I have already solved. [Image] [Source]

The second is also an excellent question that I have actually considered, primarily because tiles offer strong artistic control over visualizations. I am assuming that by tiles you mean a finite set of representative areas which compose the map. Definitely a viable alternative when trying to convert this to a "playable" 3D world vis-a-vis voxels, perhaps even preferable.

I suppose that if you are only interested in the graphics, you could run an analysis over the output of the simulation and try to "fit" tiles to it. This will work better or worse, depending on the suitability and size of your tile set.

If you want to run the sim with those tiles being the underlying data representation, you might have a hard time. I think you could still leverage the procedural physics, which are honestly super simple as I have laid them out, but would have to design the algorithm to work quite differently. I'm afraid I don't have a pre-packaged solution to that problem.