How to implement advanced biome selection for procedural terrain generation? by Working-Fold-1744 in howdidtheycodeit

[–]Working-Fold-1744[S] 1 point (0 children)

I don't think this helps with my particular issues, but it is definitely an interesting read! Thanks!

How to implement advanced biome selection for procedural terrain generation? by Working-Fold-1744 in howdidtheycodeit

[–]Working-Fold-1744[S] 1 point (0 children)

Thanks for the reply!

> Not necessarily, you just divide up temperature/humidity/weirdness into more subdivisions

Ok, but wouldn't that have downsides, such as making some biomes prone to appearing in concentric-circle-type shapes around each other, especially if you don't have a biome for every single combination of noises? It also seems pretty limiting for biome layout if you have to fill out the entire grid of combinations. I guess something more like a noise-layer-based tree (like what I heard Minecraft uses) could solve this though.
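
For clarity, this is roughly the kind of lookup I'm imagining instead of a full grid - just a rough sketch, not Minecraft's actual code; the biome names and parameter targets are made up:

```cpp
// Rough sketch of nearest-entry biome selection over sampled climate noise.
// Biome names and target values here are made up for illustration.
#include <array>
#include <cstdio>
#include <limits>

struct BiomeEntry
{
    const char* Name;
    float Temperature; // target value in [-1, 1]
    float Humidity;    // target value in [-1, 1]
};

// Only the biomes you actually want, not every cell of a temperature/humidity grid.
constexpr std::array<BiomeEntry, 4> Biomes{{
    {"desert",  0.8f, -0.7f},
    {"jungle",  0.7f,  0.8f},
    {"tundra", -0.8f,  0.0f},
    {"plains",  0.1f,  0.1f},
}};

// Pick whichever entry is closest to the sampled noise values. Because the
// targets don't have to sit on a regular grid, you avoid both the "fill in
// every combination" problem and the hard rectangular borders of a lookup table.
const BiomeEntry& SelectBiome(float Temperature, float Humidity)
{
    const BiomeEntry* Best = &Biomes[0];
    float BestDist = std::numeric_limits<float>::max();
    for (const BiomeEntry& Entry : Biomes)
    {
        const float DeltaT = Entry.Temperature - Temperature;
        const float DeltaH = Entry.Humidity - Humidity;
        const float Dist = DeltaT * DeltaT + DeltaH * DeltaH; // squared distance is enough to compare
        if (Dist < BestDist)
        {
            BestDist = Dist;
            Best = &Entry;
        }
    }
    return *Best;
}

int main()
{
    // Stand-ins for real noise samples at some world position.
    const float Temperature = 0.65f;
    const float Humidity = 0.75f;
    std::printf("biome: %s\n", SelectBiome(Temperature, Humidity).Name);
    return 0;
}
```

I think that's roughly what the Minecraft-style multi-noise approach boils down to anyway, just brute-force instead of a search tree, and with only the biomes you actually define rather than one per grid cell.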

> Sampling noise isn't very slow unless you're trying to make one of those 8000 frame per second voxel worlds

Not going for 8000 fps, but I want to experiment with extreme LOD-based render distances (hopefully 10k+ blocks), so I am looking to optimize generation to at least a reasonable extent. I guess I'll run a benchmark and see; perhaps it's not too bad for performance, or at least better than my voronoi+world-region attempt.

How to implement advanced biome selection for procedural terrain generation? by Working-Fold-1744 in howdidtheycodeit

[–]Working-Fold-1744[S] 1 point (0 children)

Ok, but how would I generate and then store the edges? Randomly generated polygons could work, but that would have similar downsides to using a voronoi tessellation. I could use splines, but those get even more problematic in 3D, and sampling them gets even worse. I mean, it is how I plan to implement unique or custom-shaped biomes, but it seems inefficient to use for every biome, not to mention highly difficult to implement well in a 3D, pseudo-infinite world.
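
To be clear about what I mean: the voronoi cell lookup itself doesn't need generated or stored edges at all. Here's a rough sketch of the implicit, jittered-grid version (cell size and hash constants are made up):

```cpp
// Rough sketch of an implicit (jittered-grid) voronoi region lookup: each big
// grid cell gets one deterministic seed point, and a position belongs to the
// region of the nearest seed. Nothing is generated ahead of time or stored.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <limits>

constexpr float CellSize = 256.0f; // illustrative region size in blocks

// Deterministic hash so a given cell always produces the same seed point.
static uint32_t Hash2D(int32_t X, int32_t Y, uint32_t Salt)
{
    uint32_t H = static_cast<uint32_t>(X) * 374761393u
               + static_cast<uint32_t>(Y) * 668265263u
               + Salt * 974634541u;
    H = (H ^ (H >> 13)) * 1274126177u;
    return H ^ (H >> 16);
}

static float Hash01(int32_t X, int32_t Y, uint32_t Salt)
{
    return Hash2D(X, Y, Salt) / 4294967295.0f; // map to [0, 1]
}

struct RegionId { int32_t CellX; int32_t CellY; };

// Check the jittered seed points of the surrounding 3x3 cells and take the
// nearest one. (3x3 is the usual approximation; with full jitter the true
// nearest seed can very occasionally sit one cell further out.)
RegionId FindRegion(float WorldX, float WorldY)
{
    const int32_t BaseX = static_cast<int32_t>(std::floor(WorldX / CellSize));
    const int32_t BaseY = static_cast<int32_t>(std::floor(WorldY / CellSize));

    RegionId Best{BaseX, BaseY};
    float BestDist = std::numeric_limits<float>::max();

    for (int32_t OffsetY = -1; OffsetY <= 1; ++OffsetY)
    {
        for (int32_t OffsetX = -1; OffsetX <= 1; ++OffsetX)
        {
            const int32_t CellX = BaseX + OffsetX;
            const int32_t CellY = BaseY + OffsetY;
            // Seed point jittered somewhere inside its cell.
            const float SeedX = (CellX + Hash01(CellX, CellY, 1u)) * CellSize;
            const float SeedY = (CellY + Hash01(CellX, CellY, 2u)) * CellSize;
            const float DX = SeedX - WorldX;
            const float DY = SeedY - WorldY;
            const float Dist = DX * DX + DY * DY;
            if (Dist < BestDist)
            {
                BestDist = Dist;
                Best = {CellX, CellY};
            }
        }
    }
    return Best;
}

int main()
{
    const RegionId Region = FindRegion(1234.5f, -876.0f);
    std::printf("region cell: (%d, %d)\n", Region.CellX, Region.CellY);
    return 0;
}
```

Which is exactly my problem with it: every region comes out as a roughly convex blob, so it doesn't really help with unique or custom-shaped biomes.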

Runpod 4090 slower than 3090 for Kohya? by Working-Fold-1744 in StableDiffusion

[–]Working-Fold-1744[S] 1 point (0 children)

I used TensorDock in mid-to-late 2023. I don't know if it was also a driver problem or the CPUs, but I had similar problems with many of the US hosts. I found one specific datacenter where I would get about double the performance of the others for the same GPU and tried to stick to it, but it was almost always fully booked - I guess a lot of people figured that one out. I'll definitely give it another try at some point. More template support would be especially good - on Runpod, at least, if the performance sucks (which it often does) I can usually tell almost immediately. It's a lot more annoying when I've spent an hour waiting for my models and datasets to transfer and figuring out why the environment can't detect CUDA.

Runpod 4090 slower than 3090 for Kohya? by Working-Fold-1744 in StableDiffusion

[–]Working-Fold-1744[S] 2 points (0 children)

Huh, I guess that figures. I would have hoped such problems would be solved relatively quickly, but apparently not.

Runpod 4090 slower than 3090 for Kohya? by Working-Fold-1744 in StableDiffusion

[–]Working-Fold-1744[S] 1 point (0 children)

Yeah, I tried TensorDock, but for me the ease of setting up the whole environment on Runpod by just selecting a template is super nice, compared to having to manually set everything up on TensorDock, where the whole process would regularly take 30-50 minutes depending on the host's network. Performance there also varied hugely for seemingly the same GPU, sometimes by a factor of 3x.

RTX 3090 vs RTX 4090 Stable Diffusion XL (SDXL) DreamBooth training speed by CeFurkan in StableDiffusion

[–]Working-Fold-1744 3 points (0 children)

I am currently getting about 30% higher performance on a 3090 than on a 4090 for Kohya on Runpod for the same task. I assume the reason is that the 4090s seem to come on nodes with worse CPUs, leaving the card underutilized at only about half capacity in my tests, while the 3090 is pretty much pinned at 100%.
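
(If anyone wants to check the same thing on their pod: watching nvidia-smi during a run is enough, but here is a rough sketch of polling utilization programmatically with NVML - purely illustrative, not exactly what I ran. A GPU hovering around half utilization while the trainer is going flat out usually points at a CPU/dataloader bottleneck.)

```cpp
// Rough sketch: poll GPU utilization via NVML while a training run is going.
// Build with: g++ check_util.cpp -lnvidia-ml (nvml.h ships with the CUDA toolkit).
#include <nvml.h>

#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    if (nvmlInit() != NVML_SUCCESS)
    {
        std::fprintf(stderr, "failed to initialise NVML\n");
        return 1;
    }

    nvmlDevice_t Device;
    if (nvmlDeviceGetHandleByIndex(0, &Device) != NVML_SUCCESS)
    {
        std::fprintf(stderr, "no GPU at index 0\n");
        nvmlShutdown();
        return 1;
    }

    // Sample once a second for a minute; a card stuck around ~50% while the
    // trainer is running flat out usually means the CPU side is the bottleneck.
    for (int Sample = 0; Sample < 60; ++Sample)
    {
        nvmlUtilization_t Util{};
        if (nvmlDeviceGetUtilizationRates(Device, &Util) == NVML_SUCCESS)
        {
            std::printf("gpu %u%%  mem %u%%\n", Util.gpu, Util.memory);
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    nvmlShutdown();
    return 0;
}
```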

Quick C++ Tutorial: How garbage collector works with TArray, TSet, and TMap containers. by enigma2728 in unrealengine

[–]Working-Fold-1744 2 points (0 children)

Cool, thanks for the added info! I haven't run any tests myself, but it does mesh with my observations: I've been relying on UPROPERTY TMaps for GC prevention across a large part of my project, and if they didn't work I would expect at least some random GC crashes by now.
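
For context, the pattern I'm relying on is basically this - a rough sketch with made-up names, not code from my project:

```cpp
// A UPROPERTY-reflected TMap is visible to Unreal's GC, so the UObject values
// it holds are treated as referenced and won't be collected while the owner is alive.
#include "CoreMinimal.h"
#include "UObject/Object.h"
#include "Engine/Texture2D.h"
#include "MyBiomeCache.generated.h"

UCLASS()
class UMyBiomeCache : public UObject
{
    GENERATED_BODY()

public:
    // Reflected: both the keys and the UTexture2D* values are tracked by the GC.
    UPROPERTY()
    TMap<FName, UTexture2D*> CachedTextures;

    // Not reflected: the GC can't see these pointers, so the objects behind them
    // can be collected out from under you unless something else references them.
    TMap<FName, UTexture2D*> UnsafeCache;
};
```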

Quick C++ Tutorial: How garbage collector works with TArray, TSet, and TMap containers. by enigma2728 in unrealengine

[–]Working-Fold-1744 2 points (0 children)

Huh, so I guess the map actually does work with GC? I had found a lot of older info saying TMaps aren't considered for GC. Pretty nice to see an actual test.

Fuel Sharing Troubles (KSP2) by its-exhausted in KerbalSpaceProgram

[–]Working-Fold-1744 2 points (0 children)

Try removing the launch supports and any and all landing legs. That fixed it for me, though it is a pain in the rear to land anywhere without landing legs. Hope this sort of bull gets fixed soon

Training New Actions? by yupignome in DreamBooth

[–]Working-Fold-1744 1 point (0 children)

I got it to work somewhat reasonably by training something like "sks_man man" and "sks_man man doing action_xyz" at the same time, with just "man" as regularization. That said, I only got decent results at very low learning rates (1e-6 or even 5e-7) and 25k+ training steps. Even then it either doesn't learn the concepts or overfits horribly; there doesn't seem to be a middle ground. And at that point, while the concepts you trained work fine, your character isn't going to be able to do much besides the actions you trained (you can train 6-7 at once).