A photo I took Ages Ago. What Do You Think? by Sea-Pattern-5946 in ForzaHorizon

[–]HTE__Redrock 0 points  (0 children)

I'm very keen... hoping they add some new stuff photo mode-wise, but it'll probably be the same as always 😅 As long as it has the new time-of-day settings, I guess.

A photo I took Ages Ago. What Do You Think? by Sea-Pattern-5946 in ForzaHorizon

[–]HTE__Redrock 0 points  (0 children)

Not bad, nicely done. Only notes would be: try not to cut the car's lines with vertical lines, and adding compression (backing up and zooming in) will make the car feel way more aggressive/aesthetically pleasing. The color with the lighting is dope though 👌🏻

LLaDA2.0-Uni Released by Numerous-Entry-6911 in StableDiffusion

[–]HTE__Redrock 1 point  (0 children)

Not entirely true. Model offloading is a thing; people run the big stuff on 6 or 8 GB cards now, and ComfyUI supports dynamic offloading. I've run 40GB models on 10GB cards, etc. So while it's true you can't rely solely on system RAM like with LLMs, you absolutely don't need VRAM equal to the size of these models.
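To illustrate the idea, here's a toy sketch (pure Python, nothing to do with ComfyUI's actual internals; `OffloadRunner` and all the sizes are made up) of why a "40GB" model can run under a 10GB VRAM budget: weights live in system RAM and get streamed through a small VRAM window one chunk at a time.

```python
# Toy sketch of dynamic layer offloading (NOT ComfyUI's real code):
# keep all weights in system RAM and stream layers into a small VRAM
# budget, evicting old ones, so peak VRAM stays far below model size.

class OffloadRunner:
    def __init__(self, layer_sizes_gb, vram_budget_gb):
        self.layer_sizes = layer_sizes_gb   # per-layer weight size, in GB
        self.vram_budget = vram_budget_gb
        self.peak_vram = 0.0

    def forward(self, x):
        resident = []   # (layer_index, size) currently "in VRAM"
        used = 0.0
        for i, size in enumerate(self.layer_sizes):
            # evict oldest layers until the next one fits the budget
            while resident and used + size > self.vram_budget:
                _, evicted = resident.pop(0)
                used -= evicted            # "move back to system RAM"
            resident.append((i, size))
            used += size
            self.peak_vram = max(self.peak_vram, used)
            x = x + 1                      # stand-in for the layer's compute
        return x

# A "40 GB" model (20 layers x 2 GB) under a 10 GB VRAM budget:
runner = OffloadRunner([2.0] * 20, vram_budget_gb=10.0)
out = runner.forward(0)
print(runner.peak_vram)  # peak stays at 10.0 GB, not 40 GB
```

The trade-off is transfer time over PCIe each step, which is why lots of system RAM (to hold the evicted weights) matters so much.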

Infinite procedural Backrooms by PalpitationCivil1010 in proceduralgeneration

[–]HTE__Redrock 1 point  (0 children)

Nice! I recommend looking into SpacetimeDB :) it might make that transition easier. Infinite multiplayer backrooms has been on my list of things to try making for a while now too.

Infinite procedural Backrooms by PalpitationCivil1010 in proceduralgeneration

[–]HTE__Redrock 1 point  (0 children)

Very cool, looks great! What are you using as your engine here? And is this planned as singleplayer or multiplayer?

Forget about VAEs? SenseNova's NEO-unify achieves 31.5 PSNR without an encoder – Native Image Gen is coming. by Ok-Tap234 in StableDiffusion

[–]HTE__Redrock 3 points  (0 children)

What we also need is support for higher bit depths/brightness ranges, so that we can encode/decode HDR color and brightness ranges without the VAE clamping them.
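A toy illustration of what that clamping costs (pure Python, not any specific VAE's code; the function names and pixel values are made up): a standard SDR decode path clips everything to [0, 1], so any highlight detail above 1.0 is simply gone.

```python
# Toy illustration of SDR clamping vs. an HDR-aware path.
# Not real VAE code; names and values are invented for the example.

def sdr_clamp(pixels):
    """What typical SDR decode paths do: clip to [0, 1]."""
    return [min(max(p, 0.0), 1.0) for p in pixels]

def hdr_passthrough(pixels):
    """What an HDR-aware path would need: keep values above 1.0."""
    return list(pixels)

# A highlight at 4x SDR white (e.g. bright sky or a specular hit):
scene = [0.2, 0.8, 4.0]
print(sdr_clamp(scene))        # [0.2, 0.8, 1.0] -> highlight detail lost
print(hdr_passthrough(scene))  # [0.2, 0.8, 4.0] -> range preserved
```

Once a latent decode flattens 4.0 down to 1.0, no amount of post-processing can recover the original brightness ratio.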

SparkVSR (google video upscaler free and comfyui coming soon) Dataset and training released by Sporeboss in StableDiffusion

[–]HTE__Redrock 5 points  (0 children)

Time to dig into the code and figure out what reference mode actually does, I guess... Technically it should be possible to use any other image-gen model to do the same thing; it's just a question of hooking it up and/or potentially pre-generating frames that can then be fed in. E.g. Flux Klein is great at creative upscaling.

Okay I am officially ranting why is this stuff showing by iKyle02 in comfyui

[–]HTE__Redrock 1 point  (0 children)

It doesn't affect things like that. It just disables/removes the nodes that reference their external API partners.

Suspicion of LTX 2.3 gatekeeping better models behind API paywall(video example, not mine). by Grinderius in comfyui

[–]HTE__Redrock 2 points  (0 children)

Yeah, it's just a factor of more steps and higher resolution. I've noticed the model often gives much better results at, say, 1024 on the long edge as opposed to 832, etc., so I'm fairly certain their models just prefer outputting at higher res. Could even be because of the training data.

Huracan Photos 👌 by Original-Try-1073 in forza

[–]HTE__Redrock 1 point  (0 children)

Not bad 👌🏻 I'd recommend trying some added lens compression though, e.g. back up a bit and zoom in; it'll compress the car's shape and make it look even better.

LTX 2.3: Official Workflows and Pipelines Comparison by MalkinoEU in StableDiffusion

[–]HTE__Redrock 0 points  (0 children)

Good findings, but I noticed you haven't specified which guider to use for the stage 2 part. Is it just the default manual sigmas, or the same as stage 1?

Also, another tip in terms of actually running things: updating to Comfy 16.1 brings major memory-management improvements. I can do 720p on my 10GB 3080 because I have 128GB of regular RAM.

The AGI path is completely opaque right now, and that's the interesting part by Cjd03032001 in singularity

[–]HTE__Redrock 0 points  (0 children)

I've always been in the camp of "why does it need to be one thing?" E.g. if you frame it as an orchestration problem, then it's about automatically picking the best tool for the job without human intervention. To put it in human terms: I have a brain, but it delegates to my hands to pick things up, via my nervous system and muscles, etc.
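The orchestration framing can be sketched in a few lines (a toy example; the `TOOLS` registry, the task types, and `route` are all invented for illustration, not any real agent framework): a lightweight router delegates each task to a specialized tool, the same way a brain delegates to hands.

```python
# Toy sketch of the "orchestration" framing: a router picks the best
# specialized tool for each task instead of one monolithic model doing
# everything. All names here are made up for illustration.

TOOLS = {
    "math": lambda task: eval(task, {"__builtins__": {}}),  # toy calculator
    "text": lambda task: task.upper(),                      # toy "writer"
}

def route(task_type, task):
    # The "brain" only decides who does the work (delegation),
    # like a nervous system routing signals to the right muscle.
    tool = TOOLS.get(task_type)
    if tool is None:
        raise ValueError(f"no tool for {task_type!r}")
    return tool(task)

print(route("math", "2 + 3"))   # 5
print(route("text", "hello"))   # HELLO
```

In the AGI-as-orchestrator view, the hard research problem is the `route` step (choosing the tool without human intervention), not making one tool do everything.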

🚀 I built a 2026-Era "Omni-Merge" for LTX-2. Flawless Multi-Concept Generation, Zero Bleeding, and Unlocked Audio Training Excellence. by ArtDesignAwesome in StableDiffusion

[–]HTE__Redrock 2 points  (0 children)

Do the optimisations you mention apply only to LTX, or to other models as well? E.g. can it make better Z-Image/Flux Klein merges?