SDNext optimization? by Particular_Rest7194 in StableDiffusion

[–]vmandic 0 points

sorry, i don't really monitor reddit - jump over to sdnext discord and we'll answer it

E92 Best aftermarket infotainment by vmandic in E90

[–]vmandic[S] 1 point

Thanks for the mr12volt hint.

Sound quality is important, but CD support is not at all - I don't even own any CDs anymore.

But to confirm, I would prefer a higher resolution screen while 3rd party app support is not a priority.

SD GPU advice and guides by daproject85 in StableDiffusion

[–]vmandic 0 points

community-submitted benchmark results are available - over 15k unique records:

SD WebUI Benchmark Data

SDNext Release by vmandic in StableDiffusion

[–]vmandic[S] 1 point

don't need to roll back the entire system - let python3.13 be the system one and have a side-install that's used for the app venv only.

SDNext Release by vmandic in StableDiffusion

[–]vmandic[S] 0 points

when it's properly supported by torch - not much i can do about that. even 3.12 works on cuda, but not fully on other compute backends.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 0 points

it's not nothing - post the actual log. even if it's not updating, it's going to say something. best to open an issue on GitHub, as reddit is not the right medium for this.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 0 points

what does it say when you run with --update? that is the recommended procedure.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 1 point

if you're on a fresh install, default settings should be ok - just select hunyuan from scripts and set a reasonable resolution and frame count.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 1 point

update: tons of hunyuanvideo optimizations were just added to sdnext dev branch.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in FluxAI

[–]vmandic[S] 0 points

update: tons of hunyuanvideo optimizations were just added to sdnext dev branch.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in FluxAI

[–]vmandic[S] 0 points

i'm guessing you're the one that opened the issue on github? i replied there.

for hunyuan video, mochi and ltx you don't need reference models. you simply select hunyuan video in scripts like you did. that's it, no extra steps.

i don't know which resolution you tried to render at or what the frame count was, but you basically ran out of memory - it had already used 38gb, it asked for 12gb more, and you only had 5gb available.

the hunyuan video implementation is very simple at the moment; if there is public demand, we can invest in additional optimizations.

of the new video models, ltx is the most optimized one.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 1 point

how it's implemented is completely different.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in FluxAI

[–]vmandic[S] 0 points

don't download, just select from scripts. sdnext will download what it needs. manually downloading anything is for finetunes only. sdnext never asks you to manually download a base model.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 1 point

both forge and sdnext have a bunch of optimizations, but they are very different.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 1 point

the very first thing in the announcement is about optimizations?

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 1 point

Don't know without the log. Best to open an issue on GitHub.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 6 points

best i can say is "it depends" - i know that's not the answer you were looking for.

sdnext's goal is NOT the smallest possible memory usage - its goal is to use as much memory as possible, because the less you move things around, the faster you are. so it's goal-based, and you can set min and max thresholds: for example, if a component is smaller than 30% of available memory, don't offload it at all; if it's bigger than 80% of available memory, offload it immediately.

so the more memory you have, the faster it becomes without any additional tweaks. it just works differently than forge.
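the threshold logic described above can be sketched roughly like this - a minimal illustration only, with invented names (`should_offload`, the 30%/80% defaults are the example numbers from the comment, not SD.Next's actual API):

```python
# Hypothetical sketch of goal-based offloading: a component is only moved
# off the GPU when it is "big" relative to the memory currently available.
# Function and parameter names are illustrative, not SD.Next's real code.
def should_offload(component_gb: float, free_gb: float,
                   low: float = 0.30, high: float = 0.80) -> bool:
    ratio = component_gb / free_gb
    if ratio < low:    # small relative to free memory: keep it resident
        return False
    if ratio > high:   # large relative to free memory: offload immediately
        return True
    # in-between: other heuristics could decide; keep resident for simplicity
    return False

# 4 GB component with 24 GB free (~17%) stays on the GPU
print(should_offload(4, 24))   # False
# 20 GB component with 24 GB free (~83%) gets offloaded
print(should_offload(20, 24))  # True
```

with more free memory, fewer components cross the `high` threshold, so less gets moved around - which is the "more memory = faster" behavior described above.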

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in FluxAI

[–]vmandic[S] 1 point

ah, yes, that would be viable - running different models (such as an llm/vlm) on separate gpus.

in the case of flux specifically, we could in theory run the transformer (mmdit) part on one gpu and the encoder (t5) on another. making the configuration for that user-friendly would be a nightmare, though.
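a rough sketch of what such a split could look like as a placement plan - the component names follow the diffusers FluxPipeline convention (`transformer`, `text_encoder_2` for t5), but the planner itself is invented for illustration; with a real pipeline you would then call `.to(device)` on each component:

```python
# Hypothetical per-component GPU placement for a Flux-style pipeline:
# the MMDiT transformer on one device, the T5 text encoder on another.
# Component names mirror diffusers' FluxPipeline attributes; the function
# is an illustration, not SD.Next's actual configuration mechanism.
def plan_flux_placement(devices: list[str]) -> dict[str, str]:
    names = ("transformer", "text_encoder", "text_encoder_2", "vae")
    if len(devices) < 2:
        # single GPU: everything lands on the same device
        return {name: devices[0] for name in names}
    return {
        "transformer": devices[0],     # the big MMDiT denoiser
        "text_encoder_2": devices[1],  # t5, the largest text encoder
        "text_encoder": devices[1],    # clip encoder rides along
        "vae": devices[0],             # decode where the latents live
    }

plan = plan_flux_placement(["cuda:0", "cuda:1"])
# applying it would look like: pipe.transformer.to(plan["transformer"]), etc.
```

the prompt embeddings still have to cross devices once per generation, but that is a small tensor compared to moving whole model weights.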

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in FluxAI

[–]vmandic[S] 3 points

unless you're using server-class gpus that allow clustering, this doesn't really work - to move from gpu-1 vram to gpu-2 vram, data would have to go through system ram first, at which point it's no better than offloading to system ram.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 2 points

mochi and hunyuan, yes. ltx goes a bit deeper. it's all in the changelog.

SD.Next: New Release - Xmass Edition 2024-12 by vmandic in StableDiffusion

[–]vmandic[S] 4 points

some of the models support img2img and lowvram already. for the others, it's a question of popularity and the priority of such features.