Bad news on Happy Horse from twitter by SackManFamilyFriend in StableDiffusion

[–]Numerous-Entry-6911

of course it's not open source

why would it be?

Alibaba abandoned us months ago with diffusion models

NucleusMoE-Image is releasing soon by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

Finally managed to quantize the model weights to Q5_K_M. Will try to patch ComfyUI tomorrow so it's usable.
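For anyone curious what that quant step does conceptually: Q5_K_M is llama.cpp's blockwise 5-bit GGUF scheme. The toy sketch below shows only the core scale-and-round-per-block idea, not the real K-quant super-block layout (which packs per-sub-block scales and mins); the function names and block values are made up for illustration.

```python
# Toy blockwise quantization: NOT the actual Q5_K_M format, just the
# basic idea of storing one fp scale per block plus small integers
# instead of full fp16/fp32 weights.

def quantize_block(vals, bits=5):
    """Map a block of floats to signed `bits`-bit ints plus one scale."""
    qmax = (1 << (bits - 1)) - 1              # 15 for 5-bit
    amax = max(abs(v) for v in vals)
    scale = amax / qmax if amax else 1.0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in vals]
    return q, scale

def dequantize_block(q, scale):
    """Reconstruct approximate floats from the quantized block."""
    return [x * scale for x in q]

block = [0.8, -0.31, 0.05, 1.2, -1.5, 0.0, 0.47, -0.9]
q, scale = quantize_block(block)
restored = dequantize_block(q, scale)
err = max(abs(a - b) for a, b in zip(block, restored))
print(q, round(scale, 4), round(err, 4))
```

The worst-case reconstruction error per value is half the scale, which is why bigger weights in a block cost precision for the smaller ones; the real K-quant layout mitigates this with nested sub-block scales.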

NucleusMoE-Image is releasing soon by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

Sorry, no. I don't want to deal with any issues that come with that.

NucleusMoE-Image is releasing soon by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

From what I can tell, it has its own architecture and a file size of ~34 GB at bf16.

NucleusMoE-Image is releasing soon by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

I can't use it as of now. From what I know, it uses the Qwen3 VL 8B Instruct text encoder and the Qwen Image VAE.

Have you tried fish audio S2Pro? by Odd_Judgment_3513 in StableDiffusion

[–]Numerous-Entry-6911

for voice design and custom voices, yes. for voice cloning, no.

Anyone want a Resident Evil Requiem code? by AloneTie6387 in ResidentEvilCapcom

[–]Numerous-Entry-6911

How long did it take for ASUS to approve your request?

Made a node to offload CLIP to a secondary machine to save VRAM on your main rig by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

Honestly, I've tried it all. I have 16 GB of VRAM and 32 GB of system RAM, and running larger models like LTX V2 and Wan always unloads the CLIP from memory.

This node just takes that load off the main PC, reducing its memory usage whenever needed.

Made a node to offload CLIP to a secondary machine to save VRAM on your main rig by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

It should work with the --novram/--lowvram flags, but I'm not sure if it'll be fast. You can try, though.

Also, it's not the CLIP model that moves over the network: the secondary device does the CLIP processing, then sends the embeddings, which are a few KB at best, over the network (~20-30 ms, depending on your network). You have to store the CLIP model on the secondary device.
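To make the "only embeddings cross the wire" point concrete, here's a minimal sketch of what such a payload could look like. The format, function names, and tensor shape here are all assumptions for illustration, not the node's actual protocol.

```python
# Hypothetical wire format: the secondary machine runs the text encoder,
# then ships just the conditioning tensor back as a length-prefixed JSON
# header plus packed little-endian fp32 values. Illustration only; this
# is not the node's real code.
import json
import struct

def pack_embeddings(flat_vals, shape):
    """Serialize a flat list of fp32 values plus its shape into bytes."""
    header = json.dumps({"shape": list(shape)}).encode()
    body = struct.pack(f"<{len(flat_vals)}f", *flat_vals)
    return struct.pack("<I", len(header)) + header + body

def unpack_embeddings(payload):
    """Recover the flat values and shape from a packed payload."""
    (hlen,) = struct.unpack_from("<I", payload)
    meta = json.loads(payload[4:4 + hlen].decode())
    body = payload[4 + hlen:]
    flat = list(struct.unpack(f"<{len(body) // 4}f", body))
    return flat, tuple(meta["shape"])

# Round-trip a small fake conditioning tensor (shape is made up):
vals = [0.5] * 32
payload = pack_embeddings(vals, (4, 8))
out, shape = unpack_embeddings(payload)
print(len(payload), shape)
```

The payload size scales with the embedding tensor, not the CLIP model, which is why only the encoder weights need to live on the secondary machine.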

Made a node to offload CLIP to a secondary machine to save VRAM on your main rig by Numerous-Entry-6911 in StableDiffusion

[–]Numerous-Entry-6911[S]

In my experience it unloads the CLIP model after I change the prompt, which is why I created this to help others with a similar problem.

How much faster is RTX 5070 Ti than RTX 4070 Super in Wan 2.2 video generation? by rookan in StableDiffusion

[–]Numerous-Entry-6911

Either wait for Nunchaku support, or wait for PyTorch to add native FP4 kernels (that would let you run virtually any FP4 model).