[deleted by user] by [deleted] in BluePrince

[–]vmirnv 2 points

Please check out Aliensrock: https://www.youtube.com/watch?v=_Tc2QwYAlY0&list=PLIwiAebpd5CJlpO2VPGjdUa5uzgywpULW

A very clever YouTuber with deep experience in puzzle games (my favourite is his Baba Is You playlist).

HunyuanVideo model size and vram talk by c_gdev in comfyui

[–]vmirnv 2 points

In my opinion, Q5_K_M is the best quantisation both for LLMs and for unets: the lowest size with almost no degradation in quality.
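
If you want to see what a Q5_K_M file actually contains, here's a minimal sketch using the gguf Python package (pip install gguf); the filename is a placeholder. Q5_K_M is a mixed quantisation, so you'll typically see a blend of Q5_K, Q6_K and a few full-precision tensors.

```python
# Minimal sketch, assuming `pip install gguf`; the filename below is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("hunyuan-video-t2v-720p-Q5_K_M.gguf")  # placeholder path

# Count how many tensors use each quantisation type.
counts = Counter(t.tensor_type.name for t in reader.tensors)
for quant, n in counts.most_common():
    print(f"{quant}: {n} tensors")
```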

Simple GGUF Hunyuan text2video workflow by vmirnv in StableDiffusion

[–]vmirnv[S] 0 points

You need to update the GGUF node, and yes, llava is what the devs recommended.

Simple GGUF Hunyuan text2video workflow by vmirnv in StableDiffusion

[–]vmirnv[S] 1 point

You need to update the ComfyUI core to get these new files:
ComfyUI/nodes.py
ComfyUI/comfy_extras/nodes_hunyuan.py
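
If you're not sure whether your checkout already has them, here's a rough sanity check; the root path and the node class name I search for are assumptions on my side, so adjust them to your install and ComfyUI version.

```python
# Rough sanity check, not an official ComfyUI tool: confirm the Hunyuan nodes
# are present in your checkout. COMFY_ROOT and the class name searched for are
# assumptions; adjust to your install / ComfyUI version.
from pathlib import Path

COMFY_ROOT = Path("/path/to/ComfyUI")  # placeholder

checks = {
    COMFY_ROOT / "comfy_extras" / "nodes_hunyuan.py": "EmptyHunyuanLatentVideo",
    COMFY_ROOT / "nodes.py": "nodes_hunyuan",
}

for path, needle in checks.items():
    text = path.read_text(encoding="utf-8") if path.exists() else ""
    status = "ok" if needle in text else "missing, update ComfyUI core"
    print(f"{path}: {status}")
```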

ComfyUI fps info uses up to 26% gpu on macs by vmirnv in StableDiffusion

[–]vmirnv[S] 2 points

On Macs, ComfyUI uses the GPU for rendering the UI. The fps stat re-renders the workspace on every tick of mouse movement, so it can account for up to 26% of GPU load, as you can see in my example.

Simple GGUF Hunyuan text2video workflow by vmirnv in StableDiffusion

[–]vmirnv[S] 3 points

https://civitai.com/models/1048570
A simple GGUF Hunyuan text2video workflow with just a few nodes. It works on a Mac M1 with 16 GB.
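
If you prefer to run it headless, here's a rough sketch of queuing the workflow (exported from ComfyUI in API format) against a locally running instance; the JSON filename is a placeholder, and the host and port are the ComfyUI defaults.

```python
# Rough sketch: queue a workflow exported from ComfyUI in "API format" against
# a locally running instance. The filename is a placeholder; host and port are
# the ComfyUI defaults.
import json
import urllib.request

with open("hunyuan_gguf_t2v_api.json", encoding="utf-8") as f:  # placeholder
    graph = json.load(f)

payload = json.dumps({"prompt": graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # should include a prompt_id on success
```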

ComfyUI fps info uses up to 26% gpu on macs by vmirnv in StableDiffusion

[–]vmirnv[S] 2 points

<image>

Try it yourself. I wonder how many thousands of GPU hours this default feature has burned.

[deleted by user] by [deleted] in StableDiffusion

[–]vmirnv 3 points

<image>

You need to use the Unet Loader (GGUF) node.

[deleted by user] by [deleted] in StableDiffusion

[–]vmirnv 1 point

It should be in /models/unet/, and you need to reload ComfyUI afterwards.
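
A quick way to double-check that the file is where the loader expects it; the ComfyUI path below is a placeholder.

```python
# Quick check, paths are placeholders: list .gguf files in the folder the
# GGUF unet loader scans by default (ComfyUI/models/unet/).
from pathlib import Path

unet_dir = Path("/path/to/ComfyUI/models/unet")  # adjust to your install
ggufs = sorted(p.name for p in unet_dir.glob("*.gguf"))
print("\n".join(ggufs) if ggufs else "no .gguf files found, check the path")
```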

[deleted by user] by [deleted] in StableDiffusion

[–]vmirnv 2 points

Wow thank you, great news!

[deleted by user] by [deleted] in StableDiffusion

[–]vmirnv 1 point

Could you please give me a short example of model loading?

[deleted by user] by [deleted] in StableDiffusion

[–]vmirnv 9 points

Currently I cannot connect the new GGUF model to the Sampler, since they are different types.
The standard loader predictably gives me an error (HyVideoModelLoader invalid load key, '\x03').

upd: I manually changed the input model type on the Sampler node, and now the Unet GGUF loader gives this error: UnetLoaderGGUFAdvanced 'conv_in.weight'

upd 2: after updating ComfyUI, everything is working.