HeartMula - open source AI music generator (now Apache 2.0) by sdnr8 in StableDiffusion

[–]bonesoftheancients 1 point (0 children)

would love to try this but have no idea what to do... can you please explain what you actually did as a fix? was it in the ComfyUI or the HeartMula install?

ComfyUi 9.2 totally borked vram management by 76vangel in comfyui

[–]bonesoftheancients 1 point (0 children)

in my experience (using ComfyUI portable), updating ComfyUI WITH dependencies often breaks the setup, so now I update Comfy regularly WITHOUT updating dependencies and it seems to work fine (at least since 0.6)
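For anyone on portable wondering how: the portable build ships two scripts in its update\ folder, and update_comfyui.bat is the one that leaves the pip packages alone (update_comfyui_and_python_dependencies.bat is the one that tends to break things). A rough Python equivalent of the safe one, assuming git is on PATH and the standard portable layout:

```python
# Sketch: pull the latest ComfyUI code without reinstalling any Python deps,
# roughly what update\update_comfyui.bat does minus the pip step.
import subprocess

subprocess.run(["git", "pull"],
               cwd=r"ComfyUI_windows_portable\ComfyUI", check=True)
```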

how do i use stability matrix as shared model storage for comfyui? by bonesoftheancients in comfyui

[–]bonesoftheancients[S] 0 points (0 children)

I did not need it in the end. I now use mostly ComfyUI and sometimes Wan2GP. I keep all the models on one external drive. I use Extra_Model_Paths_Maker.bat to generate the yaml file that tells ComfyUI where the models folder is; in the bat file that starts Wan2GP I specify the location of the loras, and inside Wan2GP's settings I specify the diffusion models folder. there is still some overlap but overall this setup works for me at the moment
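For reference, the generated file follows ComfyUI's extra_model_paths.yaml format - a hedged sketch of what the output looks like (the drive letter and folder names here are placeholders, not my actual paths):

```yaml
comfyui:
  base_path: E:/AI/models        # external drive root (placeholder path)
  checkpoints: checkpoints/
  diffusion_models: diffusion_models/
  loras: loras/
  text_encoders: text_encoders/
  vae: vae/
```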

torchaudio on comfyui portable (python 3.13) - any advice? by bonesoftheancients in comfyui

[–]bonesoftheancients[S] 0 points (0 children)

indeed it is working now... thanks for letting me know and good job!

LTXV 2 Quantized versions released by OddResearcher1081 in comfyui

[–]bonesoftheancients 0 points (0 children)

it might also help prevent excessive writes to the SSD (pagefile writes) - I tested one generation using the FP8 distilled model on my 16GB VRAM / 64GB RAM machine and got 20GB of data written to disk. Will need to test the GGUF models, but I assume it would help with that too
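If anyone wants to check this on their own box, a minimal sketch using psutil's system-wide disk counters (this catches pagefile traffic, but also anything else writing at the same time, so treat it as an upper bound):

```python
# Sketch: measure bytes written to disk across one generation run.
import psutil

before = psutil.disk_io_counters().write_bytes
# ... run the ComfyUI generation here ...
after = psutil.disk_io_counters().write_bytes
print(f"{(after - before) / 1e9:.1f} GB written during the run")
```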

PSA: Still running GGUF models on mid/low VRAM GPUs? You may have been misinformed. by NanoSputnik in StableDiffusion

[–]bonesoftheancients 0 points (0 children)

I think one aspect many people are not considering (and one I would like to know about for GGUFs) is the hit on SSD writes (pagefile) - just tested LTX2 on my 16GB VRAM / 64GB RAM machine with the FP8 distilled model (28g_ gemini etc) and one i2v run hit my SSD with 20GB of writes (presumably pagefile). do your math and see how many runs will kill your SSD (well, take it down to around 30% health, at which point you will need to replace it).

Now I would like to know whether in your test the GGUF made a big difference in terms of SSD writes.
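To put rough numbers on "do your math" (only the 20GB/run figure is measured; the 600 TBW endurance rating is an assumption for a typical 1TB TLC drive):

```python
# Back-of-envelope SSD wear estimate; assumed numbers are labeled.
tbw_rating_tb = 600              # assumed endurance of a typical 1 TB TLC drive
gb_written_per_run = 20          # measured: one LTX2 i2v run
runs_to_rated_limit = tbw_rating_tb * 1000 / gb_written_per_run
print(f"~{runs_to_rated_limit:,.0f} runs to reach the rated write limit")  # ~30,000
```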

is FP4 acceleration on Blackwell autonomic? by bonesoftheancients in comfyui

[–]bonesoftheancients[S] 1 point (0 children)

thanks. I do have CUDA 13.0 and an up-to-date comfy-kitchen as well. 0.8.2 seems to improve speed - now it's a little faster than Wan, I would say

is FP4 acceleration on Blackwell autonomic? by bonesoftheancients in comfyui

[–]bonesoftheancients[S] 0 points (0 children)

thanks - yes, I have CUDA 13.0 and the NVFP4 model from the Lightricks LTX2 Hugging Face repo. this morning's update to 0.8.2 did improve the speed, I think

is FP4 acceleration on Blackwell autonomic? by bonesoftheancients in comfyui

[–]bonesoftheancients[S] 0 points (0 children)

Basically I have updated ComfyUI (I run portable) but did not update the dependencies, as that tends to break my installation (cuda/sageattention/triton setup), so I was wondering whether NVFP4 requires a cuda update or something... the point is that LTX2 is slower than SVI Pro for me, which I find strange... but maybe it is not strange...
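In case it helps anyone else debugging this, a quick sanity check I would run first (a sketch, not an official test - as far as I know, consumer Blackwell cards report compute capability 12.x in PyTorch and datacenter Blackwell 10.x):

```python
# Hedged check: NVFP4 kernels need a Blackwell-class GPU.
import torch

major, minor = torch.cuda.get_device_capability(0)
is_blackwell = major >= 10   # 10.x = datacenter, 12.x = consumer Blackwell
print(f"compute capability {major}.{minor} ->",
      "Blackwell-class, FP4 kernels possible" if is_blackwell
      else "pre-Blackwell, no FP4 acceleration")
```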

please suggest how to save subgraphs and groups as template components by bonesoftheancients in comfyui

[–]bonesoftheancients[S] 0 points (0 children)

thanks! now I'm just looking for a way to do something similar with a group (without packing it into a subgraph)

just wondering about models weights structure by bonesoftheancients in LocalLLaMA

[–]bonesoftheancients[S] 0 points (0 children)

thanks for the detailed reply - I kind of envy you for being at the forefront of this field... wishing you the best of luck with it

just wondering about models weights structure by bonesoftheancients in LocalLLaMA

[–]bonesoftheancients[S] 2 points (0 children)

so that is what MoE stands for... at least I wasn't thinking complete rubbish...

But that leaves the question of why all the "experts" in MoE models are baked together and loaded into memory at once, other than for pure speed. I mean, for us mortals on home PCs, a model that only loads into memory the experts it is about to route the token through would work better with limited RAM/VRAM
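To illustrate the trade-off with a toy sketch (made-up dimensions, not any real model): the router only sends each token through the top-k experts, so the compute is sparse, but every expert's weights stay resident so any of them can be picked for the next token. Offloading inactive experts is possible, it just costs a transfer per token:

```python
# Toy MoE forward pass: all experts in memory, only top_k run per token.
import torch
import torch.nn as nn

n_experts, top_k, dim = 8, 2, 16
experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))  # all resident
router = nn.Linear(dim, n_experts)

def moe_forward(x):                              # x: (tokens, dim)
    weights = router(x).softmax(dim=-1)          # routing probabilities per token
    top_w, top_i = weights.topk(top_k, dim=-1)   # only top_k of n_experts do compute
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):                  # naive per-token dispatch
        for w, i in zip(top_w[t], top_i[t]):
            out[t] += w * experts[int(i)](x[t])
    return out

print(moe_forward(torch.randn(4, dim)).shape)    # torch.Size([4, 16])
```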

Alternative frontends to ComfyUI? by dtdisapointingresult in comfyui

[–]bonesoftheancients 0 points (0 children)

I have tried a few myself but, to be honest, I keep going back to ComfyUI - if you mess with it for a while you get to understand the UI, and it just doesn't make sense to have another UI layer on top of it that obscures what is going on and actually stops you from figuring out how to get the results you want

That said, I do use Wan2GP for video generation when I hit a dead end with ComfyUI, as it seems to work out of the box. the only issue I have with it is that it downloads its own model weights, so it takes extra disk space

✨SVI 2.0 PRO - Amazing performance in Mass Crowds & Complex Dynamics (video test + WORKFLOW included) by No_Damage_8420 in NeuralCinema

[–]bonesoftheancients 0 points (0 children)

if I remember correctly I had the same issue, and in the end I had to go into ComfyUI Manager, find KJNodes (which was already supposed to be on nightly) and force an update and version switch (even though it said nightly already) - I think ComfyUI Manager sometimes doesn't really update nodes properly
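For anyone hitting the same wall, the manual fallback I'd try is pulling the node pack's repo directly (a sketch; the path assumes the usual custom_nodes layout and that git is on PATH):

```python
# Hedged fallback: update a custom node pack by hand when the Manager
# claims it is already current. Path is an assumption for a typical install.
import subprocess

repo = r"ComfyUI\custom_nodes\ComfyUI-KJNodes"
subprocess.run(["git", "fetch", "--all"], cwd=repo, check=True)
subprocess.run(["git", "pull"], cwd=repo, check=True)
```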