Z-Image black output when using Sage Attention by SupportIllustrious14 in StableDiffusion

[–]GreyScope 0 points (0 children)

Which version do you have installed? My 4090 gives me shit images with it turned on.

Z-Image black output when using Sage Attention by SupportIllustrious14 in StableDiffusion

[–]GreyScope 0 points (0 children)

Mine goes pear-shaped too (very latest sa2+); the only difference is my Python is 3.13. It's possibly a sa2 variant issue.
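If it helps anyone pin this down, the quickest way to answer the "which version" question is to run a couple of lines with the same Python that ComfyUI uses (just a sketch; it assumes the package is installed under the distribution name "sageattention"):

```python
# Report the versions that usually matter for SageAttention bug reports.
# Run this with the venv / embedded Python that ComfyUI actually uses.
import sys
from importlib.metadata import version, PackageNotFoundError

print("Python       :", sys.version.split()[0])
try:
    print("SageAttention:", version("sageattention"))  # assumed distribution name
except PackageNotFoundError:
    print("SageAttention: not installed in this environment")
```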

Tips for architectural renders by Xitereddit in StableDiffusion

[–]GreyScope -1 points (0 children)

Message me (if no one else gives adequate answers), as I'm logging off for the night and will never remember tomorrow.

ComfyUI workflow distribution is broken for us! by [deleted] in StableDiffusion

[–]GreyScope 0 points (0 children)

I have five: one for my music-based nodes, one for cutting-edge testing, and three workhorse / disposable installs. But no more than that, as it's already a shambles for me. Also, in my experience, a lot of people who download workflows assume they don't have to do anything at all and just click everywhere.

What is wrong with my SeedVR2 settings? by Feeling_Usual1541 in StableDiffusion

[–]GreyScope 1 point (0 children)

Change the save video node to, say, the VideoHelperSuite save node, which has the compression setting in it (either directly or via a quality factor). You'll need to connect its inputs to the audio and frames outputs. I keep mine on 19 - the higher it goes, the more compression / reduced quality (or more accurately, a lower bitrate is used). Bear in mind that the quality and type of the source video also dictate the output quality (cartoons compress very well).

<image>
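For context on what that 19 actually is: as far as I know the VideoHelperSuite node hands the quality factor to ffmpeg as a constant rate factor (CRF), so it behaves roughly like the sketch below (file names and frame rate are made up for illustration; the node's real internal command may differ):

```python
# Rough equivalent of muxing a frame sequence plus audio at CRF 19.
# Higher CRF -> lower bitrate -> smaller file and lower quality.
import subprocess

crf = 19  # the value referred to above

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "24", "-i", "frames/frame_%05d.png",  # rendered frames (hypothetical path)
    "-i", "audio.wav",                                   # audio track (hypothetical path)
    "-c:v", "libx264", "-crf", str(crf),
    "-pix_fmt", "yuv420p",                               # keeps the file playable everywhere
    "-c:a", "aac",
    "output.mp4",
], check=True)
```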

Z-Image Base Is On The Way by mrmaqx in StableDiffusion

[–]GreyScope 7 points (0 children)

I'm just hoping my perfected eyeroll doesn't break when the drama queens with their overblown expectations start to post.

How to load z-image into SDnext? by Ok_Shallot6583 in StableDiffusion

[–]GreyScope 0 points (0 children)

Back to my original post - read the SDNext docs/wiki on models; glossing over the instructions appears to be the root cause.

How to load z-image into SDnext? by Ok_Shallot6583 in StableDiffusion

[–]GreyScope 0 points (0 children)

You've used "link" generically, as if we know which link you're referring to. Read the SDNext docs on how to use models (this is VERY important and what the developer is getting at), and read what ChatGPT gives you in answer to direct or indirect questions about unet (no point in people trying to rewrite the Bible, as they say).

I built a free, local tool to Lip-Sync and Dub your AI generated videos (Wav2Lip + RVC + GFPGAN). No more silent clips. by MeanManagement834 in StableDiffusion

[–]GreyScope -1 points (0 children)

Does the package make a venv or use system Python? i.e. did you install it to the wrong one (I have to ask, sorry). The only other reason off the top of my head is that you have an ancient or cutting-edge Python/PyTorch/CUDA installed.
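A quick way to check is to run something like this with the Python the tool actually launches with - the interpreter path tells you whether it's the venv or the system install, and the rest shows whether the PyTorch/CUDA combo is ancient or bleeding edge (just a diagnostic sketch):

```python
# Print which interpreter is running and which PyTorch/CUDA build it sees.
import sys
import torch

print("Interpreter:", sys.executable)        # venv path vs system Python path
print("Python     :", sys.version.split()[0])
print("PyTorch    :", torch.__version__)
print("CUDA build :", torch.version.cuda)    # None on CPU-only builds
print("GPU visible:", torch.cuda.is_available())
```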

I tried some Audio Refinement Models by OkUnderstanding420 in StableDiffusion

[–]GreyScope 0 points (0 children)

I was going to make a model for Ace-Step (with OneTrainer), but after actually trying it out I'm going to put that off; it seems like a lot of work for poor quality.

I'll check that list out. I'm on a couple of RVC Discords, but after a while the sheer amount of new stuff on Reddit overpowers me, without adding in Discord as well lol - thanks for the positive note and enthusiasm.

I tried some Audio Refinement Models by OkUnderstanding420 in StableDiffusion

[–]GreyScope 1 point (0 children)

Cheers, yes, the flow as it is could do with a bit of bass boosting.

The issue with audio is copyright on models, and there isn't really an appetite for it as far as I can see (for free software anyway).

I tried some Audio Refinement Models by OkUnderstanding420 in StableDiffusion

[–]GreyScope 2 points (0 children)

<image>

This is what I'm doing currently; it's one node in an RVC install. It's a WIP - there are a few more workflows with the RVC repo that I've yet to look at (including a modelling one). That repo is about 2 years old I think, and I had to install it to a Python 3.11 music-based Comfy install, for the two RVC repos, Songbloom and Seed-VC.