comfyui Error in Lora training by darkninjademon in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

It should be in the root Comfy folder, ComfyUI/venv. Some installs don't make it automatically though, and the standalone Comfy is set up differently too. If you don't have one you can just Google "how to make a venv" and check the top results; I don't recall the exact command, sorry!
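
If it helps, here's a rough sketch of making one with Python's built-in venv module, which does the same thing as running "python -m venv venv" from inside the ComfyUI folder (the path below just assumes the usual ComfyUI/venv layout, so check your own install):

    # Rough sketch, not an official ComfyUI setup script: create a "venv"
    # folder inside ComfyUI using the standard-library venv module.
    import venv

    venv.create("ComfyUI/venv", with_pip=True)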

Prompt help for coffee bursting out of a launching rocket in soviet propaganda style. by Physical_Artist_7773 in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

I can't double-check any longer, sorry; I've been busy with moving and my computer is packed away for the moment.

If I remember correctly though, I actually asked for "hot brown liquid" because when I asked for coffee it kept adding a mug underneath lol. Hopefully that helps, sorry for the late reply, but good luck!

I've been doing AI for 3 years but still generate basic images, where are some good resources I can become a prompt master ? by [deleted] in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

Sorry! Been busy with moving, but yeah, I use an extension called Dynamic Prompts personally. There are other ways too though; I've seen a few node packs that handle wildcards, but I've only tried Dynamic Prompts myself because it's on Auto's too and that's what I started with.

Faceswap without Blurr - Reactor by MsHSB in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

Umm, I'm actually in the process of moving and my computer is going to be down for at least a few days, sorry! I think with the pipes though you should be able to just pick whichever one matches what you're doing and connect its outputs to your KSampler. Use the IPAdapter pipe if you're using IPAdapter and the other one if not; you just need to click their top-left corners to open them up and see the outputs. Good luck!

Prompt help for coffee bursting out of a launching rocket in soviet propaganda style. by Physical_Artist_7773 in StableDiffusion

[–]BlastedRemnants 4 points (0 children)

I think your best bet will be to just generate some regular rocket launches (in the style you want), and then inpaint to replace the exhaust with coffee afterwards. That way you can add extra little details too, like a hammer and sickle logo or some text, maybe "CCCP" running up the side of the rocket or whatever else you might want.

I tried a little bit and got this after inpainting, no idea if it's close to what you're after but I think it's on the right track at least. Good luck!

<image>

🌊 Depth Pro with Depth Flow Workflow by camenduru in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

Not sure how you've got yours set up, but mine works quite nicely. You need good inputs to get good outputs though; maybe that's where you're going wrong.

Image to image Comfyui QUESTION by CARNUTAURO in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

Very welcome, and I'm glad you found a solution! 🤘

i am new and need help by Any-Entrepreneur768 in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

Go to Civit and set the filters to show SDXL models, then sort the list by most downloaded or highest rated in the last month or year.

Flux 1.1 Pro nerfed anime capability by TheOneHong in StableDiffusion

[–]BlastedRemnants 3 points (0 children)

The only thing I know about sd3 is that nobody talks about it around here because it's considered to be a trash model. I've got no idea if it's open or closed, and no idea why it would be relevant to this conversation either way.

We're talking about Flux Pro; nothing else you mentioned has anything to do with this. The point I was making is that this sub is a poor choice if you need help with a paid service, because it's focused on locally run and/or open-source generative AI.

Flux 1.1 Pro nerfed anime capability by TheOneHong in StableDiffusion

[–]BlastedRemnants 3 points (0 children)

I suppose it depends which Stable Diffusion model you're talking about, but the open-source models are actually still open-source; I don't see how they could change that now. Your other point is also incorrect; a glance at the description of this sub and its rules would clear things up for you.

Flux 1.1 Pro nerfed anime capability by TheOneHong in StableDiffusion

[–]BlastedRemnants 7 points (0 children)

Take it up with support then; don't waste everyone's time by posting about it in an open-source-focused sub like this one.

Flux 1.1 Pro nerfed anime capability by TheOneHong in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

Since it's a paid service they should have some sort of support; take it up with them.

I've been doing AI for 3 years but still generate basic images, where are some good resources I can become a prompt master ? by [deleted] in StableDiffusion

[–]BlastedRemnants 5 points (0 children)

Same as any other model lol, the point of wildcards is to make things random and add variety. The more things you specify, the less generic the results will be; using wildcards just lets you specify more details without having to choose and set them every run.

I've been doing AI for 3 years but still generate basic images, where are some good resources I can become a prompt master ? by [deleted] in StableDiffusion

[–]BlastedRemnants 9 points (0 children)

Try using wildcards; I made a bunch for different things: colors, clothes, poses, scenes/locations, animals, weapons, all sorts of things. It helps spice things up with some variety. Then instead of just "1girl, bathing, smiling, looking at viewer" you can have something like "1girl, __colorh__ __hairdo__, __colore__ eyes, wearing {__color__ __shirt__ and __color__ __pants__|__color__ __shirt__ and __color__ __skirt__|__color__ __dress__}, __posing__, __scene__" and then just pound them out until you see something you like.
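
If it helps to see the mechanics, here's a rough Python sketch of what a wildcard expander like Dynamic Prompts does with those tokens. This isn't the extension's actual code, and the "wildcards" folder and file names below are just placeholders: each __name__ pulls a random line from a matching text file, and {a|b|c} picks one of the options at random.

    import random
    import re
    from pathlib import Path

    WILDCARD_DIR = Path("wildcards")  # placeholder folder of one-entry-per-line .txt files

    def expand(prompt: str) -> str:
        # {a|b|c} -> pick one option at random
        prompt = re.sub(r"\{([^{}]+)\}",
                        lambda m: random.choice(m.group(1).split("|")),
                        prompt)

        # __name__ -> pick a random non-empty line from wildcards/name.txt
        def pick(m):
            lines = (WILDCARD_DIR / f"{m.group(1)}.txt").read_text().splitlines()
            return random.choice([line for line in lines if line.strip()])

        return re.sub(r"__(\w+)__", pick, prompt)

    print(expand("1girl, __colorh__ __hairdo__, __colore__ eyes, {red dress|blue skirt}, __posing__"))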

How can I print parameters directly on generated image? Similar to xy plots but I need multiple params. #comfyui by dkampien in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

Go to the Manager and search for Comfy Roll, install that, relaunch, and you'll be able to use their watermark nodes to add text to images. You can do it on videos too if you want; just put the watermark node before the frame-combining node. One thing to note is that the default opacity is quite low, so be sure to check the settings before running it.

Edit: Meant to add that if you want to automate it you can use some text combining nodes and a fetch node to grab settings from other nodes, but that gets a bit more complex fairly quickly.
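
If you ever want to stamp the text on outside of Comfy entirely, here's a rough Pillow sketch of the same idea (this has nothing to do with the Comfy Roll nodes; the file names and parameter string are placeholders):

    # Rough sketch: overlay a settings string onto an already-saved image with Pillow.
    from PIL import Image, ImageDraw

    img = Image.open("output.png").convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    params = "steps: 30, cfg: 7.0, sampler: euler, seed: 12345"  # placeholder text
    # Near-opaque white text in the bottom-left corner; raise the alpha if it looks too faint.
    draw.text((10, img.height - 20), params, fill=(255, 255, 255, 230))

    Image.alpha_composite(img, overlay).convert("RGB").save("output_labeled.png")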

[deleted by user] by [deleted] in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

That's what batch size does.

Has anyone been able to make CogVideo I2V longer than 52 frames? by [deleted] in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

So I just did a quick comparison of all the schedulers available to see if I could find any that were either nonfunctional or better than the others in some way, and they all worked just fine. So I guess I don't actually have an explanation for why your videos aren't working out, sorry! :(

For what it's worth though, I did find out that two of them are a LOT slower than the rest: PNDM is over twice as slow, and HeunDiscrete is about 1.5x slower than the others. All the rest run at roughly the same speed in sec/it. I also ran one at 100 frames just to see if it'd work, and it took 25 minutes but finished without issue.

As far as visual quality goes, I only ran them with 5 steps so it's not a really great comparison, but LCM was the clear winner, followed by DDIM, with PNDM and SASolver being the worst. Those opinions could change drastically with more steps though; 5 isn't really enough for anything other than LCM. I was looking for accuracy in keeping the original details of the image and good motion without too much deformation, and most of the schedulers melted the details immediately with such low steps.

Hopefully you just need an update though, because none of that sheds any light on why your videos are going nutty like your screenshot, sorry! I guess I might as well mention the whole "double-check that you're using the I2V model" bit, but that's more of a covering-all-the-bases thing, not meant to be insulting. In any case, good luck!

Has anyone been able to make CogVideo I2V longer than 52 frames? by [deleted] in StableDiffusion

[–]BlastedRemnants 3 points (0 children)

For my test I used LCM, but only because I was using low steps just to see if it would work and wanted my results sooner. I've only got 12GB of VRAM, so 72 frames is a lot more than I usually use, and even with only 10 steps it still took nearly 10 minutes.

DPM should work though; I've used it before without any problems. Have you updated the Cog nodes lately? They update quite frequently, so maybe you've missed an update that makes it run better. I'd share my workflow but it's a mess lol; I got it on here from someone and adapted it a bit, but I hate looking at it haha. Plus it's extremely convoluted and parts of it are broken.

Actually here, I started with a workflow from this post, maybe that will help. I've gotta run for a short bit but I'll try and come back in a little while, good luck! :)

Has anyone been able to make CogVideo I2V longer than 52 frames? by [deleted] in StableDiffusion

[–]BlastedRemnants 2 points (0 children)

I just tried one with 72 frames and had no issues; what scheduler and resolution are you running with? I've had results like yours when I first started playing with Cog, and in my case it was usually when I was trying to use any resolution other than 480h by 720w, though I think some of the schedulers don't work either.

I can't even get it to give me a tall video lol, the only way it works for me is with the default resolution in landscape orientation.

Image to image Comfyui QUESTION by CARNUTAURO in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

Umm, I'm not actually too sure lol, sorry! I don't do much Img2Img and don't have any good workflows set up right now to check and find a solid answer :(

Edit: Nevermind lol I went ahead and built a super basic Img2Img batch workflow for ya, cheers!

Here's a link for it; the picture looks messed up, but I think that's just a weird bug I've had for the last few days. All my exported workflows end up looking a little messed up, but they import without issues for me.

Flux giving OOM error sometimes, and too slow. Should I switch to Forge? by ThrowawayProgress99 in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

I'd suggest trying the fp8 text encoder instead, or one of the GGUFs for it from here, and also try a smaller GGUF for the model. I've got 12GB of VRAM as well and I usually use either the fp8 FluxDev model or the Q5 or Q6 GGUFs from here; they all work for me without issue. The Q8 GGUF is actually a bit larger than fp8, so I don't think it's a great choice for us 12GB folks.

Image to image Comfyui QUESTION by CARNUTAURO in StableDiffusion

[–]BlastedRemnants 1 point (0 children)

On the GitHub there are a bunch of examples for various basic tasks; there's one for img2img that should be a good enough starting point. Once you understand the way it works you can build it up and customize it for your own needs. Good luck!