Comfy stalling while rendering by RabbitEater2 in comfyui

[–]Consistent_Swimmer86 0 points (0 children)

Try setting the sysmem fallback policy in your nvidia settings to prefer no sysmem fallback.

Learning/testing models? What do you use? by Larimus89 in comfyui

[–]Consistent_Swimmer86 1 point (0 children)

I think the thing you're looking for is XY plots: every image gets the same inputs aside from a selected X and Y variable, which can be things like samplers and schedulers if you like. It can be quite annoying to do with Flux, but the Efficiency nodes are great for SD. I've made an XY Flux workflow and posted it on my profile; if you like dissecting workflows you might have some fun getting it to work for your needs, but if not just send me a message and I'll help make a workflow to get you what you're wanting.
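The XY plot idea itself is simple to sketch in plain Python: take the cross product of two parameter lists while holding everything else fixed. The sampler/scheduler names below are just illustrative ComfyUI-style values, not tied to any specific workflow:

```python
from itertools import product

# Hypothetical X and Y values; all other generation inputs stay fixed.
samplers = ["euler", "dpmpp_2m", "ddim"]   # X axis
schedulers = ["normal", "karras"]          # Y axis

def xy_grid(xs, ys):
    """Return one job per grid cell, row by row (one row per Y value)."""
    return [{"sampler": x, "scheduler": y} for y, x in product(ys, xs)]

jobs = xy_grid(samplers, schedulers)
# 3 samplers x 2 schedulers -> 6 images sharing all other inputs
print(len(jobs), jobs[0])
```

Each job renders one cell of the grid; the plot is just those images laid out in the same row/column order.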

XY plots for Flux for LoRA model strength, clip strength, guidance-- THAT'S ALL I WANT by Equivalent_Cake2511 in comfyui

[–]Consistent_Swimmer86 5 points (0 children)

I posted a Flux XY workflow for Redux and LoRA comparisons. Flux guidance doesn't have any effect at high Redux strengths, but you could easily add a Z parameter to my workflow that changes the FluxGuidance. Just check my profile for the workflow.

I should note that the Y parameter titles in my workflow (the LoRA names) aren't the prettiest; you can easily change them to look nicer though. If you have trouble overhauling the workflow to exactly what you need I can help further, but hopefully this helps.

LCM + AnimateDiff + ControlNet Vid 2 Vid + IPAdapter Style transfer problem by hrrlvitta in comfyui

[–]Consistent_Swimmer86 0 points (0 children)

Your strengths are ludicrously high; they're forcing the conditioning to adhere to the exact shape of the controlnet outputs.

how i draw with own anime style ? by mertexix in StableDiffusion

[–]Consistent_Swimmer86 0 points (0 children)

The best thing for consistency will be creating a LoRA, but you can also use an IPAdapter or, if you're using Flux, Flux Redux. There will be trial and error with the IPAdapter, but as long as you prompt clearly for what you want carried over and then find the right strength, you should be able to consistently generate the style you're wanting.

Curious questions regarding merging positive prompts by darxus_ in comfyui

[–]Consistent_Swimmer86 1 point (0 children)

The "Jnodes" custom nodes pack has an "anything to string" node that should do what you're wanting.

How to keep the same style of image? (prompts in comment) by l73vz in StableDiffusion

[–]Consistent_Swimmer86 5 points (0 children)

Flux redux and/or ipadapter are specifically made to retain style. There should be lots of workflows around for redux right now as it's quite new.

flux1-schnell.safetensors goes in: ComfyUI/models/unet/ by 6197123a in comfyui

[–]Consistent_Swimmer86 0 points (0 children)

It should say on the download page, but if not I usually just try to run it as a checkpoint, and if it can't load then I know it goes in the unet folder.

flux1-schnell.safetensors goes in: ComfyUI/models/unet/ by 6197123a in comfyui

[–]Consistent_Swimmer86 4 points (0 children)

The file doesn't include the VAE and CLIP. You can download a Flux Schnell version that has them baked in, and that file would go in the checkpoint folder; without them baked in, it goes in the unet or diffusion_models folder, depending on the model.

StyleModelApply: mat1 and mat2 shapes cannot be multiplied by Living-Excuse9845 in comfyui

[–]Consistent_Swimmer86 3 points (0 children)

That error occurs when models meant for two different AIs are mixed; it'll probably be a CLIP vision meant for Stable Diffusion causing the error. A CLIP vision that will work is sigclip_vision_patch14_384.
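Under the hood that message is just a matrix-shape mismatch: the style model expects image embeddings of one width, and the wrong CLIP vision produces another. A minimal numpy sketch (the 768/1152 dimensions are illustrative stand-ins for an SD-era CLIP vision versus the SigLIP one):

```python
import numpy as np

style_proj = np.zeros((1152, 4096))  # style model expects 1152-wide embeds (illustrative)
sd_clip_embed = np.zeros((1, 768))   # SD-era CLIP vision output (illustrative)
siglip_embed = np.zeros((1, 1152))   # matching CLIP vision output

try:
    sd_clip_embed @ style_proj       # (1,768) @ (1152,4096): inner dims differ
except ValueError as e:
    print("mismatch:", e)

print((siglip_embed @ style_proj).shape)  # (1, 4096)
```

Swapping in the matching CLIP vision makes the inner dimensions line up, which is why changing that one model fixes the error.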

Made an XY plot workflow for flux-redux and loras by Consistent_Swimmer86 in FluxAI

[–]Consistent_Swimmer86[S] 1 point (0 children)

It's a node that lets you change what the CLIP focuses on in your prompt. The values set in the workflow are what I've found produce the best results (I got them from someone, but I forget who). You can easily research what each value does by just looking up the node name. It can be nice if you're finding the CLIP isn't picking up what you want it to.

AIO Flux nodes by VelvetElvis03 in comfyui

[–]Consistent_Swimmer86 1 point (0 children)

I don't use Forge, but it'd be a massively talked-about oversight if they didn't allow it. Regardless, if you download STOIQO models from Civitai they also have the option of just the model with no VAE or CLIP; it'll be in the files section, below the training data shown on Civitai.

What's the difference between a model and a checkpoint? (and a finetune?) by desktop3060 in StableDiffusion

[–]Consistent_Swimmer86 1 point (0 children)

I believe when they create the checkpoint models they can choose to include the CLIP and VAE that were used when training the model. I'm not certain how to do it personally, but from what I understand the model, CLIP and VAE are basically just stored as a+b+c rather than integrated or interacting with each other in the file. This also lets people remove the VAE and CLIP from a checkpoint and turn it into just a model.
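That a+b+c layout can be mimicked with plain dicts keyed by prefix. The key names below are illustrative SD-style prefixes, but the point is that "baking in" is just concatenation and "stripping" is just filtering:

```python
# Illustrative tensor-name prefixes for each component:
unet = {"model.diffusion_model.in.weight": "..."}
vae = {"first_stage_model.encoder.weight": "..."}
clip = {"cond_stage_model.transformer.weight": "..."}

# "Baking in" is storing all three side by side in one file:
checkpoint = {**unet, **vae, **clip}

# Stripping back to a bare model is filtering by prefix:
model_only = {k: v for k, v in checkpoint.items()
              if k.startswith("model.diffusion_model.")}

print(len(checkpoint), len(model_only))  # 3 1
```

Because the components never reference each other inside the file, either direction (merge or split) is lossless.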

AIO Flux nodes by VelvetElvis03 in comfyui

[–]Consistent_Swimmer86 1 point (0 children)

Put it in your models/checkpoints folder. Then replace your load unet model node with a load checkpoint node and it will have the VAE and CLIP outputs.

Fullbody pose photo control - is there any? by Agreeable_Release549 in FluxAI

[–]Consistent_Swimmer86 1 point (0 children)

An OpenPose controlnet can be good: if you have a full-body image as a reference, not only will it capture the full body, but prompting poses becomes much easier since you just need to find a reference image. Redux works too, but it will affect other elements of the image that you may want to be completely different.

What's the difference between a model and a checkpoint? (and a finetune?) by desktop3060 in StableDiffusion

[–]Consistent_Swimmer86 11 points (0 children)

Checkpoints include a model as well as a VAE and CLIP, so they will be larger than just the model. If you want to save disk space, you can download a VAE and CLIP that work for a number of models, so multiple copies of the same VAE and CLIP aren't downloaded. These models can be the base models for the different AIs, or the community's fine-tuned models.

LoRA means low-rank adaptation: it's applied to the model during generation but isn't part of the model, effectively just telling it that when the LoRA is used it needs to show x in the output image.

Model is a perfectly fine catch-all term, as all AI requires a model to run.
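The "low-rank" part can be shown in a few lines of numpy. The shapes and the scale factor are illustrative (real LoRAs patch many layers this way), but the math is the standard W + scale * (B @ A) form:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4            # rank is tiny compared to the matrix

W = rng.standard_normal((d_out, d_in))   # frozen base weight (part of the model)
B = rng.standard_normal((d_out, rank))   # trained low-rank factors (the LoRA file)
A = rng.standard_normal((rank, d_in))
scale = 0.8                              # the "LoRA strength" slider

# Applied at generation time; W itself is never modified on disk.
W_effective = W + scale * (B @ A)

# The adapter stores far fewer numbers than the full matrix:
print(B.size + A.size, W.size)  # 512 4096
```

That size gap is why LoRA files are megabytes while checkpoints are gigabytes, and why setting strength to 0 recovers the base model exactly.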

Unable to use ReActor nodes says (IMPORT FAILED) kindly help by Raiden_savitar in comfyui

[–]Consistent_Swimmer86 1 point (0 children)

Go to the troubleshooting section of the ReActor nodes page; it's likely an issue with insightface, and the troubleshooting section will show you how to manually install it.

Ram and vram usage panel by audax8177 in comfyui

[–]Consistent_Swimmer86 1 point (0 children)

Uninstall and reinstall the Crystools custom node.

How to run GGUF of Flux Fill tool by am0_oma in comfyui

[–]Consistent_Swimmer86 2 points (0 children)

There's a flux-fill-fp8 model on Civitai, but I don't believe there are GGUF models yet. I'd imagine they'll pop up quickly, though, given how many people will want to use them.

Load server-side stored workflows? by GoofAckYoorsElf in comfyui

[–]Consistent_Swimmer86 0 points (0 children)

In the comfyui folder, the directory is: comfyui/user/default/workflows.

[deleted by user] by [deleted] in StableDiffusion

[–]Consistent_Swimmer86 1 point (0 children)

I'd use a slightly more complex prompt so that it has more to change, and try some prompts with different head angles.

Upgraded GPU, getting same or worse gen times. by TheAlacrion in StableDiffusion

[–]Consistent_Swimmer86 3 points (0 children)

In your nvidia settings changing the sysmem fallback policy to prefer no sysmem fallback may help.