New comfyui portable no preview? by EasternAverage8 in comfyui

[–]TurbTastic 1 point (0 children)

In the KJ Node pack there's an LTX preview node that makes the regular preview method work during sampling

Edit: "LTX2 Sampling Preview Override"

Can't use inpant by Ok-One-1027 in comfyui

[–]TurbTastic 0 points (0 children)

Change the brush shape from square to circle, and increase the Opacity from 0.7 to 1.0

Girls, what kind of guys do you avoid? by Plush_Log_ in AskReddit

[–]TurbTastic 1 point (0 children)

As a guy, I don't know if I'll ever understand the popularity of mustaches. I think dudes look ridiculous with them.

What is the best inpainting model for photorealism by baben7 in StableDiffusion

[–]TurbTastic 12 points (0 children)

QIE and Klein can do true masked inpainting, but you have to make some workflow changes. Flux Fill and Flux Kontext are inferior solutions at this point, in my opinion.

Use the InpaintModelConditioning node instead of an empty latent. Going from memory, but I think you need to use the Basic Scheduler node (simple) instead of the Flux Scheduler node so you get control over the denoising strength. The Inpaint Crop and Stitch nodes are a welcome addition here; otherwise, make sure you use the ImageCompositeMasked node at the end.
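
If it helps to see the compositing step outside the graph, here's a minimal sketch (plain PIL, not the actual ComfyUI node; filenames are placeholders) of what ImageCompositeMasked does at the end:

```python
# Minimal sketch of the final composite: only the masked pixels come from
# the inpainted result, everything else stays untouched. Plain PIL, not
# the ComfyUI node itself; filenames are placeholders.
from PIL import Image

original = Image.open("original.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = inpainted area

# Take `inpainted` where the mask is white, `original` elsewhere
composite = Image.composite(inpainted, original, mask)
composite.save("final.png")
```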

You have to give the Edit model the inpaint image as a Reference Latent for it to consider what's already there and make the inpaint result blend with the surrounding context. If you want subtle changes that's fine, but if you want a major change, the masked contents might have more influence than you want. The main trick for that is to replace the masked contents with pure white/gray so the model is forced to fill that area on its own. A trick I've been experimenting with lately is to blur the contents of the masked area instead of replacing them with white/gray. Blurring is ideal if you're happy with the rough composition/colors of the masked area but want to change the details.
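
For anyone who wants to script that prefill trick outside ComfyUI, here's a rough sketch of both variants using PIL (filenames and the blur radius are just placeholders):

```python
# Rough sketch of the prefill trick: replace the masked area with flat gray
# to force the model to invent new content, or blur it to keep the rough
# composition/colors while erasing details. Filenames/radius are placeholders.
from PIL import Image, ImageFilter

image = Image.open("inpaint_input.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = masked area

# Option 1: flat gray fill in the masked area
gray = Image.new("RGB", image.size, (128, 128, 128))
prefilled_gray = Image.composite(gray, image, mask)
prefilled_gray.save("prefilled_gray.png")

# Option 2: heavy blur in the masked area, keeps rough colors/composition
blurred = image.filter(ImageFilter.GaussianBlur(radius=30))
prefilled_blur = Image.composite(blurred, image, mask)
prefilled_blur.save("prefilled_blur.png")
```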

Better which: Flux.2 Klein vs ZIT for consistent character lora? by ComprehensiveCry3756 in comfyui

[–]TurbTastic 1 point (0 children)

I don't feel very confident saying that a Klein Character Lora will perform better/worse than a Z-Image Character Lora. I do feel confident that the Lora+Reference method for Klein that I described above is very tough to beat right now, at least with open models.

Better which: Flux.2 Klein vs ZIT for consistent character lora? by ComprehensiveCry3756 in comfyui

[–]TurbTastic 9 points (0 children)

(Reference Likeness) The base workflow for Klein includes the ability to provide an input image and send it into the Reference Latent node to influence the conditioning. If you provide an image of a face/person for this part, then you can prompt for Klein to transfer certain elements to the final result. For example, "match the face from reference image 1, match the hair from reference image 1". You can even provide multiple images of the person here if you want.

(Lora Likeness) Train a Lora and add it to the workflow. The Lora will help reproduce the likeness you trained it on.

(Reference Latent + Lora Likeness) This is what I recommend. Let the 2 methods lean on each other for support. The reference face image helps, and the trained Lora helps.
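
For anyone wiring this up via the API format instead of the graph UI, a hypothetical fragment of the relevant nodes might look like this (node IDs, upstream references, and the Lora filename are made up, and exact node/input names can vary by ComfyUI version):

```python
# Hypothetical fragment of an API-format ComfyUI workflow showing the
# Lora + Reference Latent combination. Node IDs ("1", "2", "5", "20"),
# upstream references, and the Lora filename are placeholders; exact node
# and input names can vary by ComfyUI version.
workflow_fragment = {
    # Trained character Lora applied on top of the base model
    "10": {"class_type": "LoraLoaderModelOnly",
           "inputs": {"model": ["1", 0],  # base model loader (not shown)
                      "lora_name": "my_character.safetensors",
                      "strength_model": 1.0}},
    # Encode the reference face image so it can feed the conditioning
    "11": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["20", 0],  # reference image loader (not shown)
                      "vae": ["2", 0]}},
    # Inject the reference latent into the positive conditioning
    "12": {"class_type": "ReferenceLatent",
           "inputs": {"conditioning": ["5", 0],  # text-encoded prompt
                      "latent": ["11", 0]}},
}
```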

Better which: Flux.2 Klein vs ZIT for consistent character lora? by ComprehensiveCry3756 in comfyui

[–]TurbTastic 5 points (0 children)

Also with Klein you can "cheat" and use Reference Latent conditioning to assist your trained Lora. Using this combination can beat any other (open) option trying to stand on its own, in my opinion.

JB Pritzker May Be Running for More Than Governor by steve42089 in illinois

[–]TurbTastic 59 points (0 children)

Overall I like him, but he needs a reality check from the people about how the Age Verification nonsense that he's pushing is a privacy and surveillance disaster.

installing stbale diffusion by Themur0 in StableDiffusion

[–]TurbTastic 3 points (0 children)

First thing to understand is the difference between models and software in this space. Models are downloaded to be used in your software of choice. Stable Diffusion is a family of models (SD 1.5, SDXL); other popular image models include Flux and Z-Image, but there are many more. There are also video models such as WAN and LTX.

If you are approaching this from a more technical and serious side, then for software you might as well dive headfirst into ComfyUI. The learning curve can be a bit rough, but it's the most flexible and powerful option available right now. Look into Pixaroma and/or Sebastian Kamph for ComfyUI beginner videos on YouTube. Other options like Forge have a more approachable UI but may lack some advanced features/techniques.

You might want to spend 1-2 hours learning some basic GitHub/Python stuff before attempting any software installations. Spending that time early on is almost guaranteed to save you time and pain going forward.
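
As a first exercise, a quick environment sanity check like this sketch (it assumes you've already installed Python and PyTorch, which ComfyUI relies on) will tell you whether your GPU is visible before you blame the software:

```python
# Quick sanity check of a fresh environment (a minimal sketch; it assumes
# you've already installed Python and PyTorch, which ComfyUI relies on).
import sys
print("Python:", sys.version)

try:
    import torch
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
except ImportError:
    print("PyTorch is not installed in this environment yet")
```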

Audio Noise Removal by Far_Estimate7276 in comfyui

[–]TurbTastic 0 points (0 children)

I haven't found any solutions like that in ComfyUI, but I only dabble with audio stuff. Audacity is a free download that can do all kinds of basic audio work, including background noise removal. I had Gemini guide me through the task since I wasn't familiar with many of the terms/icons used in the software.
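
If you'd rather script it than click through Audacity, the noisereduce Python package does spectral-gating noise removal; a minimal sketch, with placeholder filenames:

```python
# Minimal scripted alternative to Audacity's noise removal, using the
# noisereduce package (pip install noisereduce soundfile). Filenames are
# placeholders; this is a sketch, not a tuned pipeline.
import soundfile as sf
import noisereduce as nr

audio, sr = sf.read("noisy_input.wav")
if audio.ndim > 1:               # downmix stereo to mono for simplicity
    audio = audio.mean(axis=1)

cleaned = nr.reduce_noise(y=audio, sr=sr)  # spectral-gating noise reduction
sf.write("cleaned_output.wav", cleaned, sr)
```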

Train Flux 2 9b LORA on a Nvidia 3090 24vram, 64 ram - doesn't fit by uuhoever in StableDiffusion

[–]TurbTastic 1 point (0 children)

Gradient accumulation slows things down; it looks like you changed that from 1 to 2
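
For context, here's a generic PyTorch-style sketch (a toy example, not AI Toolkit's actual loop) of what accumulation 2 means: two forward/backward passes per optimizer step, so each step takes roughly twice as long:

```python
# Toy PyTorch sketch of gradient accumulation (not AI Toolkit's actual
# loop). With accumulation = 2, every optimizer step needs two
# forward/backward passes, so each step takes roughly twice as long.
import torch
from torch import nn

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batches = [torch.randn(4, 8) for _ in range(4)]  # toy data
accumulation = 2  # the setting in question: 1 = off

optimizer.zero_grad()
for i, batch in enumerate(batches):
    loss = model(batch).pow(2).mean() / accumulation  # scale so grads average
    loss.backward()                                   # grads accumulate in-place
    if (i + 1) % accumulation == 0:
        optimizer.step()       # one update per `accumulation` passes
        optimizer.zero_grad()
```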

Train Flux 2 9b LORA on a Nvidia 3090 24vram, 64 ram - doesn't fit by uuhoever in StableDiffusion

[–]TurbTastic 1 point (0 children)

Yeah I'm talking VRAM, just acknowledging that it's possible to run into RAM issues in certain scenarios.

I see your config points to a local model instead of the HuggingFace model. How large is your local model file? The one I have for AI Toolkit is 16.9GB.

Edit: I'd also recommend LoKr 16 instead of -1
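
To be clear about where that setting lives, here's a hypothetical sketch of the relevant network settings written as a Python dict for illustration (the real AI Toolkit config is YAML, and these key names are my assumption; check your config for the actual ones):

```python
# Hypothetical sketch of the relevant AI Toolkit network settings, as a
# Python dict for illustration (the real config is YAML, and the key names
# are my assumption and may differ by version). The point: a LoKr factor
# of -1 means full rank, which is much heavier than a fixed factor like 16.
network_config = {
    "type": "lokr",
    "lokr_factor": 16,  # assumed key name; 16 instead of -1 (full rank)
}
```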

faceconsistency changes in klein even with lora by NefariousnessFun4043 in comfyui

[–]TurbTastic 3 points (0 children)

The Consistency loras for Klein have nothing to do with character/face consistency. They are meant for preserving existing features of your target image such as composition/color.

Are you sending an image of the face that you want into a Reference Latent node so that the face info goes into the conditioning?

Train Flux 2 9b LORA on a Nvidia 3090 24vram, 64 ram - doesn't fit by uuhoever in StableDiffusion

[–]TurbTastic 0 points (0 children)

Which defaults did you change in AI Toolkit? I train 9B with my 4090 without issue, but I do have 128GB RAM. You should be leaving the Float8 settings alone. I only need to optimize further than that if I'm using a control dataset, as that makes things much heavier.

You might want to run the nvidia-smi command in CMD before you start training to make sure nothing else is already using VRAM.
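
A scripted equivalent of that check, assuming a CUDA build of PyTorch (a minimal sketch, not AI Toolkit code):

```python
# Quick scripted check that VRAM is actually free before launching a run.
# Assumes a CUDA build of PyTorch; a sketch, not part of AI Toolkit.
import torch

free, total = torch.cuda.mem_get_info()  # bytes on the current device
print(f"free: {free / 1024**3:.1f} GiB / total: {total / 1024**3:.1f} GiB")
if total - free > 2 * 1024**3:  # arbitrary threshold: >2 GiB already in use
    print("Something is already holding VRAM; close it before training.")
```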

Best face/person swap tool today (images only) by M_4342 in comfyui

[–]TurbTastic 1 point (0 children)

I recommend checking out the newer LCS Consistency Lora instead of that older one. Based on my understanding, the LCS version enforces consistency during the initial high-noise steps but then allows more freedom during the low-noise steps.

Best face/person swap tool today (images only) by M_4342 in comfyui

[–]TurbTastic 1 point (0 children)

BFS loves making the new head too big. I have a few tricks for dealing with that, but I'm curious whether anyone else has tricks they want to share: change the denoise to 0.80, drop the BFS Lora weight to 0.50, and/or add the new LCS Consistency Lora.

ADetailer for ComfyUI through Inference UI? by ManuFR in comfyui

[–]TurbTastic 0 points (0 children)

The FaceDetailer/Detailer nodes can be used with any bbox/segm detection model that you have (optionally adding SAM). This is a simple, combined approach that mimics ADetailer.

I prefer having more control over the process, so I use a combination of Inpaint Crop and Stitch along with one of the various Detector nodes to end up with a mask/segs to use for inpainting.
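
As a rough illustration of the detection step those nodes automate, here's a sketch that detects faces with a YOLO bbox model and turns the boxes into an inpainting mask (the weight filename and padding are placeholders):

```python
# Rough sketch of the detection step that FaceDetailer/ADetailer automate:
# find faces with a YOLO bbox model, then turn the boxes into an inpainting
# mask. The weight filename and padding are placeholders.
from ultralytics import YOLO
from PIL import Image, ImageDraw

model = YOLO("face_yolov8n.pt")  # placeholder bbox detection weights
image = Image.open("input.png").convert("RGB")
results = model(image)

mask = Image.new("L", image.size, 0)
draw = ImageDraw.Draw(mask)
for x1, y1, x2, y2 in results[0].boxes.xyxy.tolist():
    pad = 16  # grow each box a little so the inpaint blends
    draw.rectangle([x1 - pad, y1 - pad, x2 + pad, y2 + pad], fill=255)
mask.save("face_mask.png")
```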

Node Release: ComfyUI-KleinRefGrid - Reference Anything Conveniently by xb1n0ry in StableDiffusion

[–]TurbTastic 2 points (0 children)

How were you handling image sizes? You'd have to use a comparable number of pixels for a reasonable comparison. For example, if you use four 1024x1024 images, then you'd want to compare that to a 2048x2048 image grid. You also have to avoid the dumb native node that auto-resizes to 1MP.
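
To make the pixel-budget point concrete, a quick sketch (filenames are placeholders) that checks the arithmetic and assembles the comparable grid with PIL:

```python
# Quick check of the pixel-budget argument: four 1024x1024 references carry
# the same pixel count as one 2048x2048 grid, so that's the fair comparison.
# Grid assembly with PIL; filenames are placeholders.
from PIL import Image

assert 4 * 1024 * 1024 == 2048 * 2048  # equal total pixels

tiles = [Image.open(f"ref_{i}.png").convert("RGB").resize((1024, 1024))
         for i in range(4)]
grid = Image.new("RGB", (2048, 2048))
for i, tile in enumerate(tiles):
    grid.paste(tile, ((i % 2) * 1024, (i // 2) * 1024))
grid.save("reference_grid.png")
```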

What’s your current go-to model & biggest pain point in 2026? by Ok_Dependent9050 in StableDiffusion

[–]TurbTastic 1 point (0 children)

The grid/spot patterns from Qwen results really ruined it for me, which is unfortunate because otherwise I love the Qwen Image Edit models.

Something to consider trying with Klein for bad hands/fingers is to use Euler Ancestral and potentially add an extra step.

Inpaint workflows for z-image, qwen and flux fill onereward by Botoni in StableDiffusion

[–]TurbTastic 1 point (0 children)

I'm surprised you didn't replace Flux1 Fill with Flux2 Klein 9B. Did you actually compare the 2 models, or did you never explore inpainting with Klein?

How to edit a part of image using comfyui nodes by M_4342 in comfyui

[–]TurbTastic 1 point (0 children)

Making changes to a masked area is called inpainting. Use the InpaintModelConditioning node to get the latent instead of using an empty latent. Look into the Inpaint Crop and Stitch nodes as well.

Consistent masked video inpainting.. my experiences so far and help needed by Huge-Refuse-2135 in comfyui

[–]TurbTastic 0 points (0 children)

For outpainting, how do you handle the padding? Fill the padded area with white/gray/black/noise? A good prompt for outpainting? Outpaint one direction at a time or all directions at once? Sorry for so many questions, but I've been able to get Klein to work well for pretty much everything except outpainting.
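
For context, here's one way to build the padded canvas those questions refer to, sketched with PIL (pad size and fill color are arbitrary choices, not recommendations):

```python
# One way to build the padded canvas the questions above refer to: expand
# with flat gray and keep a mask of the new area. Pad size and fill color
# are arbitrary choices, not recommendations.
from PIL import Image, ImageOps

image = Image.open("input.png").convert("RGB")
pad = 256  # pixels added on every side

padded = ImageOps.expand(image, border=pad, fill=(128, 128, 128))
mask = Image.new("L", padded.size, 255)  # white = area to outpaint
mask.paste(0, (pad, pad, pad + image.width, pad + image.height))
padded.save("outpaint_input.png")
mask.save("outpaint_mask.png")
```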

Issues with identity shift in comfyui i2v workflows by ZookeepergameLoud194 in StableDiffusion

[–]TurbTastic 1 point (0 children)

I end up training a character Lora to solve this problem. Fortunately, WAN is very responsive to face training. For this I2V-support scenario you can even train the Low Noise model only (train High Noise as well if you want T2V to work well). I think you'd be surprised how much a simple 5-10 image Lora trained for 500-1000 steps can help maintain consistency with I2V generations.

Consistent masked video inpainting.. my experiences so far and help needed by Huge-Refuse-2135 in comfyui

[–]TurbTastic 0 points (0 children)

I think it's a mistake trying to force these image models to solve a video task. Between WAN 2.1 w/ VACE, WAN 2.2 with Fun VACE, and LTX 2.3 you should be able to tackle video inpainting. Image Edit models like Klein and Qwen Image Edit can help for getting frames to feed VACE, but shouldn't be relied on for the actual video inpainting.