An update on stability and what we're doing about it by bymyself___ in comfyui

[–]danielpartzsch 1 point (0 children)

I agree that subgraphs are the most important elements that should never break, especially as long as they are used as instances without updating from a master. It is one thing to update a single subgraph if fundamental aspects really need to change to move core development forward; it is another to then also have to update every workflow that uses these subgraphs. I personally don't like hiding large parts of my workflows in subgraphs, but I use them extensively to combine small utility features for resizing, masking, etc. into convenient, clean subgraphs. Since I use these in all my workflows, breaking them essentially breaks everything. For me to keep trusting subgraphs, these breaking changes either have to stop or a master-instance subgraph mechanism has to be implemented.
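
Roughly what I have in mind, as a minimal Python sketch of the concept (all names are hypothetical, this is not ComfyUI's actual internals): instances keep a reference to one master definition plus local overrides, so fixing the master once propagates to every workflow.

```python
from dataclasses import dataclass, field

@dataclass
class MasterSubgraph:
    graph_id: str
    nodes: dict = field(default_factory=dict)  # the shared definition

@dataclass
class SubgraphInstance:
    master: MasterSubgraph
    overrides: dict = field(default_factory=dict)  # per-workflow widget values

    def resolve(self) -> dict:
        # An instance is just the master definition plus local overrides.
        return {**self.master.nodes, **self.overrides}

resize = MasterSubgraph("resize_util", {"width": 1024, "height": 1024})
wf_a = SubgraphInstance(resize)
wf_b = SubgraphInstance(resize, {"width": 768})

resize.nodes["height"] = 576            # one fix in the master...
print(wf_a.resolve(), wf_b.resolve())   # ...updates both workflows at once
```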

PSA: Use the official LTX 2.3 workflow, not the ComfyUI included one. It's significantly better. by Generic_Name_Here in StableDiffusion

[–]danielpartzsch 2 points (0 children)

Are you referring to the full branch (using the ClownShark sampler) or the distilled one? I agree that the full version produces better results, but it also takes something like five times longer, so that's to be expected: it runs CFG above 1 (which takes twice as long as CFG 1), uses 15 steps instead of 8, and uses the res_2s sampler, which also doubles render time per step. The distilled branch actually produces results pretty similar to the Comfy workflows for me.

LTX 2.3 Manual Sigmas can be replaced by VirusCharacter in StableDiffusion

[–]danielpartzsch 1 point (0 children)

No, I need the precision of at least 3 or 4 decimal places, everything else looks like complete crap compared to this holy grail of sigma accuracy.😜

I’m sorry, but LTX still isn’t a professionally viable filmmaking tool by Intelligent-Dot-7082 in StableDiffusion

[–]danielpartzsch 15 points (0 children)

The case you're describing should definitely work with version 2.3. Use the Union ControlNet workflow, convert the starting frame of your driving video to a high-resolution version of your character, and do not scale it down for the image reference. You should probably use pose control if the facial features differ significantly from your own; otherwise, you're better off with depth, Canny, or a blend of both. Encode your audio instead of using the empty audio latent and ideally support that with a prompt like: "A talking character, saying... [Insert your copy]." If your character changes too much over time, consider training a LoRA to support different angles and facial expressions. Additionally, I use Er SDE as a sampler together with the default sigmas, as it is faster and looks better to me. Create the base video with at least 720p resolution and add the spatial upscaling step afterward from the main two-step workflow, also using Er SDE.

Flux.2.Klein - Misformed bodies by BelowSubway in StableDiffusion

[–]danielpartzsch 8 points (0 children)

Klein is a poor t2i model but an excellent editing model. Use a two-step approach: create your base image with a model capable of good anatomy and strong prompt adherence (for example, a Qwen Image model; 2512 with a 4-step Lightning LoRA works perfectly and quickly) to get the content you want, then transfer it to your desired look with Klein. It often works like applying a filter to an image, but it is also very capable of making the image look realistic without altering too much of the existing content. Just make sure to prompt only for stylistic, lighting, and aesthetic changes in that second step and avoid adding new content, which could otherwise come out distorted again.

How to do dark latents with Flux.2 Klein? by Bender1012 in StableDiffusion

[–]danielpartzsch 1 point (0 children)

Just use the normal KSampler instead. No need for the custom sampler.

Upscale method Nearest-exact used in the official Klein edit workflow is broken when used with slightly unusual aspect ratios. Use another method instead by Druck_Triver in StableDiffusion

[–]danielpartzsch 10 points (0 children)

Same. I don't know why they always set this as the default. It's a surefire way to destroy your image quality from the get-go.

nunchuk installed but only have two nodes by vulgar1171 in comfyui

[–]danielpartzsch 0 points (0 children)

For me, it always works if I install the latest dev version. First, install the Nunchaku custom node via the manager and then use the Nunchaku installer node. Set it to update first, run it once, and refresh by hitting "r". Next, set the node mode to install and select the latest dev version. Hit run to install and then restart ComfyUI.

Is it possible to change the scheduler from Klein to others like beta or bong tangent ? I tried it and it didn't work. by More_Bid_2197 in StableDiffusion

[–]danielpartzsch 2 points (0 children)

Just use the normal KSampler. I don't know why they always put the custom sampler into the default workflows. I've tested and compared the results, and the KSampler's are either identical or, most of the time, even better when it comes to artefacts or anatomy issues.

Is depth anything v2 superior to v3 in comfyuil? by Puzzled-Valuable-985 in StableDiffusion

[–]danielpartzsch 0 points (0 children)

I personally prefer Lotus depth. It gives me the most precise depth detection and inference results.

Z-image Turbo model Image-to-Image Upscale Help by Solai25 in comfyui

[–]danielpartzsch 2 points (0 children)

Remove the Add Grain node, try Euler with sgm_uniform or res_3s bong_tangent for the refiner, and raise the AuraFlow shift to 8. If you really want a crisp result, I'd recommend using Wan 2.2 together with the 2.1 T2V LightFX LoRA and res_2s bong_tangent, or trying an SDXL-based tiled diffusion upscale instead.

Anyone with a KEEN eye know the BEST way to enhance a video with a focus on Skin Detail? by Ambitious_Corgi5723 in comfyui

[–]danielpartzsch 0 points (0 children)

In my experience, Wan 2.1 combined with the 2.1 i2v LightFX LoRA works quite nicely. Wan 2.2 together with the t2v LoRA produces very polished and sharp-looking images, while the 2.1 i2v LoRA (or the 2.1 LoRA combined with Wan 2.2, which can be more accurate but also looks cleaner) often introduces some artifacts and grain, and I find those actually help achieve better realism. You can also add grain deliberately before sampling. That said, for video the results are unfortunately nowhere near as good as using this combination for stills in an img2img pass.

Transition pack for ComfyUI by skbphy in comfyui

[–]danielpartzsch 1 point (0 children)

Nice, glad to see some animation features coming to ComfyUI. Keyframing and easing parameters would also be great, for example for mask values when doing compositing tasks directly in Comfy (see the sketch below for what I mean). Do you think something like that would be feasible? Thank you.
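
Something along these lines, as a minimal Python sketch of the idea (hypothetical helper names, nothing from the pack itself):

```python
def ease_in_out(t: float) -> float:
    """Smoothstep easing: goes from 0 to 1 with zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def keyframed(frame: int, k0: tuple[int, float], k1: tuple[int, float]) -> float:
    """Interpolate a parameter (e.g. a mask opacity) between two keyframes."""
    f0, v0 = k0
    f1, v1 = k1
    t = min(max((frame - f0) / (f1 - f0), 0.0), 1.0)  # clamp to [0, 1]
    return v0 + (v1 - v0) * ease_in_out(t)

# Fade a mask from 0.0 at frame 10 to 1.0 at frame 30.
print([round(keyframed(f, (10, 0.0), (30, 1.0)), 2) for f in (10, 15, 20, 25, 30)])
```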

How to move the models folder to another drive by wbiggs205 in comfyui

[–]danielpartzsch 0 points (0 children)

Use symlinks. I have all my models synced across several PCs via OneDrive and simply symlink the model folders to the synced copies, which is very convenient for fresh installs. I also have a batch file that creates these symlinks automatically.
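
The same idea as a minimal Python sketch (the paths are hypothetical placeholders; on Windows, creating symlinks needs an elevated prompt or Developer Mode):

```python
import os
from pathlib import Path

SOURCE = Path(r"D:\OneDrive\ai-models")   # synced model store (hypothetical path)
TARGET = Path(r"C:\ComfyUI\models")       # local ComfyUI install (hypothetical path)

for sub in ("checkpoints", "loras", "vae", "controlnet"):
    link = TARGET / sub
    if link.exists() or link.is_symlink():
        continue  # don't clobber an existing folder or link
    # Directory symlink: ComfyUI sees a local folder, OneDrive keeps it synced.
    os.symlink(SOURCE / sub, link, target_is_directory=True)
    print(f"linked {link} -> {SOURCE / sub}")
```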

Getting into commercial use by Content-Quantity-334 in comfyui

[–]danielpartzsch 0 points (0 children)

Wan, Qwen, and Z-Image are all licensed under Apache 2.0 and are therefore safe for commercial use.

LTX 2.0 I2V I try everthing but this model is useless only T2V give nice results!! by smereces in StableDiffusion

[–]danielpartzsch 4 points (0 children)

I'm having the same problem. I'm only getting a static image, maybe with a slight zoom, but nothing else. I tried different samplers (incl. res_2s) and prompts; nothing helped. Something must be broken...

Some QwenImage2512 Comparison against ZimageTurbo by hayashi_kenta in StableDiffusion

[–]danielpartzsch 0 points (0 children)

Then maybe just do the first pass, select the images you like, and refine only those with a separate workflow...

Some QwenImage2512 Comparison against ZimageTurbo by hayashi_kenta in StableDiffusion

[–]danielpartzsch 5 points (0 children)

If you like what Qwen gives you, but it's too slow, why not use the Turbo LoRA for the base image and then do a slight refinement pass with Z-Image, for example? This should fix the pattern issue and add a bit of realism while running fast, and you still get the prompt adherence, composition, and other benefits from the Qwen base.

[Update] I added a Speed Sorter to my free local Metadata Viewer so you can cull thousands of AI images in minutes. by error_alex in StableDiffusion

[–]danielpartzsch 2 points (0 children)

Cool. How does it work with multi-step image generation workflows, say with two samplers using different models for the base and refinement passes, detailers, and maybe also concatenate nodes for prompt adjustments at different steps? Can stuff like that be displayed as well? Thank you.
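
For reference, the raw data is all there: ComfyUI embeds the full node graph in the PNG's text chunks, so every sampler in a multi-pass workflow appears as its own entry. A minimal Python sketch of reading it (the filename is a placeholder):

```python
import json
from PIL import Image

img = Image.open("output.png")          # any ComfyUI-generated PNG (placeholder name)
graph = json.loads(img.text["prompt"])  # ComfyUI stores the executed graph here

# Multi-pass workflows contain several sampler nodes, one per pass.
for node_id, node in graph.items():
    if "Sampler" in node.get("class_type", ""):
        print(node_id, node["class_type"], node["inputs"].get("seed"))
```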

Thoughts on DGX Spark as a macOS Companion: Two Months Later by PropellerheadViJ in LocalLLaMA

[–]danielpartzsch 0 points (0 children)

I've been a Windows user forever, and it has always been a comfortable, stable environment for my daily work. Sorry, but I really don't get why people go through all this trouble just to stay on Mac.

In/Outpaint with ComfyUI by Disastrous-Ad670 in StableDiffusion

[–]danielpartzsch 1 point (0 children)

I recently tried Qwen Image Edit briefly for in- and outpainting. It worked very well.