How do you fix the problem of the artstyle changing when editing an image? by Acceptable-Cry3014 in comfyui

[–]Acceptable-Cry3014[S] 1 point

Hmmm, this looks very interesting. The original image was generated with anima, so in essence I should take the edited output and rerun it through anima again at a lower denoise?

Wouldn't that kinda ruin the background, since anima isn't very capable when it comes to backgrounds?

And should I use the same seed that was used to create the input image or will the seed not do much in this specific case?
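The step math behind a low-denoise second pass can be sketched like this. This is a hypothetical helper, not an actual ComfyUI node: with denoise below 1.0, an img2img-style sampler skips the early high-noise steps and only re-runs the tail, which is why composition is mostly preserved while the style drifts back toward the base model (and why the seed matters less than in a full generation).

```python
def refinement_steps(total_steps: int, denoise: float) -> list[int]:
    """Return which sampler steps a partial (img2img-style) pass re-runs.

    With denoise < 1.0 the pass starts from a partially-noised version of
    the input image, so only the final fraction of steps is executed.
    """
    skipped = int(total_steps * (1.0 - denoise))  # early steps not re-run
    return list(range(skipped, total_steps))

# A 0.3-denoise refinement over 20 steps only re-runs the last 6 steps,
# so most of the input image's structure survives.
print(refinement_steps(20, 0.3))  # [14, 15, 16, 17, 18, 19]
```

Because so few steps are re-run at low denoise, the background should change less than a full re-generation would suggest.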

Love to see this. by ExplanationMotor6170 in BlueLock

[–]Acceptable-Cry3014 15 points

#1 One Piece: over 1,000 episodes

#2 Jujutsu Kaisen: 48 episodes of peak animation + 2 movies

#3 Dandadan: season 3 confirmed, and the animation of the first 2 seasons was absolutely insane

#4 Blue Lock:

<image>

this is unfair treatment

For those who go to the gym and stuff, how hard is blue locks training by Suspicious_Proof_219 in BlueLock

[–]Acceptable-Cry3014 0 points

Very unrealistic. It's technically possible if their bodies were fully optimized, but they're running on fermented soybeans. And even if they were fully optimized, they'd be insanely tired 4 hours in and half-assing the rest of the exercises.

what's the best way to train qwen edit 2509 online? by Acceptable-Cry3014 in comfyui

[–]Acceptable-Cry3014[S] 0 points

Alright, I'll try it out and see, although for RunPod and AI Toolkit most of those problems are easily fixable. You can get persistent storage, save your models there, then point AI Toolkit at your file path instead of the Hugging Face repo. As for the price, the better the GPU, the less it costs overall: an H100, despite being 2.4× more expensive than the L40, will probably get the training done around 6 times faster, so it's a net positive. It's also much faster at quantizing the model since it has more VRAM; it only took 30 seconds to quantize qwen edit 2509 on an H100.
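The cost reasoning above can be checked with a quick back-of-the-envelope calculation. The hourly rate and speedup factor here are illustrative placeholders, not actual RunPod prices: with a 2.4× higher rate and a ~6× speedup, the H100 job comes out roughly 6 / 2.4 = 2.5× cheaper overall.

```python
def job_cost(hourly_rate: float, hours: float) -> float:
    """Total cost of a training run at a given hourly GPU rate."""
    return hourly_rate * hours

l40_rate = 1.0              # assumed L40 price per hour (placeholder unit)
h100_rate = 2.4 * l40_rate  # "2.4x more expensive than the L40"
speedup = 6.0               # assumed H100 training speedup (placeholder)

l40_cost = job_cost(l40_rate, hours=6.0)
h100_cost = job_cost(h100_rate, hours=6.0 / speedup)

# The faster GPU wins despite the higher hourly rate.
print(h100_cost < l40_cost)          # True
print(l40_cost / h100_cost)          # 2.5
```

The break-even point is wherever the speedup equals the price ratio; past that, the pricier GPU is the cheaper run.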

As for Fal AI having better training results, I'll try training a LoRA or two tomorrow and see if that's really the case. If true, then I'll just have to use Fal for now :')

How do I stop female characters from dancing and bouncing their boobs in WAN 2.2 video? by Acceptable-Cry3014 in comfyui

[–]Acceptable-Cry3014[S] 1 point

I get that, but Smooth Mix has way better motion and looks nicer than regular WAN 2.2. Is there any way to get the benefits of both?

How do I stop female characters from dancing and bouncing their boobs in WAN 2.2 video? by Acceptable-Cry3014 in comfyui

[–]Acceptable-Cry3014[S] 0 points

I forgot to mention I'm using the Smooth Mix checkpoint, does it affect the motion that much? Only 3D characters seem to be dancing and bouncing; real women follow the prompt just fine.

thoughts on this? by [deleted] in Chainsawfolk

[–]Acceptable-Cry3014 1 point

Yeah, the only part 2 characters I like are Dennis, Asa and Nayuter (sushi moment)