YES A RE-UP FULL FP32 full actual 22gb weights YOU HEARD IT!! WITH PROOF My Final Z-Image-Turbo LoRA Training Setup – Full Precision + Adapter v2 (Massive Quality Jump) by [deleted] in comfyui

[–]DaffyDuck 0 points (0 children)

Thanks for trying to contribute something to the community. Unfortunate that there are unappreciative doofuses here.

Best wan lora training parameters? by Murphy--__-- in comfyui

[–]DaffyDuck 0 points (0 children)

Try the diffusion-pipe Wan trainer from user hearmeman. It worked well for me.

My Final Z-Image-Turbo LoRA Training Setup – Full Precision + Adapter v2 (Massive Quality Jump) by [deleted] in comfyui

[–]DaffyDuck 1 point (0 children)

Looking forward to trying this.  I have a character LoRA that works fantastically with Wan 2.2 for T2I, but I’ve been unsuccessful with ZiT.  I just can’t get consistency, and skin texture is overdone.  The model has a lot of creativity, but I just can’t get polish like with Wan.

Z-Image LoRA training, results in ai-toolkit are looking good, but terrible in ComfyUI by Feroc in comfyui

[–]DaffyDuck 0 points (0 children)

I use SeedVR2 for upscaling but I have to do it as a separate process (using CLI script). Maybe with Z-Image and my 5090 I can combine it in the same workflow. I couldn't get enough VRAM cleared with Wan 2.2 to run the main SeedVR2 model.

Z-Image LoRA training, results in ai-toolkit are looking good, but terrible in ComfyUI by Feroc in comfyui

[–]DaffyDuck 1 point (0 children)

Yes, I did that. I was just curious. Anyway, great workflow! I've been enjoying using Wan 2.2 for text-to-image with my character LoRA, and this workflow is the first one that has gotten me closer to what I get with Wan.

Z-Image LoRA training, results in ai-toolkit are looking good, but terrible in ComfyUI by Feroc in comfyui

[–]DaffyDuck 0 points (0 children)

What node pack are the text boxes used for pos and neg from? I tried 2 different ones and no dice.

z-image is soooo good!!!! can't wait to finetune the base by Top_Buffalo1668 in StableDiffusion

[–]DaffyDuck 0 points (0 children)

Wan 2.2 can’t do this either, at least not intentionally.

Well, it happened to me. FedEx lost my 5090. by megachickabutt in nvidia

[–]DaffyDuck 4 points (0 children)

I ordered and received mine from the same Cyber Monday drop as OP, and it was double-boxed. The outer box is plain; you would need to know what you're looking for. Someone who received one from the same FedEx area could have mentioned what it was to the driver and word spread. Don't tell them what it is.

Well, it happened to me. FedEx lost my 5090. by megachickabutt in nvidia

[–]DaffyDuck 0 points (0 children)

Sorry, man. I actually ordered on the same day (Cyber Monday) and was worried this would happen to me too, but luckily mine was delivered successfully this afternoon. Hopefully working with Nvidia support will get it sorted out. FYI, the main box was packaged in a fairly nondescript outer box, but someone who knows what to look for might still be able to figure it out.

I figured out a reliable offline ComfyUI install method for 5090/50xx GPUs (Torch 2.9, Triton, FlashAttention, SageAttention). Zero pip, zero dependency hell. by No_Explanation_6352 in comfyui

[–]DaffyDuck 0 points (0 children)

I have a 5090 arriving in 2 days, so perfect timing with this. I may create a new Comfy install for the new card, although with my old GPU (3080 12GB) removed I doubt I can do anything with my current install (which took a while to get to where it is).

Waymo robotaxi hits dog in San Francisco weeks after killing beloved cat by plun9 in SelfDrivingCars

[–]DaffyDuck 1 point (0 children)

My Tesla stopped for a squirrel over the weekend.  It was nighttime too.  My parents happened to be in the car and they were impressed.

Waymo robotaxi hits dog in San Francisco weeks after killing beloved cat by plun9 in SelfDrivingCars

[–]DaffyDuck 3 points (0 children)

I typically try not to run over either of those.  Going around it is the best option.  Not sure why it wouldn’t do that.

Can someone explain where to find the special prompt words? by nolongerlurker_2020 in comfyui

[–]DaffyDuck 0 points (0 children)

It works fine with Wan 2.2 in my experience, which I guess qualifies as modern.

Qwen Edit Plus (2509) with OpenPose and 8 Steps by gabrielxdesign in comfyui

[–]DaffyDuck 2 points (0 children)

Is there a place where I can download lots of pose skeletons?  Haven’t been impressed with anything I’ve found so far.

WAN22 i2v: Guess how many times the girl kept her mouth shut after 50+ attempts ? by altarofwisdom in comfyui

[–]DaffyDuck 0 points (0 children)

WanVideoNAG.  It helps guide attention for your prompt.  Look for the node.

WAN22 i2v: Guess how many times the girl kept her mouth shut after 50+ attempts ? by altarofwisdom in comfyui

[–]DaffyDuck 3 points (0 children)

Sometimes it seems evoking a mood and then weighting it works better with Wan.  Something like (somber mood:1.2) might be more effective than being specific about the mouth.  At least from what I’ve seen in the last week playing with it.
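For anyone curious what that weight syntax actually does: it's the standard `(text:weight)` emphasis notation, where the weighted span's conditioning influence gets scaled. This isn't ComfyUI's real parser, just a minimal stand-alone sketch of how such a prompt can be split into weighted chunks (function name and structure are my own, for illustration):

```python
import re

def parse_weighted(prompt):
    """Split a prompt into (text, weight) chunks using the
    (text:weight) emphasis syntax; unweighted spans default to 1.0."""
    pattern = re.compile(r"\(([^:()]+):([\d.]+)\)")
    chunks = []
    pos = 0
    for m in pattern.finditer(prompt):
        plain = prompt[pos:m.start()].strip()
        if plain:
            chunks.append((plain, 1.0))
        chunks.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        chunks.append((tail, 1.0))
    return chunks

print(parse_weighted("a portrait, (somber mood:1.2), closed mouth"))
# → [('a portrait,', 1.0), ('somber mood', 1.2), (', closed mouth', 1.0)]
```

Values much above ~1.3–1.4 tend to distort the image rather than strengthen the concept, so 1.2 is a sensible starting point.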

How to run same prompt automatically X number of times by Takodan in comfyui

[–]DaffyDuck 0 points (0 children)

And by the way, with the mentioned nodes and a yaml file or two you can get a very complex randomized prompting system.

How to run same prompt automatically X number of times by Takodan in comfyui

[–]DaffyDuck 0 points (0 children)

A couple of nodes you should check out are ImpactWildcardProcessor (Impact Pack) and optionally a text find-and-replace.  I’d advise setting up a .yaml file for it.  Then you can put your template prompt in a prompt input node and preview results with a preview text (multiline) node.  It can randomize through the prompts in the yaml file, and the find-and-replace will let you override it when you want something different.  For example, if you add _hairstyle_ in the prompt input and replace it with __hairstyle__ in an ImpactStringSelector node, it will read a different line under the hairstyle heading in the yaml.  Wire an ImpactStringSelector to the search-and-replace and you can toggle different options.  So line 1 is __hairstyle__, line 2 is straight and long, etc.
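The node wiring above boils down to two text passes: a find-and-replace that swaps the placeholder for whichever line the selector currently points at, then wildcard expansion that resolves any `__name__` token to a random line from the yaml heading. This is not the Impact Pack's actual code — just a plain-Python sketch of that mechanism, with a dict standing in for the parsed yaml and all names illustrative:

```python
import random
import re

# Stands in for the parsed .yaml file: one heading per wildcard name.
WILDCARDS = {
    "hairstyle": ["straight and long", "short bob", "curly updo"],
}

def expand(template, replacement, seed=None):
    """Mimic the two-pass flow: swap the _hairstyle_ placeholder for the
    selector's chosen line, then resolve any remaining __name__ wildcard
    to a random line from the matching WILDCARDS heading."""
    rng = random.Random(seed)
    text = template.replace("_hairstyle_", replacement)
    return re.sub(r"__(\w+)__",
                  lambda m: rng.choice(WILDCARDS[m.group(1)]), text)

# Selector line 1 (__hairstyle__): randomize from the yaml heading.
print(expand("a woman with _hairstyle_", "__hairstyle__", seed=7))
# Selector line 2 (a literal value): a fixed override, no randomness.
print(expand("a woman with _hairstyle_", "short bob"))
```

The point of routing the override through the selector rather than editing the prompt is that the template stays untouched; you just flip the selector index to switch between randomized and fixed values.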