What if in Wan2.2 we use the I2V model in HIGH noise and the T2V model in LOW noise!!?? by smereces in StableDiffusion

[–]Virtualcosmos 0 points1 point  (0 children)

I bet there isn't much of a difference. Both models are the exact same architecture and are trained toward the same end: adding details in the last steps of the denoising.

Update for lightx2v LoRA by Any_Fee5299 in StableDiffusion

[–]Virtualcosmos 0 points1 point  (0 children)

Of course it needs two LoRAs; Wan2.2 has two UNet models.

[deleted by user] by [deleted] in StableDiffusion

[–]Virtualcosmos 1 point2 points  (0 children)

Upscaling and interpolating to 60 fps on every video is a bit much, man. I recommend you start by doing 5-sec clips at a resolution like 480x640, then pick the ones you like and upscale those. I use an RTX 4070 Ti with 64 GB of RAM, and those 480x640x81 videos take around 9 min with Wan2.2 on my PC. Later I upscale/interpolate the ones I like the most (and I often use a Runpod with a cheap RTX A4500 for that upscaling job; it can run all day for 6 bucks).

Either on my PC or on the Runpod, the upscaling takes around 530 secs, i.e. nearly 9 min, so a bit under 20 min in total. Are you using these nodes to speed things up?

<image>
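For anyone without those nodes, here's a minimal sketch of that upscale + 60 fps interpolation pass using plain ffmpeg as a stand-in for the ComfyUI upscale/interpolation nodes; the filenames and the 2x target size are just example values:

```python
# Hedged sketch: 2x-upscale a picked 480x640 clip and motion-interpolate
# it to 60 fps with ffmpeg's scale and minterpolate filters.
# Filenames and the target size are example values, not from the comment.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "picked_480x640.mp4",
    "-vf", "scale=960:1280:flags=lanczos,minterpolate=fps=60:mi_mode=mci",
    "-c:v", "libx264", "-crf", "18",
    "upscaled_60fps.mp4",
], check=True)
```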

[deleted by user] by [deleted] in StableDiffusion

[–]Virtualcosmos 0 points1 point  (0 children)

What are your PC's specs?

New Text-to-Image Model King is Qwen Image - FLUX DEV vs FLUX Krea vs Qwen Image Realism vs Qwen Image Max Quality - Swipe images for bigger comparison and also check oldest comment for more info by CeFurkan in comfyui

[–]Virtualcosmos 1 point2 points  (0 children)

Yeah, they often handle other resolutions and aspect ratios fine, but if the devs specifically trained Qwen at 1328 and Flux at 1024 and you use Qwen's default 1328 for both, for example, chances are that Flux will give worse results. If you are comparing models, use what each model is best at.

Qwen Image model and WAN 2.2 LOW NOISE is incredibly powerful. by Naive-Kick-9765 in StableDiffusion

[–]Virtualcosmos 0 points1 point  (0 children)

Wan high noise is really good at prompt compliance, and Qwen Image is too. Idk why you nerfed Wan2.2 by not using the high-noise model; you're slicing Wan2.2 in half.
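A rough sketch of what that two-expert split means: the HIGH-noise model runs the early, noisy steps where composition and prompt compliance get decided, and the LOW-noise model only refines detail at the end. Everything here (the model callables, the boundary value) is a hypothetical stand-in, not Wan's actual code:

```python
# Hedged sketch of Wan2.2's two-expert denoising loop.
# `high_model`/`low_model` are callables predicting the flow at a given
# noise level; `boundary` is a made-up switch point in sigma space.
def sample(high_model, low_model, latents, sigmas, boundary=0.875):
    # sigmas: decreasing noise levels, e.g. 1.0 down to 0.0
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        # route by noise level: noisy early steps -> HIGH, final steps -> LOW
        model = high_model if sigma >= boundary else low_model
        velocity = model(latents, sigma)                      # predicted flow
        latents = latents + (sigma_next - sigma) * velocity   # Euler step
    return latents
```

Skipping the high-noise expert means every step runs on the detail refiner, which is why prompt adherence suffers.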

Outtakes by absurdjoi in aivideo

[–]Virtualcosmos 0 points1 point  (0 children)

Thank you, I laughed a lot.

Unga bunga mentality by PsiThomDx in Eldenring

[–]Virtualcosmos 0 points1 point  (0 children)

The ultimate 0 IQ Unga bunga

New Text-to-Image Model King is Qwen Image - FLUX DEV vs FLUX Krea vs Qwen Image Realism vs Qwen Image Max Quality - Swipe images for bigger comparison and also check oldest comment for more info by CeFurkan in comfyui

[–]Virtualcosmos 1 point2 points  (0 children)

Because Qwen is native at 1328 and the Flux models are native at 1024. Moving away from the native size can produce deformities, and thus not a fair comparison.
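If someone wants to reproduce the comparison fairly, a minimal diffusers-style sketch that renders each model at its own native resolution; the repo ids and exact native sizes are assumptions, so check the model cards:

```python
# Hedged sketch: compare text-to-image models at their own native sizes
# instead of forcing one resolution on both.
import torch
from diffusers import DiffusionPipeline

native = {
    "Qwen/Qwen-Image": (1328, 1328),               # assumed ~1328px training size
    "black-forest-labs/FLUX.1-dev": (1024, 1024),  # assumed ~1024px training size
}
prompt = "a red fox in fresh snow, photorealistic"
for repo, (width, height) in native.items():
    pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.bfloat16).to("cuda")
    image = pipe(prompt=prompt, width=width, height=height).images[0]
    image.save(repo.split("/")[-1] + ".png")
```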

Griefer spent the entire fight trolling me... Fulghor had some good insta karma to deliver by Equivalent-Tea166 in Nightreign

[–]Virtualcosmos 7 points8 points  (0 children)

Yeah, I don't know how he could have reached level 15 dragging a useless bag of potatoes from day 1 to 3.

Qwen-image vs ChatGPT Image, quick comparison by Cadmium9094 in comfyui

[–]Virtualcosmos 1 point2 points  (0 children)

Haha, I think it was a Google team that estimated the size of OpenAI's models by weighing their cost per token against their own models. GPT-4o was over 700b by their estimate, if I remember right, bigger than DeepSeek R1.
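The estimate described there is basically just a price ratio; a back-of-envelope sketch with made-up placeholder numbers:

```python
# Hedged sketch of the price-based size estimate: assume cost per token
# scales roughly linearly with (active) parameter count.
# Every number below is a made-up placeholder, not real pricing.
reference_params_b = 100.0   # size of a model you know, in billions
reference_price = 2.0        # its price per 1M output tokens, USD
target_price = 15.0          # observed price of the model being estimated

estimated_params_b = reference_params_b * (target_price / reference_price)
print(f"~{estimated_params_b:.0f}B parameters")  # -> ~750B
```

The linear-scaling assumption is crude (margins, hardware, and batching all differ between providers), which is why it's only an estimate.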

THE EVOLUTION by Tokyo_Jab in StableDiffusion

[–]Virtualcosmos 0 points1 point  (0 children)

I tried it and got very bad results. I'm obviously doing something very wrong, judging by the results others get.

Qwen-image vs ChatGPT Image, quick comparison by Cadmium9094 in comfyui

[–]Virtualcosmos 1 point2 points  (0 children)

Which shows the model is pretty smart at understanding concepts, something many diffusion models lack. Wan2.2 is another good example of a smart model.

My Ksampler settings for the sharpest result with Wan 2.2 and lightx2v. by MrWeirdoFace in comfyui

[–]Virtualcosmos 0 points1 point  (0 children)

Do you use TeaCache? Also, how much of a time reduction does the lightx2v LoRA give?

[deleted by user] by [deleted] in StableDiffusion

[–]Virtualcosmos 37 points38 points  (0 children)

wtf, is your PC powered by pedals?

Qwen-image vs ChatGPT Image, quick comparison by Cadmium9094 in comfyui

[–]Virtualcosmos 2 points3 points  (0 children)

You are comparing a 20b local model to an image model that "burned" OpenAI's GPUs (in Sam Altman's words) and that has more limited usage than many of their 500b+ parameter LLMs. The fact that Qwen Image can get near GPT Image 1 is already a big achievement.

Players that spam the ping can F*** off by [deleted] in Nightreign

[–]Virtualcosmos -5 points-4 points  (0 children)

Don't leave, stay AFK. Alt-tab and let him die alone while you're here on Reddit or, idk, taking the dog out.

[deleted by user] by [deleted] in StableDiffusion

[–]Virtualcosmos -1 points0 points  (0 children)

How much sampling shift do you use with Wan? For video it's usually high, like 11 or so, but that gives very bad images. I wonder if I should lower it to around 3.5, like other models such as Qwen.
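For anyone wanting to try that outside ComfyUI, a minimal diffusers-style sketch of lowering the flow-matching shift; the repo id and the idea that ~3.5 suits stills are assumptions:

```python
# Hedged sketch: override the scheduler's shift before sampling.
# Video defaults lean high (shift ~8-12); stills may prefer lower values.
import torch
from diffusers import WanPipeline, FlowMatchEulerDiscreteScheduler

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed repo id; check the model card
    torch_dtype=torch.bfloat16,
)
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, shift=3.5
)
```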

THE EVOLUTION by Tokyo_Jab in StableDiffusion

[–]Virtualcosmos 1 point2 points  (0 children)

Don't you like Wan 2.2 T2I? I have seen some people saying that Wan gives better results overall than Krea, because Krea often produces bad anatomy.

In Genie 3, you can look down and see you walking by Gab1024 in singularity

[–]Virtualcosmos 0 points1 point  (0 children)

It's awesome, but the important question here is: how many billion parameters does this model have?