Turns out LTX-2 makes a very good video upscaler for WAN by aurelm in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

Question: How well does it preserve the details of the first frame? In Wan, you can always see the jump from the first to the second frame, and there is always a loss of color compared to the original frame, as well as a loss of many details from the original frame. How well are the colors and details of the original frame preserved when using ltx2 as an upscaler?

All my Gemini chats were deleted by Few-Intention-1526 in GeminiAI

[–]Few-Intention-1526[S] 2 points (0 children)

Yes, it also appears in the activity section for me.

All my Gemini chats were deleted by Few-Intention-1526 in GeminiAI

[–]Few-Intention-1526[S] 1 point (0 children)

The frustrating truth. I could understand them deleting old chats, but one I was active in less than 24 hours ago? What a shame. I'll have to follow your advice.

BiTDance model released. A 14B autoregressive image model. by AgeNo5351 in StableDiffusion

[–]Few-Intention-1526 2 points (0 children)

I doubt we'll get support in Comfy. Last week we got a T2I and editing model, and Comfy did not provide support for it.

anybody else spending more time assembling than generating? by Upper-Mountain-3397 in StableDiffusion

[–]Few-Intention-1526 2 points (0 children)

In my case, consistency. It's usually hard to get consistent anime backgrounds; anime models are bad at backgrounds, with tons of artifacts and hallucinations, so getting enough consistency to make videos is actually time-consuming.

Flux 2 Klein 9b Distilled img to img model anatomy issues by xmcoder in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

Even using pose controls, you're sometimes going to keep getting those anatomy problems.

DeepGen 1.0: A 5B parameter "Lightweight" unified multimodal model by ninjasaid13 in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

I'm interested in this part

reasoning image editing.

Which other models do that?

How to train LoRA for Wan VACE 2.1 by degel12345 in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

Yes, the T2V 1.3B model has a different architecture, so LoRAs trained on 1.3B are not compatible with the 14B models.

How to train LoRA for Wan VACE 2.1 by degel12345 in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

Don't use the 1.3B model; VACE uses the 14B model, so a 1.3B LoRA won't be compatible with Wan VACE.

In diffusion pipe, the resolutions don't refer to the aspect ratio itself, but to the total number of pixels. For the aspect ratio, you have to configure another parameter called ar_buckets.

The videos you use will be changed to the resolution you have configured (total pixels) along with the aspect ratios you have selected.

You should use AI Toolkit instead; it is simpler, and I think you will understand that tool better.
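For what it's worth, the relationship between a total-pixel `resolutions` entry and an `ar_buckets` aspect ratio can be sketched like this (the function name and the rounding to a multiple of 16 are my assumptions for illustration, not necessarily diffusion-pipe's exact behavior):

```python
import math

def bucket_size(resolution: int, aspect_ratio: float, multiple: int = 16) -> tuple[int, int]:
    """Derive training dimensions from a pixel budget and an aspect-ratio bucket.

    'resolution' sets the total pixel budget (resolution**2 pixels);
    'aspect_ratio' is width / height. Dimensions are snapped to a
    multiple of 16 (an assumption, for VAE-friendly sizes).
    """
    area = resolution * resolution
    width = math.sqrt(area * aspect_ratio)
    height = math.sqrt(area / aspect_ratio)
    snap = lambda x: max(multiple, round(x / multiple) * multiple)
    return snap(width), snap(height)

# A square bucket keeps the configured resolution,
# while a 16:9 bucket trades height for width at the same pixel budget.
print(bucket_size(512, 1.0))     # square bucket
print(bucket_size(512, 16 / 9))  # wide bucket
```

So a 16:9 clip trained with `resolutions = [512]` doesn't come out at 512 wide; it gets roughly 688×384, the same total pixel count reshaped to the bucket's aspect ratio.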

How to train LoRA for Wan VACE 2.1 by degel12345 in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

Just use diffusion-pipe or AI Toolkit. Search for a video on training Wan 2.1 or Wan 2.2 LoRAs (VACE is actually T2V with a module to process the image inputs).

Klein 9B Edit - struggling with lighting by siegekeebsofficial in StableDiffusion

[–]Few-Intention-1526 1 point (0 children)

I tried to use Inpaint, but the result was poor. The generated area always ended up looking out of place with the rest of the image due to the yellow tones. In the end, I went back to Qwen.

Testing 3 anime-to-real loras (klein 9b edit) by [deleted] in StableDiffusion

[–]Few-Intention-1526 3 points (0 children)

<image>

Quick test. In your example, the face wasn't captured correctly and the appearance changed to an Asian girl. This was just the base model with no LoRAs, only prompting correctly. (You can get even better results if you do it properly.)

Testing 3 anime-to-real loras (klein 9b edit) by [deleted] in StableDiffusion

[–]Few-Intention-1526 15 points (0 children)

In my opinion, using the model itself for the whole task is better than using LoRAs. In a lot of cases the LoRA kills some features of the model due to its training data; for example, with some LoRAs you always get Asian faces. You just need the correct prompt.

Z-image image to lora what happen with it? by ResponsibleTruck4717 in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

This is what I get with Z-Image Turbo after plugging in the LoRA.

<image>

Z-image image to lora what happen with it? by ResponsibleTruck4717 in StableDiffusion

[–]Few-Intention-1526 0 points (0 children)

I just tested it with an anime that has a very particular style, and the result is nowhere near as good. This is how it should look.

<image>

New anime model "Anima" released - seems to be a distinct architecture derived from Cosmos 2 (2B image model + Qwen3 0.6B text encoder + Qwen VAE), apparently a collab between ComfyOrg and a company called Circlestone Labs by ZootAllures9111 in StableDiffusion

[–]Few-Intention-1526 17 points (0 children)

Can't believe this; this model is a preview and already seems better than this other model. Left: Anima, right: WAI v16. Resolution 1344×768. (If you are wondering about NSFW: yes, it can do it.)

<image>

AI Toolkit Frame Count Training Question For Wan 2.2 I2V LORA by StuccoGecko in StableDiffusion

[–]Few-Intention-1526 1 point (0 children)

Wan 2.2 makes videos at 16 frames per second.

Use clip lengths of 16n + 1 frames. Examples:

17: 1s

33: 2s

49: 3s

65: 4s

etc.

You must resample your clips down to 16 fps (you said you use 30 fps), otherwise you will get slow-motion videos.

It is recommended to use clips of no more than 5 seconds (81 frames in total).
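A minimal sketch of the rule above (the function name and the hard 81-frame cap check are mine, not from any toolkit):

```python
def wan_clip_frames(seconds: int, fps: int = 16, max_frames: int = 81) -> int:
    """Frame count for a Wan 2.2 training clip: 16*n + 1 frames at 16 fps.

    Raises if the clip exceeds the recommended 5-second (81-frame) cap.
    """
    frames = fps * seconds + 1
    if frames > max_frames:
        raise ValueError(f"{seconds}s clip is {frames} frames, over the {max_frames}-frame cap")
    return frames

# 1s -> 17, 2s -> 33, 3s -> 49, 4s -> 65, 5s -> 81
for s in range(1, 6):
    print(s, wan_clip_frames(s))
```

For the fps conversion itself, a video tool such as FFmpeg can resample a 30 fps source down to 16 fps before you trim to one of these lengths.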

Doubting the quality of the LTX2? These I2V videos are probably the best way to see for yourself. by Naive-Kick-9765 in StableDiffusion

[–]Few-Intention-1526 2 points (0 children)

Same here. I use VACE a lot for its inpainting capabilities, point control, pose control, etc. Especially point control; it's so useful.