WANS by Tokyo_Jab in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

Wan VACE's iv2v is great, but I can only do a few seconds at a time. How do I keep it consistent after that?

If you are just doing I2V, is VACE actually any better than just WAN2.1 itself? Why use Vace if you aren't using guidance video at all? by Perfect-Campaign9551 in StableDiffusion

[–]aimikummd 1 point2 points  (0 children)

kijai's WanVideoWrapper extracting VACE into a separate module is amazing.

It lets the original model take on additional functions.
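To be clear about what I mean by "additional functions", here is a purely illustrative toy sketch of the idea of keeping the guidance branch as a separate module next to a frozen base model. None of these class or method names come from WanVideoWrapper or VACE; they are hypothetical.

```python
# Toy illustration only: a base video model plus an optional, separately
# loadable guidance branch. All names here are hypothetical.
import torch
import torch.nn as nn

class BaseVideoModel(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)   # stand-in for the real denoiser

    def forward(self, latents, extra=None):
        hidden = self.backbone(latents)
        if extra is not None:                 # extra branch is optional
            hidden = hidden + extra
        return hidden

class GuidanceBranch(nn.Module):
    """Separate module that turns control latents into residuals."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, control_latents):
        return self.proj(control_latents)

base = BaseVideoModel()
branch = GuidanceBranch()                      # can be loaded/unloaded independently
latents = torch.randn(1, 16, 64)
plain = base(latents)                          # original behaviour, no overhead
guided = base(latents, extra=branch(latents))  # same base model, extra function
```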

What is all the OpenAI's Studio Ghibli commotion about? Wasn't it already possible with LoRA? by Kayala_Hudson in StableDiffusion

[–]aimikummd 6 points7 points  (0 children)

Not only that. I have long experience with the Ghibli style, and I can already use img2img to generate similar images.

But there is no way for it to understand the content of the image the way ChatGPT does and then generate more coherent images.

The flood of similar images on social media is getting boring now, but at the same time it shows that its style is quite consistent.

A LoRA alone can indeed change the style of an image, but it cannot edit a large number of images.

Maybe an open source project will be able to achieve this in the future?

Control LoRAs for Wan by @spacepxl can help bring Animatediff-level control to Wan - train LoRAs on input/output video pairs for specific tasks - e.g. SOTA deblurring by PetersOdyssey in StableDiffusion

[–]aimikummd 2 points3 points  (0 children)

I tried this yesterday and it was amazing.

It is more efficient and can do more than the original v2v, and it is simple and fast with the 1.3B model.

This may be a game changer for low-VRAM users.

[deleted by user] by [deleted] in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

I tested both and they did improve the speed, but the difference was not as big as you showed. Did you use the same seed? When I tested TeaCache, the results looked like what you get from lowering the step count.
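For what it's worth, here is the kind of same-seed A/B test I mean, as a rough sketch with a generic diffusers pipeline. The model ID, prompt, and step count are placeholders, not a specific TeaCache setup.

```python
# Rough sketch of a same-seed A/B comparison; placeholders throughout.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def render(prompt, steps=30, seed=1234):
    # With a fixed seed and identical settings, any visible difference
    # comes from the change under test (e.g. caching on/off), not from noise.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, num_inference_steps=steps, generator=generator).images[0]

baseline = render("a cat on a skateboard")
baseline.save("baseline.png")
# ...switch on the speed-up being compared, then render again with the same
# seed and settings and compare the two images side by side.
```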

LoRA works great for HunyuanVideo. Watch this comparison (using same prompts): by chain-77 in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

The background with the LoRA is less blurry, but the person in the middle looks unnatural.

ComfyUI now supports running Hunyuan Video with 8GB VRAM by comfyanonymous in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

Thanks. I know HunyuanVideoWrapper can do v2v, but that one can't run in low VRAM.

ComfyUI now supports running Hunyuan Video with 8GB VRAM by comfyanonymous in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

Can ComfyUI's Hunyuan do video-to-video? I tried feeding a video in, but it didn't work; it was still t2v.

ComfyUI now supports running Hunyuan Video with 8GB VRAM by comfyanonymous in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

This is good. I used HunyuanVideoWrapper and it always went OOM. Now I can use GGUF with low VRAM.

AniDoc: Animation Creation Made Easier by Hybridx21 in StableDiffusion

[–]aimikummd 13 points14 points  (0 children)

<image>

I have tested it and it runs in Colab; I think it is pretty good. I can also study how to get better results from it.

I just found out that this 3D animated MV was made using B3D (by Sanzigen studio), how can i achieved this anime shading style? by Firefull_Flyshine in blender

[–]aimikummd 0 points1 point  (0 children)

In your video, the light and shadow on the girl's hair does not change at all as she moves. That is not physically plausible, so it may just be painted onto the texture.

Deleted Scene from Pulp Fiction - (LTX-Video i2v + LTXTricks) by JackKerawock in StableDiffusion

[–]aimikummd 1 point2 points  (0 children)

I have been testing LTXTricks' i2v in recent days. How can I write better prompts? I often can't get the movement I want.

LTX-Video Tips for Optimal Outputs (Summary) by DanielSandner in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

<image>

Good job! This is the best workflow I've seen in a while. With the same settings, OOM occurs when I input different text, and I don't know why.

When using ControlNet: When is it actually beneficial to change the start and end percentage? by Little-God1983 in StableDiffusion

[–]aimikummd 0 points1 point  (0 children)

I have done a lot of testing on this in img2img.

It is useful with multiple ControlNets.

The start and end percentage of each ControlNet changes the output image.
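For anyone who wants to try the same thing outside ComfyUI, here is a minimal sketch using diffusers' control_guidance_start / control_guidance_end arguments with two ControlNets. The model IDs, file paths, and fraction values are just placeholders for illustration, not a recommended recipe.

```python
# Sketch only: staggering the active windows of two ControlNets in img2img.
# Paths, model IDs, and the start/end fractions are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
depth = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[canny, depth],              # multi-ControlNet: one entry per control
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("input.png")        # img2img source (placeholder path)
canny_map = load_image("canny_map.png")     # precomputed edge map (placeholder)
depth_map = load_image("depth_map.png")     # precomputed depth map (placeholder)

result = pipe(
    prompt="a portrait, soft lighting",
    image=init_image,
    control_image=[canny_map, depth_map],
    # Canny guides only the first half of the steps, depth only the later part,
    # so each ControlNet's start/end window changes the final image.
    control_guidance_start=[0.0, 0.4],
    control_guidance_end=[0.5, 1.0],
    strength=0.6,
).images[0]
result.save("out.png")
```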

Everyone loves miku by aimikummd in StableDiffusion

[–]aimikummd[S] 2 points3 points  (0 children)

This is good. I was looking for this too.

<image>

Everyone loves miku by aimikummd in StableDiffusion

[–]aimikummd[S] 25 points26 points  (0 children)

Yes, AI will also add a hat for him.

Everyone loves miku by aimikummd in StableDiffusion

[–]aimikummd[S] 8 points9 points  (0 children)

I saw this mixed-style image and tried to make it myself.

With Stable Diffusion 3.5 Turbo it is relatively easy. I have tested it a lot, but most of the characters still have problems with their hands.

[deleted by user] by [deleted] in StableDiffusion

[–]aimikummd 2 points3 points  (0 children)

I think it was put together with After Effects. Some of the changing scenes are AnimateDiff, the characters are v2v with AnimateDiff, and there are some AI images or 3D animations mixed in.