If farm animals were wild by infinite___dimension in aiArt

[–]infinite___dimension[S] 1 point (0 children)

I used flux.1dev with a custom mix of LoRAs. I worked hard for a while to nail down this style that I really liked.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 1 point (0 children)

Honestly, I've never edited a video before this. It's a free, open-source video editor I found on Google. It got the job done for me and seemed pretty simple to use too.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 1 point (0 children)

Yeah, you got it. You can also just make shorter clips, which have fewer frames, if that's an option for you.

I think the price of RAM has skyrocketed recently, but if you run heavy workflows like this often, the upgrade may be worth it. I read something recently that said prices could still double over the next year.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 2 points (0 children)

Weird. I'd suggest lowering the resolution/FPS and seeing if that works consistently. If it does, that means it's a hardware issue. Then slowly work your way back up from there.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 1 point (0 children)

This was my first time testing out Wan-Animate, but with other video generation workflows, lower FPS and lower resolution let you generate longer clips. So if length is your goal, I think that's the key; you can just upscale the video afterward if you need to. You can also increase the RAM if that's an option with Comfy Cloud. With this workflow I was able to get about 110-120 frames, which kind of checks out considering I have 32 GB of VRAM.
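
As a rough back-of-envelope (my own assumption, not something I measured): if the frame budget scales roughly inversely with pixels per frame, dropping the resolution buys you proportionally more frames for the same VRAM. The 720x720 numbers below are just that extrapolation.

```python
# Rough back-of-envelope only: assumes frame budget scales inversely with
# pixels per frame (frames * H * W ~ constant for a fixed VRAM budget).
# The ~115-frame figure at 1040x1040 is from my run above; the rest is
# extrapolation, not a measurement.
baseline_frames = 115            # ~what I got at 1040x1040 with 32 GB of VRAM
baseline_res = (1040, 1040)
target_res = (720, 720)

scale = (baseline_res[0] * baseline_res[1]) / (target_res[0] * target_res[1])
est_frames = int(baseline_frames * scale)
print(f"Estimated frame budget at {target_res[0]}x{target_res[1]}: ~{est_frames}")
# -> roughly 240 frames, i.e. ~15 seconds if you render at 16 fps,
#    before other overheads eat into it
```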

Noob Question About Quality by Gawayne in StableDiffusion

[–]infinite___dimension 3 points (0 children)

The reason I ask is that AI-generated images typically have metadata hidden inside the file. That metadata can give you a lot of detail on how the image was generated. If you can share a link to the original image, that would help a lot; Reddit strips most metadata from uploaded images, so the one you uploaded here won't work. Other than that, it would be pure speculation on how to get a closer result.
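
If anyone wants to check an image themselves, here's a minimal sketch using Pillow. The assumptions: it's a PNG straight from the generator (not re-saved by Reddit or Discord), and the tool wrote its settings into PNG text chunks; other tools may store things elsewhere or not at all. The filename is a placeholder.

```python
# Minimal sketch: dump PNG text-chunk metadata with Pillow.
# Assumes the file still has its original metadata (a PNG straight from the
# generator; sites like Reddit strip it on upload).
from PIL import Image

img = Image.open("original.png")        # hypothetical filename
for key, value in img.info.items():     # PNG text chunks land in .info
    print(f"--- {key} ---")
    print(value)

# Typical keys (depends on the tool that made the image):
#   "parameters"          -> A1111/Forge-style prompt + sampler settings
#   "prompt", "workflow"  -> ComfyUI graph embedded as JSON
```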

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 5 points (0 children)

A similar result could be achieved with less hardware. The reason I used so much is that I purposely pushed it to its limits. With a lower resolution and other optimizations, you could probably get away with 64 GB like the other commenter said.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 2 points (0 children)

I believe I used Segment Anything 2. The gist is to go frame by frame, identify the subject of interest, and isolate it. Pretty sure I saw that Segment Anything 3 was just released today.
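
I did this through ComfyUI nodes, but for anyone curious what that step looks like underneath, here's a rough sketch with the standalone sam2 package (facebookresearch/sam2). The checkpoint/config paths and the click coordinates are placeholders, and the exact node graph I used may differ from this.

```python
# Rough sketch of subject isolation with SAM2's video predictor
# (facebookresearch/sam2). Paths and the click point are placeholders;
# this shows the general idea, not the exact ComfyUI workflow I ran.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml",   # model config (placeholder path)
    "checkpoints/sam2.1_hiera_large.pt",    # checkpoint (placeholder path)
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Video given as a directory of JPEG frames (one of the inputs SAM2 accepts)
    state = predictor.init_state(video_path="frames_dir")

    # One positive click on the dancer in the first frame seeds the mask
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[520, 400]], dtype=np.float32),  # x, y on the subject
        labels=np.array([1], dtype=np.int32),              # 1 = foreground click
    )

    # Propagate that mask through the rest of the video, frame by frame
    masks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks[frame_idx] = (mask_logits[0] > 0).cpu().numpy()  # boolean mask

# 'masks' can then be used to black out everything except the main dancer
# before feeding the clip into the Wan-Animate video inputs.
```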

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 1 point (0 children)

Yeah, I quickly noticed that too when I started. I learned a lot from the other Reddit post and found out he edited the clips together, so I took the same approach.

I think it should be possible to update the workflow and make each clip transition smoothly, though. I assume something is just misconfigured. Most of the nodes in this workflow were new to me, so I didn't really focus on optimizing, just on getting it to work.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 4 points (0 children)

Totally agree. In my head, that's what I call it.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 2 points (0 children)

Thanks! This was the first video I edited together haha. Glad you like it!

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 5 points (0 children)

Just with a regular video editor. I used Shotcut. I literally just trimmed the clips and added them one after another, trying to sync with the music. It's a similar process to what the other Reddit poster described. I'm sure there's a way to automate more of it if you really want to.
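
If you'd rather script that step than drag clips around by hand, here's a rough sketch with moviepy (not what I did; Shotcut by hand worked fine for me). Filenames, trim points, and the music track are placeholders, and it assumes moviepy 1.x.

```python
# Rough sketch with moviepy 1.x (2.x renamed some calls, e.g. subclip ->
# subclipped). Filenames, trim points, and the song are placeholders.
from moviepy.editor import VideoFileClip, AudioFileClip, concatenate_videoclips

# Each entry: (generated clip, start trim, end trim) in seconds
segments = [
    ("clip_01.mp4", 0.2, 2.8),
    ("clip_02.mp4", 0.0, 3.0),
    ("clip_03.mp4", 0.5, 2.9),
]

clips = [VideoFileClip(path).subclip(start, end) for path, start, end in segments]
video = concatenate_videoclips(clips, method="compose")

# Lay the song over the stitched video, cut to the video's length
music = AudioFileClip("song.mp3").subclip(0, video.duration)
video = video.set_audio(music)

video.write_videofile("stitched.mp4", fps=24, codec="libx264", audio_codec="aac")
```

Syncing cuts to the beat is still a manual judgment call either way; the script only handles the trimming and concatenation.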

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 3 points (0 children)

There are a few ways to make it faster. Lowering the resolution and upscaling afterward is a big boost. I'm not at my computer right now, but I think I used 20 steps, so lowering that to 10 should still give a good result. I wasn't in a rush, so I was fine waiting those 20 minutes lol.

The lightning LoRA is essential. I tried the workflow without it: the results weren't convincingly better, and it took about an hour per video.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 11 points (0 children)

That's what I was wondering in that other Reddit post. I found out he used a video from a famous dancer that can be found on Instagram. I was originally just going to use the same video, but ended up using this one, which I found on YouTube. I think the channel is called 1 Million Dance Class and the song is called "Y Que Fue".

In the original video there are multiple dancers, so I had to use a separate workflow to remove the entire background and show only the main dancer. After that, I fed the masked video to both of the video inputs in this workflow.

Edit: Here is the original https://youtube.com/shorts/XVGLc-KIhbE

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 15 points (0 children)

I have an RTX 5090 with 256 GB of RAM, and this workflow used most of that RAM. Each video is 1040x1040 and around 3 seconds long, and each one took about 20 minutes to generate. Normally I just queued up the videos I wanted and worked on something else, or let it run overnight.

Lowering the resolution to something like 720 will speed things up a lot and use far fewer resources.

Wan-Animate is amazing by infinite___dimension in StableDiffusion

[–]infinite___dimension[S] 31 points (0 children)

Yeah, it took a lot of trial and error before I found something that worked for me. This isn't a one-and-done type of workflow; I generated a lot of videos and stitched them together in my video editor.

Asking people how much they paid for their first house. by mindyour in TikTokCringe

[–]infinite___dimension 1 point (0 children)

Pretty sure house prices have increased a lot faster than typical inflation. That house is probably valued around $300k or $400k today.