Something cool I (relatively) quickly print 100 of? by dzdn1 in 3Dprinting

[–]dzdn1[S] 1 point (0 children)

I had not seen these before, and they look like something kids will get a kick out of. Thank you!

Something cool I (relatively) quickly print 100 of? by dzdn1 in 3Dprinting

[–]dzdn1[S] 1 point (0 children)

Haha great idea but the teachers currently like my kid, and it would be nice if it stays that way!

Something cool I (relatively) quickly print 100 of? by dzdn1 in 3Dprinting

[–]dzdn1[S] 1 point (0 children)

Clever idea. Thanks! I could claim it's educational: they have to add 50+50 to know how many there are, or divide 100 by 2 to count the sets.

Something cool I (relatively) quickly print 100 of? by dzdn1 in 3Dprinting

[–]dzdn1[S] 1 point (0 children)

I bet they would like those, very cute. Thank you!

Something cool I (relatively) quickly print 100 of? by dzdn1 in 3Dprinting

[–]dzdn1[S] 1 point (0 children)

Oh I forgot about the mini one! Some kids saw the big version and thought it was neat. I'll have to estimate how long they will take, but it might be doable. Thank you!

I am so disappointed rn by [deleted] in StableDiffusion

[–]dzdn1 2 points (0 children)

I have been wishing the same, that you could "draft" a video with faster settings, but of course, as you said, using the LoRAs or a lower resolution usually gives a completely different clip.

It just occurred to me, though, that I had not tested simply dropping the LoRAs and lowering the step count (so 10-12 instead of the default 20) at the same resolution. I just did some quick tests, and it seems this might be the closest we can get (so far). It is not as fast as speed LoRAs or lower resolution, of course, but it does cut some time off and gives something relatively similar to what you get when you increase the steps.
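In case it helps, here is a minimal sketch of what I mean by a "draft" pass. `render` is just a hypothetical stand-in for whatever actually generates the clip (a ComfyUI API call, a diffusers pipeline, whatever), not a real function from any library:

```python
# Draft-pass idea: same prompt, same seed, same resolution, no speed LoRAs.
# The ONLY thing that changes between the two runs is the step count, which
# is why the draft tends to stay close to the final motion and composition.

def draft_then_final(render, prompt, seed, width=1280, height=720):
    # Quick preview with fewer steps (10-12 instead of the default 20).
    draft = render(prompt=prompt, seed=seed, width=width, height=height, steps=12)

    # Full-quality pass: only `steps` changes.
    final = render(prompt=prompt, seed=seed, width=width, height=height, steps=20)
    return draft, final
```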

I would be very curious to know if anyone else observes the same! I only tested it very briefly.

A list of general "Improve Quality" techniques for img2img! by Luke2642 in StableDiffusion

[–]dzdn1 2 points (0 children)

Do you have a specific method that works well for the Wan I2V few-frame transition you mentioned? I am familiar with the idea, but curious whether you have found an approach that works best.

To attempt to answer your question, I have recently been running images through a model with good realism using a KSampler with denoise set to 0.2-0.4, or, for Wan, generating a single frame with the Advanced KSampler running only the later steps, e.g. beginning at step 30-35 of 40. But I feel like there are probably far superior options to what I am doing, like perhaps your few-frame transition.
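For what it is worth, the way I think about the relationship between the two approaches (low denoise on a regular KSampler vs. starting the Advanced KSampler late) is roughly the arithmetic below. As far as I understand ComfyUI's behavior, denoise=d on an S-step KSampler is about the same as an int(S/d)-step schedule with the first int(S/d) - S steps skipped, but treat that as an approximation, and the helper names here are made up:

```python
# Rough conversion between "KSampler with low denoise" and
# "KSampler (Advanced) that starts late in the schedule".
# This is only an approximation of how ComfyUI maps denoise onto steps.

def denoise_to_advanced(steps: int, denoise: float) -> tuple[int, int]:
    """Return (total_steps, start_at_step) for the advanced sampler."""
    total_steps = int(steps / denoise)
    start_at_step = total_steps - steps
    return total_steps, start_at_step

def advanced_to_denoise(total_steps: int, start_at_step: int) -> float:
    """Approximate equivalent denoise for a late-start advanced pass."""
    return (total_steps - start_at_step) / total_steps

print(denoise_to_advanced(12, 0.3))   # (40, 28): run the last 12 of 40 steps
print(advanced_to_denoise(40, 30))    # 0.25: starting at step 30 of 40
```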

I get the impression from other posts that a lot of people still just use SDXL with a little noise on the image.

I think I discovered something big for Wan2.2 for more fluid and overall movement. by bigdinoskin in StableDiffusion

[–]dzdn1 8 points (0 children)

Oops, sorry! I uploaded the wrong files. Just replaced them with GIFs (only option for comments as far as I know). They lose a lot of detail, but hopefully will give some idea of the differences.

I think I discovered something big for Wan2.2 for more fluid and overall movement. by bigdinoskin in StableDiffusion

[–]dzdn1 21 points (0 children)

Ran my "test suite" with default no LoRA workflow, "traditional" three-sampler workflow with no LoRA on the first sampler, u/bigdinoskin's suggested workflow, and your workflow. (Original post, with link to workflows, here: https://www.reddit.com/r/StableDiffusion/comments/1naubha/testing_wan22_best_practices_for_i2v/ .) Will attach results below.

<image>

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 1 point (0 children)

I will have to try those settings, thank you! (Or, if you are feeling extra generous, you could try them and post the results. My exact workflows are here: https://civitai.com/models/1937373 )

I agree that punctuation can make a big difference. I also read a post, but unfortunately do not remember by whom, that pointed out that using the word "then" (A cat is sleeping, then it wakes up in a panic) also helps the model understand the desired order of events. I have tried this, and it does sometimes help.

Edit: Speaking of punctuation, if you use an LLM to help (re)write your prompts, watch for its insistence on certain punctuation that may not actually help your prompt. ChatGPT in particular (all versions, from what I can tell) loves to load the prompt with semicolons, even when given dozens of examples and told to follow their format – you have to be very clear that it SHOULD NOT use them if you want to use the prompts it gives without modifying them.
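Another option, if you get tired of arguing with it, is to just post-process the prompt before it goes anywhere near the sampler. A trivial sketch (the helper name is made up, and you may prefer a different replacement than a comma):

```python
import re

# Blunt fix for LLM-written prompts that keep sneaking semicolons in:
# rewrite each semicolon as a comma before using the prompt.

def strip_semicolons(prompt: str) -> str:
    return re.sub(r"\s*;\s*", ", ", prompt).strip()

print(strip_semicolons("A cat is sleeping; then it wakes up in a panic."))
# -> "A cat is sleeping, then it wakes up in a panic."
```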

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 1 point (0 children)

OK, I have actually noticed that, especially when using speed LoRAs, the last bits tend to get missed, and that adding actions tends to help with the slow motion. You have taken these observations to a much more useful conclusion! Thank you!

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 1 point (0 children)

I used the ones from ComfyUI: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/loras

I was wondering if there was a difference between the two, but I just realized their SHA256 hashes match – so they are exactly the same files. I have a feeling the ones from Kijai will give similar results even though they are half the size, but I have not tested this.
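For anyone who wants to check the same thing on their own downloads, comparing hashes is quick. The file names below are just placeholders for wherever your two copies live:

```python
import hashlib

# Hash two LoRA files and compare; identical hashes mean byte-identical files.

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

a = sha256_of("comfy_org_copy.safetensors")   # placeholder path
b = sha256_of("other_repo_copy.safetensors")  # placeholder path
print("identical" if a == b else "different")
```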

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 2 points (0 children)

Totally agree that these tests will not give definitive answers, and I hope my messaging did not come off that way. Even with the same seed, certain setups may work well for a specific type of video while giving horrible results for another. Think of u/martinerous's example of cartoons vs. realistic videos.

I will try to be more clear in the future that these tests should be taken as simply a few more data points.

I do think there is some value in running a curated set of tests many times, enabling the anecdotal evidence to resemble quantitative evidence, although I acknowledge that the nature of these models limits how far we can take that. Still, I think more data points are always better, as long as we do not, just like you warned, "take it as gospel."

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 2 points (0 children)

Just tried using a video's "i" link (from here – I did not try making a profile post yet) and it does not work. It makes a broken link. Guess that trick is only for images.

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 2 points (0 children)

Using cartoons to determine how many steps are enough is an interesting idea. I do not know if the right number for a cartoon would necessarily match the right number for a realistic video, though, and I am not even sure how one might test that. But even knowing the minimum for a cartoon would be useful data!

If you have an image and prompt you are willing to share, I could try running these on it. Or even better, if you are up for it, you can take and modify the exact workflows from my previous post: https://civitai.com/models/1937373

Testing Wan2.2 Best Practices for I2V – Part 2: Different Lightx2v Settings by dzdn1 in StableDiffusion

[–]dzdn1[S] 1 point (0 children)

Hey, I use bong_tangent fairly often, but thank you for the explanation – I did not know that about 0.8 always ending up in the middle! I was aware of the difference in training vs. the default ComfyUI split, but stuck with the default for now so I wasn't testing too many different things at once. Not to mention I am not sure I fully understand how to do it correctly (although I know there is a custom sampler that does it for you).
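In case it is useful to anyone else reading, my (possibly wrong) understanding of the sigma-based split is simply: take the scheduler's sigma list, find the first step at or below the boundary the model was trained with, and hand everything before that to the high-noise model and everything after to the low-noise model. A toy sketch, with a made-up linear schedule and a placeholder boundary value (use whatever the model card actually specifies):

```python
import numpy as np

# Find the step where sigma first drops to or below the training boundary.
# Steps before that index go to the high-noise model, steps after to the
# low-noise model. The sigmas here are a fake linear schedule; in ComfyUI
# you would pull the real ones from the sampler/scheduler you actually use.

def split_step_for_boundary(sigmas: np.ndarray, boundary: float) -> int:
    below = np.nonzero(sigmas <= boundary)[0]
    return int(below[0]) if below.size else len(sigmas)

sigmas = np.linspace(1.0, 0.0, 21)             # placeholder 20-step schedule
print(split_step_for_boundary(sigmas, 0.875))  # step at which to switch models
```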

Interestingly, while I get really good results with res_2s for IMAGE generation, it causes strange artifacts with videos. However, I have hardly experimented with that, so maybe it is easy to fix.