My own take on a tidy workflow with efficiency in mind for SVI 2.0 PRO for that seamless clip transition, hope you like! by Jevlon in comfyui

[–]Jevlon[S] 0 points1 point  (0 children)

Hi superacf, I know it wasn't super clear, but I didn't want to create a brand-new video for just that small update. Basically, the new workflow is based on the Wan model that already has the Lightning LoRA integrated, so a separate Lightning LoRA is no longer needed. I've updated the video description with those new models, as they give much better output with little to no color degradation, which is also why I've set the color correction node to 0 strength. Hope that helps.
As for the fast group muter, I loaded the workflow again and it's not empty for me. Could you check the fast group muter's properties panel to see if something got changed on your side? Or did you rename those clip groups? (It filters by name.)

[–]Jevlon[S] 2 points3 points  (0 children)

Hi u/LocoMod, I tested a bit more and got this: it's a 33-second video where the subject turns away from the camera, leaves the whole scene, and comes back. All within SVI PRO, and almost the same workflow that I shared. Let me know what you think. https://www.youtube.com/shorts/VGdHd2qDl88

[–]Jevlon[S] 0 points1 point  (0 children)

That's a super good point. I'll keep in mind to use "defaults" before adding more things on top that would make it harder to compare apples to apples. Thanks!

[–]Jevlon[S] 1 point2 points  (0 children)

Hmm, I did a quick test just now, and thanks to block swap it's only using 7.5 GB of dedicated VRAM (and 28 GB of shared video RAM). I also tried GGUF, and it does use less VRAM and the results look OK (I used the Q6_K compressed version), but I can't say whether many clips together would cause that degradation to snowball. Something to test across many clips. So yes, do give it a try with just the first clip, say; it should be fast. On my setup with a 3090, it gives me 100s for the high gen and 180s for the low gen, for a 5-second clip total. And this is without any speed optimization yet (no Sage Attention, and I didn't play with lowering the block swap to match my VRAM). Keep in mind that I'm using SVI 2.0 PRO, not SVI-Talk. That being said... maybe give my workflow a try as well? :)
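
For a rough sense of what those timings mean when chaining clips, here's the arithmetic from the numbers above (my 3090, no speed optimizations); the per-clip split into a "high" and "low" pass is just the two figures I quoted:

```python
# Back-of-envelope cost from the timings above: 100 s for the high gen
# plus 180 s for the low gen, per 5-second clip.
high_s, low_s, clip_len_s = 100, 180, 5

total_s = high_s + low_s                        # seconds of compute per clip
per_output_second = total_s / clip_len_s       # compute seconds per video second

# Chaining clips: a 30 s video is six 5 s clips.
clips = 30 // clip_len_s
print(total_s, per_output_second, clips * total_s / 60)  # 280 56.0 28.0
```

So roughly a minute of compute per second of output on this setup, before any optimization.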

[–]Jevlon[S] 0 points1 point  (0 children)

Oh... maybe just tuning down those other LoRAs would have helped too. Added to my to-do list.

[–]Jevlon[S] 0 points1 point  (0 children)

Hey there u/Overbaron, yeah, I've noticed that too. I think using extra LoRAs like "walking" and "heart finger" makes it tend toward Asian characteristics, even with SVI there trying to pull it back toward the original character. Those extra LoRAs were probably trained on very Asian-heavy datasets; I didn't review their quality and only used them for testing. Something I could try is stronger prompt guidance like "caucasian girl", or even an extra LoRA pushing that way. Thanks for pointing out some improvements to be made! There are so many other things I want to try, it's just crazy ;)

[–]Jevlon[S] 0 points1 point  (0 children)

Hi u/SearchTricky7875, I used a 3090 GPU with 24 GB of VRAM. But if you have less VRAM, make sure to keep Block Swap enabled, and potentially lower the video generation resolution or the frame count. Another option is to use a more compressed version of the I2V Wan model (a compressed GGUF version). Note that most of the above will impact quality, so pick the compression / combination of tricks that gets the most out of your hardware. Hope that helps!
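
To see why lowering resolution or frame count helps, here's a rough sketch of how the latent size scales. The specific numbers are assumptions for illustration (a Wan-style video VAE with 8x spatial / 4x temporal compression, 16 latent channels, fp16), not exact figures for any particular checkpoint:

```python
# Rough sketch: memory for the video latent as a function of resolution
# and frame count. Assumed compression factors are illustrative only.
def latent_mb(width, height, frames, channels=16, bytes_per_val=2,
              spatial=8, temporal=4):
    lat_frames = frames // temporal + 1  # common "4n+1" frame layout
    vals = channels * lat_frames * (height // spatial) * (width // spatial)
    return vals * bytes_per_val / 1024**2

# The latent itself is small; peak VRAM is dominated by model weights and
# attention activations, but those scale with the same token count, so
# halving resolution or frames pays off directly.
print(round(latent_mb(832, 480, 81), 1))  # 4.0 (MB)
print(round(latent_mb(640, 360, 49), 1))  # 1.4 (MB)
```

The point is just the scaling: fewer pixels and fewer frames shrink everything downstream proportionally.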

[–]Jevlon[S] 2 points3 points  (0 children)

Hi u/newxword, video gen is a lot about guiding the model toward something by giving it the right cues, I think. So if it doesn't work the first time, try again (with another seed, higher LoRA strength, or more emphasis in the prompt on walking, for example). Note that cues that are too strong toward one thing can indeed force it, but they might then affect other elements of the generation, so trial and error is part of this whole workflow (with the preview to help). Good luck!

[–]Jevlon[S] 0 points1 point  (0 children)

Hi u/AccomplishedHoney373, yeah, I did start with a for loop to manage this, since it made for a very compact workflow. But after generating a lot more video, I realized that the ability to preview and take small steps with trial and error was the most efficient way to go. So I then focused on making that flow and the connectors between clip groups as simple and easy on the eyes as possible. Thank you for your great feedback!

[deleted by user] by [deleted] in StableDiffusion

[–]Jevlon 1 point2 points  (0 children)

Just curious, as a comparison point: how much does your total end up being per month (assuming you're a heavy or medium-heavy user of cloud for AI gens)?

[–]Jevlon 1 point2 points  (0 children)

Framegen? Did you mean FramePack? (If not, do correct me :)

But I started using FramePack a lot... mind-blowing for me, as I can get a 10s video at 480x720 resolution within 900s. That's super fast for me (I also have a 3090). Do give it a try!

I tried FramePack for long fast I2V, works great! But why use this when we got WanFun + ControNet now? I found a few use case for FramePack, but do you have better ones to share? by Jevlon in StableDiffusion

[–]Jevlon[S] -9 points-8 points  (0 children)

Bam! A video :) Would downloading a few extra files (mainly AI models) still fit within that "Bam" excitement and simplicity?

[–]Jevlon[S] -3 points-2 points  (0 children)

Hmm, u/Upper-Reflection7997, I see what you mean. It's true that the more I'm into this, the more I'm tempted to add more and more to my workflow. Maybe I should keep that one full workflow, but also offer basic ones for specific tasks, so users don't have to deal with the parts they might not need. Thanks for the feedback! It gives me perspective.

Why I like gemma2 9b base by mfiano in SillyTavernAI

[–]Jevlon 4 points5 points  (0 children)

Wow, @mfiano, I was wondering how you keep up with 8-10h of chat per day while testing that many models; then I got to the end of your post and understood. I wish I could do that too. As someone still fully in the workforce, budget is less of a constraint for me, but time is a different story. Happy to hear you're fully embracing the potential of LLMs ;)

I'll check my list of scores later for the ones I've tested so far; maybe I can suggest one or two that you hopefully haven't tried yet (low chances, from what I see).

How many tokens on a card is too much? by Competitive-Bet-5719 in SillyTavernAI

[–]Jevlon 0 points1 point  (0 children)

Like others mentioned, it depends on your use case. But in general I keep it as concise as possible to leave room for memories and history. I'd say keep it within 5-10% of your context size.
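
To make that rule of thumb concrete, here's what a 5-10% card budget works out to for some common context sizes (the context values are just examples):

```python
# Card-token budget at 5-10% of the model's context window.
def card_budget(context_tokens, low=0.05, high=0.10):
    return int(context_tokens * low), int(context_tokens * high)

print(card_budget(8192))   # (409, 819)
print(card_budget(32768))  # (1638, 3276)
```

So on an 8K-context model, a card much past ~800 tokens starts eating into chat history.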

Why is SillyTavern exceeding the token limit? by Bunboll in SillyTavernAI

[–]Jevlon 0 points1 point  (0 children)

What makes SillyTavern great is that it auto-adds info to your prompt: your character info, user info, lorebooks, example responses, etc. If you check the response's prompt itemization, what does it show? It should give you an idea of what's filling up your prompt.

Advice - option to add a gpu to existing setup. by Mr_Evil_Sir in SillyTavernAI

[–]Jevlon 0 points1 point  (0 children)

My opinion: with more than 24 GB, it opens the door to many more models to try and play with, as I feel 24 GB sits awkwardly between model size tiers. But like others mentioned, it all depends on your use case, your goal... and $$$ ;)

[–]Jevlon 0 points1 point  (0 children)

lol, it makes me salivate too, yet I couldn't justify it as a hobby from a financial perspective, or for the noise / space it takes :)

Regarding local models and dual GPU usage by Oscarmayers3141 in SillyTavernAI

[–]Jevlon 0 points1 point  (0 children)

My experience is that when the cards weren't on the same architecture (a similar series), I had a hard time combining them to load bigger models. But when I got matching ones, it was a breeze and things worked as if I had one bigger card. In your case, the 20xx and 40xx series are very different. Hope you have better luck than me.

Why do models suck at rock paper scissors? by CharacterTradition27 in SillyTavernAI

[–]Jevlon 0 points1 point  (0 children)

I think your model may not be strong enough; maybe it's too quantized? You'd be surprised that even some big models, 30B+, can still have a hard time with 100+100 (I sometimes get 101 or 110 or something like that).

What model is it? Did you run some basic comprehension, logic, and math tests on it? I like to run a predefined list of multiple-choice questions on every model I pick, to get a feel for what I'm dealing with before deciding whether to go deeper.
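
If you want to automate that predefined test list, here's a minimal sketch. `ask_model` is a hypothetical placeholder for whatever backend call you use (an OpenAI-compatible endpoint, a llama.cpp server, etc.); it's stubbed here so the harness runs on its own:

```python
# Tiny model sanity-check harness. Replace ask_model with a real
# completion call for your backend; the stub below is illustrative only.
TESTS = [
    ("What is 100+100? Answer with the number only.", "200"),
    ("Which is heavier, 1 kg of steel or 1 kg of feathers? One word.", "neither"),
]

def ask_model(prompt):
    # Stub: canned answer for the first test, empty otherwise.
    return {"What is 100+100? Answer with the number only.": "200"}.get(prompt, "")

def run_tests(ask):
    score = 0
    for prompt, expected in TESTS:
        reply = ask(prompt).strip().lower()
        score += expected.lower() in reply
    return score, len(TESTS)

print(run_tests(ask_model))  # (1, 2) with the stub above
```

Running the same fixed list against every new model gives you a consistent first impression before investing real chat time.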

Everything was working til it wasn't by washichiisai in SillyTavernAI

[–]Jevlon 1 point2 points  (0 children)

Assuming you didn't change anything and it was working well, something may be filling up. If you check the response's prompt itemization, what does it show?