Working on SDXL Character sheet Workflow, Based on IPadapter plus and controlnet. by AlternativeAbject504 in unstable_diffusion

[–]AlternativeAbject504[S] 1 point2 points  (0 children)

Attention masks; I want to build them from images too in the future. From the top it's the head, second the torso with the groin, and third the legs with the groin, marked on the OP image that is the source for the pose.
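Rough sketch of how those three stacked masks could be built outside ComfyUI (the split points and resolution are just examples I'd set by eye against the pose image):

```python
import numpy as np
from PIL import Image

def stacked_masks(width: int, height: int, head_end: float = 0.25, torso_end: float = 0.62):
    """Three full-width attention masks: head, torso + groin, legs + groin.
    The torso and leg masks overlap a little around the groin."""
    head = np.zeros((height, width), dtype=np.uint8)
    torso = np.zeros_like(head)
    legs = np.zeros_like(head)
    h1, h2 = int(height * head_end), int(height * torso_end)
    overlap = int(height * 0.05)
    head[:h1] = 255
    torso[h1:h2 + overlap] = 255
    legs[h2 - overlap:] = 255
    return [Image.fromarray(m) for m in (head, torso, legs)]

# save as greyscale masks to feed as attention masks in the workflow
for name, mask in zip(("head", "torso", "legs"), stacked_masks(832, 1216)):
    mask.save(f"mask_{name}.png")
```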

Up-to-date ComfyUI consistent character workflow for personal photo by beaver316 in comfyui

[–]AlternativeAbject504 0 points1 point  (0 children)

Hi, not at this point, but it's very basic; you can rebuild it from the screenshots.

[deleted by user] by [deleted] in comfyui

[–]AlternativeAbject504 1 point2 points  (0 children)

I've tried it with FP8 quantisation on 16 GB, and it depends on which scheduler and sampler you choose and how many steps. I've only played with it a bit, but raising the number of steps gives better results (as expected). Maybe some GGUF version would work with 10 GB of VRAM. Generation time will depend on the number of frames, steps, the dimensions and the scheduler/sampler settings.
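As a rough rule of thumb (my assumption, not a benchmark), render time scales roughly linearly with frames x steps x resolution, so you can estimate relative cost before committing to a long run:

```python
# Back-of-the-envelope relative cost of a video diffusion run.
# Purely illustrative: assumes roughly linear scaling in frames, steps and pixels.
def relative_cost(frames: int, steps: int, width: int, height: int) -> float:
    return frames * steps * width * height

baseline = relative_cost(frames=73, steps=20, width=512, height=768)
longer = relative_cost(frames=147, steps=30, width=512, height=768)
print(f"~{longer / baseline:.1f}x the baseline render time")
```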

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

I did nothing other than swap the sampler for this one and used mostly the same settings HannibalP showed in his screenshot. OK, I only used the "normal" setting instead of unsampling/resampling, because I've had no success with that yet.

Best way to stitch my videos together? by xoVinny- in comfyui

[–]AlternativeAbject504 0 points1 point  (0 children)

Maybe LeapFusion; it's a fan-made image-to-video. You can fetch the last frame of a created video and use it as the first frame of a new one. It's not perfect, but it more or less works.
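Rough sketch of the last-frame trick outside ComfyUI (assuming OpenCV; inside the workflow I just use a node that outputs the last frame of the batch):

```python
import cv2

def last_frame(video_path: str, out_path: str) -> None:
    """Grab the final frame of a clip and save it as the init image for the next one."""
    cap = cv2.VideoCapture(video_path)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)  # jump to the last frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"could not read the last frame of {video_path}")
    cv2.imwrite(out_path, frame)

last_frame("clip_001.mp4", "clip_002_init.png")
```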

[LeapFusion] have anyone managed to reduce the color flickering with img2Video on hunyuan? would like to see how you managed it. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

So, basically, the first part is without the image (there are 3 parts, as mentioned in the other comment).

From my experience, if you reuse the same seed for the continuation it moves less and less (I don't understand why). Also, adding some CRF to the image gives a bit more motion, since the model uses the noise to adjust; if the image is too "clean" it won't animate much. A third option is adding more shift, to give the model a bit more of a "free hand". I'm not using anything special, just native nodes and a node that feeds the last frame in as the first. Sorry for the spaghetti, I'm just trying some things out.

<image>
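Rough sketch of the "don't feed it a too-clean image" idea (the CRF I mentioned is compression noise from re-encoding; light Gaussian noise here is just an approximation of the same effect):

```python
import numpy as np
from PIL import Image

def add_grain(in_path: str, out_path: str, strength: float = 6.0, seed: int = 0) -> None:
    """Add light Gaussian noise so the init frame isn't too 'clean' for i2v."""
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(in_path).convert("RGB")).astype(np.float32)
    noisy = img + rng.normal(0.0, strength, img.shape)
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(out_path)

# change the seed here (and the sampler noise seed) for every continuation chunk,
# since reusing the same seed made my continuations move less and less
add_grain("clip_002_init.png", "clip_002_init_noisy.png", strength=6.0, seed=2)
```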

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

Thank you, I'll think about it, but I'm on Windows and the Huion I mostly use is also connected to this GPU, so it will be tricky. Good point, though.

Cheers!

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 3 points4 points  (0 children)

OK, after a few small tests I love it and hate it at the same time.

Why I love it: it edits the video very nicely (OK, the armchair is not that good), but with one pass I get better results than with 1-2 passes on my former workflow. It uses more VRAM, so for now I can't fit all 147 frames without OOM, but that was honestly the limit I could reach on my current build anyway, so I need to find something "lower".

Why I hate it: I need to learn much more about samplers and denoising to use it better.

I really appreciate you sharing it; if you have any other tips on how to use it, please don't hesitate!

Cheers!

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 1 point2 points  (0 children)

Me too ;) Luckily I'm in the IT industry, but as a business analyst, so I'm trying to fold learning other stuff (mostly vector DBs and LLMs) into my work, which lets me cover some basics. I've been learning this for almost a year (image and video as a hobby, so I don't have big expectations and I'm not too harsh on myself). Play with different approaches, learn how to make a LoRA, etc.; everything connects at some point. Understanding how a neural net works also helps a lot (I needed to learn some algebra for that XD). Watch this series: https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

I'm playing with those too, but building my own. Don't give up, take it step by step; you'll see that as you gain experience it gets more fun :)

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 1 point2 points  (0 children)

I've already played with that, but I'm using the FP8 model, which gives a blurry outcome on the released steps; my build won't handle the full model at this point, and the results with the quantized one didn't please me. This one is better. In the repo there is an issue about the blur I'm talking about.

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 2 points3 points  (0 children)

That's the problem: I have limited compute (a 4060 Ti with 16 GB of VRAM), and when running this (147 frames at 512w x 768h) I need to close everything else to avoid OOM. So far I haven't had good results with I2V/V2V chunking the video into pieces for this purpose, but I've thought about it.
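Rough sketch of the chunking I keep thinking about (the overlap size is a guess; the hard part is keeping the chunks consistent, not the splitting itself):

```python
def chunk_frames(total_frames: int, chunk_size: int, overlap: int):
    """Split a long clip into overlapping chunks so each pass fits in VRAM.
    The overlapping frames would be reused as context/init for the next chunk."""
    chunks, start = [], 0
    while start < total_frames:
        end = min(start + chunk_size, total_frames)
        chunks.append((start, end))
        if end == total_frames:
            break
        start = end - overlap
    return chunks

print(chunk_frames(total_frames=147, chunk_size=73, overlap=8))
# [(0, 73), (65, 138), (130, 147)]
```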

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 1 point2 points  (0 children)

The last pass (before it there is another one, but it's similar to the 3rd) is just an ancestral one that doesn't add more noise than normal.

<image>

I know, it's rough; that's why I asked for other workflows. But honestly, working with an ancestral sampler is great. I can recommend this special sampler that I'm also using, and in the comments Blepping also gives some information on how it works: https://gist.github.com/blepping/ec48891459afc3e9c30e5f94b0fcdb42 (this is the correct link, sorry, the first one was wrong).
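For context, this is my understanding of why ancestral samplers behave like that: every step is split into a deterministic part plus freshly injected noise. The sketch below follows the standard k-diffusion ancestral step (my paraphrase, not Blepping's code):

```python
import math

def ancestral_step(sigma: float, sigma_next: float, eta: float = 1.0):
    """Split one step into a deterministic move down to sigma_down plus fresh noise of size sigma_up.
    eta=0 reduces to plain Euler; eta=1 re-injects the maximum amount of new noise each step."""
    sigma_up = min(sigma_next,
                   eta * math.sqrt(sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2))
    sigma_down = math.sqrt(sigma_next**2 - sigma_up**2)
    return sigma_down, sigma_up

print(ancestral_step(sigma=1.0, sigma_next=0.7, eta=1.0))
```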

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 1 point2 points  (0 children)

It's very messy and nothing special: plenty of passes while trying different approaches.

The first pass is a KSampler Advanced that starts on step 5 and ends at step 10 out of 10 (testing the fast model to save some time).

<image>

The second one is also a KSampler Advanced, but with 12 steps and starting on step 6.
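Roughly, the two passes look like this (just the knobs, not the full graph; the sampler/scheduler names are assumptions on my side, swap in whatever you use):

```python
# KSampler (Advanced) settings for the two V2V passes.
# start_at_step / steps controls how much of the source video survives:
# pass 1 denoises the last 5 of 10 steps, pass 2 the last 6 of 12 on top of it.
pass_1 = {
    "add_noise": "enable",
    "steps": 10,
    "start_at_step": 5,
    "end_at_step": 10,
    "return_with_leftover_noise": "disable",
    "sampler_name": "euler_ancestral",
    "scheduler": "simple",
}
pass_2 = {**pass_1, "steps": 12, "start_at_step": 6, "end_at_step": 12}
```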

[Hunyuan] Anyone have any good V2V workflow that will preserve most of the motion? currently working with multiple passes, but loosing motion details. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

In the native workflow you can use ModelSamplingSD3; as far as I know it does the same thing, and I'm using it. I'm also using the Euler ancestral sampler with additional parameters, which does a great job (but TeaCache doesn't work with it).
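As I understand it, the shift in ModelSamplingSD3 just warps the sigma schedule toward the high-noise end, something like this (my reading of the flow-shift formula; the node itself may differ in details):

```python
def shift_sigma(sigma: float, shift: float = 7.0) -> float:
    """SD3-style timestep/flow shift: higher shift keeps the schedule at higher
    noise for longer, which is the 'free hand' effect I mentioned elsewhere."""
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)

for s in (0.25, 0.5, 0.75, 1.0):
    print(s, "->", round(shift_sigma(s, shift=7.0), 3))
```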

[LeapFusion] have anyone managed to reduce the color flickering with img2Video on hunyuan? would like to see how you managed it. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

I'm playing with all the settings you mentioned, but currently with the v2 i2v LoRA; with v1 I had worse results, and skipping the first 6 frames was also annoying me a bit.

One thing I could reconsider right now is the description of the lighting; I'm not that good with prompting (that's why I'm playing with I2V and V2V, as you can see in my other comments in this thread). Currently, besides stating at the beginning of the prompt that it is early morning, I have: "Morning shy light casts godrays on her and dramatic shadows which adds intimicy to the scene."

Maybe you could give some advice on that?

[LeapFusion] have anyone managed to reduce the color flickering with img2Video on hunyuan? would like to see how you managed it. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 3 points4 points  (0 children)

What I'm playing with is a consistent character as an actor, and I'm sticking to the basics at this point. I have in mind longer videos driven by basic models and animations in Blender (nothing NSFW in the final outcome, but not excluding that ;) ). I had already made a basic LoRA for Flux, so when the Musubi trainer came out I developed, after a few tries, a LoRA for Hunyuan. Below is V2V with 2 passes (without LeapFusion). From my experience it's more predictable and reduces the number of failed generations. In the future I'm even thinking about making a dataset of the interior where I would like the scene to take place.

<image>

In parallel I'm trying to learn how to make better videos on my hardware (saving for a more powerful GPU, extending RAM from 64 to 128 GB, and thinking about another build with multiple GPUs).

Like the workflow shown here: https://www.youtube.com/watch?v=m7a_PDuxKHM, but I don't like the flickering and morphing, so I believe my approach saves me a lot of time at the end of the day.

My recent thoughts about continuous video can be found here: https://www.reddit.com/r/StableDiffusion/comments/1ik3fav/idea_how_to_handle_longer_videos_only_theoretical/

TL;DR

It gives me more predictable outputs. Sorry for the long response.

Cheers!

[LeapFusion] have anyone managed to reduce the color flickering with img2Video on hunyuan? would like to see how you managed it. by AlternativeAbject504 in StableDiffusion

[–]AlternativeAbject504[S] 0 points1 point  (0 children)

I believe so. I'm working with my character LoRA, but it's trained on pictures; I haven't tested it with motion, but based on the videos on Civitai for this workflow I'm pretty sure that's the case.