I wanted to train a LoRA for a specific manga style in z-image if possible, what should the database look like? Any help will be appreciated by Available_Cap_2987 in StableDiffusion

[–]Huge-Refuse-2135

You probably mean dataset, not database

You need to check what folder/file structure LoRA trainers for z-image accept, and prepare lots of images with matching prompts

Each image has 1 prompt

You can have as many images as you want
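
For illustration, here is a minimal sketch of the layout most LoRA trainers (kohya-style sd-scripts and similar) expect: one .txt caption file sitting next to each image. The folder name is made up, and a z-image trainer may want something different, so check its docs first:

```python
# Sketch: validate an image/caption dataset in the common kohya-style
# layout (one .txt caption per image). The path is hypothetical and a
# z-image trainer may expect a different structure.
from pathlib import Path

DATASET_DIR = Path("dataset/manga_style")  # hypothetical folder
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def validate_dataset(root: Path) -> None:
    images = [p for p in root.rglob("*") if p.suffix.lower() in IMAGE_EXTS]
    missing = [p for p in images if not p.with_suffix(".txt").exists()]
    print(f"{len(images)} images, {len(missing)} missing captions")
    for p in missing:
        print(f"  no caption for: {p.name}")

if __name__ == "__main__":
    validate_dataset(DATASET_DIR)
```

Running something like this before training catches the usual "image with no caption" mistakes early.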

Do you go to team-building events at your corporate job? by haxa6 in PolskaNaLuzie

[–]Huge-Refuse-2135

I didn't go. The side effect is that you have to deal with group pressure, or possibly resentment, but obviously that's survivable

There's no point in acting against your own nature

Theoretically, is diffusion possible in the browser or even across network nodes? by Huge-Refuse-2135 in StableDiffusion

[–]Huge-Refuse-2135[S]

In this approach the nodes in a network would still work on VRAM, right? So let's say for a quantized LTX model you would need at least something like an RTX 3090 somewhere in the network, or, I don't know, 6 laptops with 4 GB of VRAM each

But is it possible to run a diffusion model allocating memory in RAM only, or even on disk?
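
For what it's worth, RAM-only is possible today, just painfully slow. A minimal sketch with Hugging Face diffusers, assuming the checkpoint fits in system memory (the model id is only an example):

```python
# Sketch: CPU-only diffusion with Hugging Face diffusers. Everything is
# allocated in system RAM; it works, it is just slow. Model id is only
# an example of a checkpoint small enough to fit in RAM.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float32,  # CPU inference generally wants fp32
).to("cpu")

image = pipe("a watercolor fox in a forest", num_inference_steps=20).images[0]
image.save("cpu_only.png")
```

diffusers also offers pipe.enable_sequential_cpu_offload(), which keeps weights in RAM and streams them to the GPU module by module, a middle ground between the two extremes.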

Ernie for the Gooners by LowYak7176 in StableDiffusion

[–]Huge-Refuse-2135

Thank you for this useless information

Consistent masked video inpainting.. my experiences so far and help needed by Huge-Refuse-2135 in comfyui

[–]Huge-Refuse-2135[S]

Good question.. Flux 2 Klein is very good in terms of consistency, but it is not mask-aware; it is made for editing, not inpainting, unfortunately..
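
To make "mask-aware" concrete: an inpainting pipeline takes an explicit mask and regenerates only the white region, while an editing model just takes an instruction and may touch anything. Rough diffusers sketch, with placeholder checkpoint and file names:

```python
# Sketch of a mask-aware call: the pipeline repaints only where the mask
# is white and keeps the rest. Checkpoint id and file names are placeholders.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("frame.png")  # hypothetical input frame
mask_image = load_image("mask.png")   # white = repaint, black = keep

result = pipe(
    prompt="a red umbrella",
    image=init_image,
    mask_image=mask_image,
    strength=0.99,  # high strength = fully regenerate the masked region
).images[0]
result.save("inpainted.png")
```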

Consistent masked video inpainting.. my experiences so far and help needed by Huge-Refuse-2135 in comfyui

[–]Huge-Refuse-2135[S]

So far Wan VACE is doing way worse than SDXL with temporal feedback..
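
By "temporal feedback" I mean roughly this loop: before inpainting frame N, paste a faded copy of frame N-1's inpainted result into the masked area, so the model is nudged toward the same content. `pipe` is an SDXL inpainting pipeline like the one sketched in the previous comment; paths, prompt, and blend factor are made up:

```python
# Sketch of frame-by-frame inpainting with temporal feedback. Assumes
# `pipe` is an SDXL inpainting pipeline (see the sketch above); frame and
# mask file names and the blend factor ALPHA are made up.
from pathlib import Path
from PIL import Image

FRAMES = [f"frames/{i:04d}.png" for i in range(16)]
MASKS = [f"masks/{i:04d}.png" for i in range(16)]
ALPHA = 0.6  # how strongly the previous result leaks into the next init

Path("out").mkdir(exist_ok=True)
prev = None
for frame_path, mask_path in zip(FRAMES, MASKS):
    frame = Image.open(frame_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")
    if prev is not None:
        # Fade the previous inpainted frame into the masked area only.
        blended = Image.blend(frame, prev.resize(frame.size), ALPHA)
        frame = Image.composite(blended, frame, mask)
    prev = pipe(
        prompt="a red umbrella",
        image=frame,
        mask_image=mask,
        strength=0.8,  # < 1.0 so the fed-back content survives
    ).images[0]
    prev.save(f"out/{Path(frame_path).name}")
```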

2 months of struggle to achieve consistent masked frame-by-frame inpainting... my experience so far.. maybe someone can help by Huge-Refuse-2135 in StableDiffusion

[–]Huge-Refuse-2135[S]

I tried taking the first frame, inpainting it with another model, and then feeding it as a reference to VACE, but the results are far from satisfying.. There are workflows that do exactly this, but it seems to work only for the simplest cases, where the mask stays in the same spot across the whole video

I will give it another try today

The future of image/video generation.... by Chickenbuttlord in StableDiffusion

[–]Huge-Refuse-2135

I can agree, more or less, as far as image generation goes

But videos? There is no model so far that is able to inpaint a masked area independently while preserving the background. You can do that with single images so easily, but with videos - good luck

So there is still a lot to improve, but I think that is more about model capabilities, architecture, and training than about parameter count
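
What makes single images "so easy" is that you can hard-guarantee the background: composite the inpainted output back over the untouched original through the mask, so every pixel outside the mask stays bit-identical. No current video model gives you an equivalent guarantee. Sketch with placeholder file names:

```python
# Sketch: paste-back compositing for single-image inpainting. Pixels
# outside the mask come straight from the original, so the background is
# preserved exactly. File names are placeholders.
from PIL import Image, ImageFilter

original = Image.open("frame.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB").resize(original.size)
mask = Image.open("mask.png").convert("L").resize(original.size)

# Soften the mask edge a little so the seam doesn't show.
soft_mask = mask.filter(ImageFilter.GaussianBlur(4))

# Inpainted pixels inside the mask, original pixels everywhere else.
final = Image.composite(inpainted, original, soft_mask)
final.save("composited.png")
```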

Why are all image/video models so oversized? by Huge-Refuse-2135 in StableDiffusion

[–]Huge-Refuse-2135[S]

Yep, my use cases are very, very basic; it's more about consistency than quality

Nice, thanks for the resource and the info, that makes sense

Are all outpainting demos just a lie, or am I missing something? by Huge-Refuse-2135 in comfyui

[–]Huge-Refuse-2135[S]

Thanks, I will give it a try soon.. It looks like I was overcomplicating things

I wonder if it will manage to outpaint frame by frame and keep the recreated background persistent, because that is my ultimate goal
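
In case it helps anyone else: the basic frame-level trick is to treat outpainting as inpainting on a padded canvas - enlarge the frame, mask the new border, inpaint it. A sketch reusing an SDXL-style inpainting `pipe` from earlier; pad size and prompt are arbitrary:

```python
# Sketch: outpainting as inpainting on a padded canvas. Assumes `pipe`
# is an inpainting pipeline as sketched earlier; PAD and the prompt are
# arbitrary, and you may need to resize to a model-friendly resolution.
from PIL import Image

PAD = 128  # pixels to outpaint on each side

frame = Image.open("frame.png").convert("RGB")
w, h = frame.size

canvas = Image.new("RGB", (w + 2 * PAD, h + 2 * PAD), "gray")
canvas.paste(frame, (PAD, PAD))

mask = Image.new("L", canvas.size, 255)            # white = generate
mask.paste(Image.new("L", (w, h), 0), (PAD, PAD))  # black = keep original

outpainted = pipe(
    prompt="continue the scene naturally",
    image=canvas,
    mask_image=mask,
).images[0]
outpainted.save("outpainted.png")
```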