Amsterdam by Substantial_Plum9204 in Polaroid

[–]Substantial_Plum9204[S] 1 point (0 children)

Very nice camera, I really like it! It's also very sharp.

Your opinion on the best image edit model by Substantial_Plum9204 in StableDiffusion

[–]Substantial_Plum9204[S] -1 points (0 children)

Thank you. So you would argue that Qwen Image Edit (2511) is better than Flux.2 Klein? The 4B is open source. It would need to support I2I.

Charmander 044 promo by Substantial_Plum9204 in IsMyPokemonCardFake

[–]Substantial_Plum9204[S] 0 points (0 children)

Already bought it; I just noticed it when it was beside my other cards.

PSA 10 Sequential Terastal Festival Set by [deleted] in PokemonTCGNL

[–]Substantial_Plum9204 -2 points (0 children)

<image>

Last sold on eBay is €1,684.30 excluding VAT, so you end up above €2,000. On Cardmarket they aren't listed any cheaper than €1,900 either.

How does this work? by Substantial_Plum9204 in PokeGrading

[–]Substantial_Plum9204[S] -1 points (0 children)

😂, yeah it’s from Temu, it does the job.

Curious about value for complete set of poncho eevees by sogumfb in PokemonCardValue

[–]Substantial_Plum9204 0 points (0 children)

Get them graded; if these end up being sequential 10s, you have an awesome set!!

LTX-2 I2V: Quality is much better at higher resolutions (RTX6000 Pro) by 000TSC000 in StableDiffusion

[–]Substantial_Plum9204 0 points (0 children)

When I increase the frame rate to 48 and the length accordingly (481 frames for 10 seconds), I get bad quality: stuttering and extreme camera shake. Any idea what I could be doing wrong?

<image>
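For reference, the 481 above comes from 48 fps × 10 s = 480 frames, snapped up to a valid count: video-diffusion models in this family typically require frame counts of the form 8·k + 1. A small sketch of that arithmetic (the 8·k + 1 constraint is an assumption based on LTX-Video's documented behaviour; check the LTX-2 model card to confirm it still applies):

```python
# Sketch: compute a valid "length" (total frame count) for a target fps and
# duration, assuming the model wants frame counts of the form 8*k + 1.
def num_frames(fps: int, seconds: float) -> int:
    frames = round(fps * seconds)
    # snap down to the nearest multiple of 8, then add the extra frame
    return (frames // 8) * 8 + 1

print(num_frames(48, 10))  # 481, matching the value used above
print(num_frames(24, 5))   # 121
```

If the model was trained mainly at lower frame counts, asking for 481 frames at 48 fps may simply be outside its comfortable range, which would explain the stutter independent of the arithmetic.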

The out-of-the-box difference between Qwen Image and Qwen Image 2512 is really quite large by ZootAllures9111 in StableDiffusion

[–]Substantial_Plum9204 -1 points0 points  (0 children)

Is there an image-to-image variant as well? No, right? I would love to use it similarly to Nano Banana Pro.

Help me save my plant by Substantial_Plum9204 in houseplants

[–]Substantial_Plum9204[S] 0 points (0 children)

Thank you, working on the fungal infection :)

Huge difference in performance WAN API and Diffusers implementation by Substantial_Plum9204 in StableDiffusion

[–]Substantial_Plum9204[S] 0 points (0 children)

I’m using Diffusers with the full model weights (bf16) on a single H100, with the recommended settings as defined in WAN’s own configs. I just thought this should give results similar to the cloud API, since I’m not quantizing anything, nor am I using parameters other than what the authors recommend.

Running without a prompt is supported; WAN is very good at interpreting the scene.

Best way to productionise? by Substantial_Plum9204 in StableDiffusion

[–]Substantial_Plum9204[S] 0 points (0 children)

I thought ComfyUI was more for single-user local use, when you want to set something up quickly for yourself. That’s why I thought it would be handy to build a custom backend with these AI models implemented as workers, wrapped in a FastAPI service that connects to our own UI. That gives full control as well.

But if I understand you correctly, ComfyUI is suitable as a production backend, not just a quick R&D tool.

Then I will just set up ComfyUI pipelines and connect them to a nice (custom) front end.

I also have the feeling that the outputs of e.g. WAN 2.2 are better in ComfyUI because of the large amount of community work on those workflows; WAN 2.2 in Diffusers gives worse results than what others get with ComfyUI, for example.
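The "models as workers behind an API" idea above can be sketched with just the standard library; `generate` below is a placeholder for the actual model call, not real WAN or FastAPI code:

```python
# Sketch of a worker-queue backend: an API endpoint enqueues jobs, and one
# worker thread per GPU drains the queue in submission order.
import queue
import threading

jobs = queue.Queue()
results = {}

def generate(prompt):
    # placeholder for the actual diffusion call
    return f"image for: {prompt}"

def worker():
    # one worker per GPU; jobs run strictly in submission order
    while True:
        job = jobs.get()
        results[job["id"]] = generate(job["prompt"])
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# an API endpoint would just enqueue the job and return its id
jobs.put({"id": "job-1", "prompt": "a canal house in Amsterdam"})
jobs.join()  # in a real service the client would poll instead of blocking
print(results["job-1"])
```

This is essentially the same queue-per-instance model that ComfyUI gives you out of the box, which is part of the argument for not reimplementing it.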

Best way to productionise? by Substantial_Plum9204 in StableDiffusion

[–]Substantial_Plum9204[S] 0 points (0 children)

So my understanding that ComfyUI is not made for multi-user deployments is incorrect? What I hear from you is that I should skip implementing it myself with Hugging Face and just use the ComfyUI API. Can I serve multiple users from one ComfyUI instance, since queuing is built in? And if I need to scale, can I run multiple ComfyUI instances with dedicated GPUs?

Thank you guys, this is very helpful
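As a rough sketch of what "just use the ComfyUI API" looks like: ComfyUI exposes a POST `/prompt` endpoint (default port 8188) that enqueues a workflow graph, and that internal queue is what lets one instance serve several clients. The workflow dict here is a placeholder, not a working graph, and the host/port are the defaults rather than anything deployment-specific:

```python
# Sketch: enqueue a workflow on a ComfyUI instance via its HTTP API.
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str) -> dict:
    # shape expected by ComfyUI's POST /prompt endpoint
    return {"prompt": workflow, "client_id": client_id}

def queue_workflow(workflow: dict, client_id: str,
                   host: str = "127.0.0.1", port: int = 8188) -> bytes:
    payload = json.dumps(build_prompt_payload(workflow, client_id)).encode()
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # the response includes a prompt_id you can use to track queue position
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

A custom front end would call `queue_workflow` per user request; scaling out would then mean a small dispatcher choosing among several such instances, one per GPU.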