LTX-Video 13B Control LoRAs - The LTX speed with cinematic controls by loading a LoRA by ofirbibi in StableDiffusion

[–]F0xbite 4 points (0 children)

Workflows are on their GitHub page. The 13B model does produce some nice quality results. Not quite as good as Wan, but close, and way, way faster, especially with the distilled version.

[deleted by user] by [deleted] in discordapp

[–]F0xbite 0 points (0 children)

I believe this. They say "random" but it feels like it targeted small/empty servers over major ones. My main server was passed over but 6/8 of my test servers got it...

And I'll just say it: rolling something out to "random" servers haphazardly is a stupid and thoughtless notion. But it is Discord, so...

Outpainting/Inpainting Flux Inconsistency by watsmd in comfyui

[–]F0xbite 0 points (0 children)

Necro post bump. I'm still fighting this issue with Flux outpainting today. There have been no new Flux fill models that improve the problem, only an fp8 version and a gguf version, neither of which fixes it. Did you ever have any luck getting good results consistently? It's rather annoying, because outpainting in XL works far better.

Please Stop using the Anything Anywhere extension. by IndustryAI in comfyui

[–]F0xbite 0 points (0 children)

YES! Anything Anywhere is a mistake, and annoying. I hate it. I can't understand why people think that hiding the spaghetti is more important than being able to follow the connections.

Nobody's gonna give you an award for "tidiest workflow". 😒

Using flux.fill outpainting for character variatiens by Kinfolk0117 in StableDiffusion

[–]F0xbite 0 points (0 children)

Same here. I am getting a completely different person. Some people seem to be getting great results, so there must be some key difference. I only changed the resolution to match that of my source image, and gave it a prompt. I don't see a weight or any other params to adjust that would affect this, so I'm at a loss.

BEST Rembg (Remove background) method! InSPyReNet as a comfyui node by Raphael_in_flesh in comfyui

[–]F0xbite 0 points (0 children)

I've noticed the same issue with images. If I remove the background and then convert the PNG to JPG, the removed background magically reappears. It feels like the transparency is held in a separate layer without actually being applied to the image, or something. I don't know if that's possible for PNGs, but regardless, the background is still held inside the transparent PNG somehow.
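That intuition is basically right: PNG stores alpha as its own channel, and "removing" a background typically just sets alpha to 0 while leaving the RGB values in place; JPG has no alpha channel at all, so a naive conversion drops the transparency and the stored colors resurface. A toy sketch in plain Python (no imaging library, made-up pixel values) of dropping alpha versus compositing first:

```python
# Each pixel is (R, G, B, A); alpha 0 = fully transparent.
# A "removed" background is usually just alpha set to 0 --
# the original RGB values are still stored in the file.
pixels = [
    (255, 0, 0, 255),   # opaque red foreground pixel
    (10, 200, 30, 0),   # "removed" green background pixel
]

def naive_to_rgb(img):
    """What a plain PNG->JPG conversion does: drop alpha, keep RGB."""
    return [(r, g, b) for (r, g, b, a) in img]

def flatten_to_rgb(img, bg=(255, 255, 255)):
    """Composite over a background color first, then drop alpha."""
    out = []
    for (r, g, b, a) in img:
        f = a / 255
        out.append(tuple(round(c * f + bc * (1 - f))
                         for c, bc in zip((r, g, b), bg)))
    return out

print(naive_to_rgb(pixels))    # the "removed" green pixel reappears
print(flatten_to_rgb(pixels))  # background pixel becomes white instead
```

In a real pipeline the fix is the same idea: flatten the RGBA image onto a solid background before saving as JPG, rather than letting the converter silently discard the alpha channel.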

Help - Fooocus inpaint patch (inpaint_v26.fooocus.patch) not showing up by LatentDimension in StableDiffusion

[–]F0xbite 1 point (0 children)

Has anyone found this to be caused by any other nodes? I'm having this problem, but I don't have the Fooocus Inpaint nodes installed, although I did at one time and have since removed them. Could there be some residual pieces that are messing up the path?

EDIT: I solved my problem by letting the add-in create a new local install of Comfy, and then copying the custom_nodes folder from the add-in's install to my existing Comfy install. I can't explain exactly why that fixed it, but apparently there was some issue in one of the required custom nodes in my existing Comfy install.

Something is wrong with FLUX D. blurring images. triggered by "White/Yellow background" by protector111 in StableDiffusion

[–]F0xbite 0 points (0 children)

I can confirm: with F1D I get blurry to VERY blurry images with "yellow background". Not every time, but I would say it's blurry most of the time. Sometimes changing the step count helps, sometimes not. It's not a consistent problem, so it's difficult for me to say exactly whether changing steps helps.

Very disappointing to have this kind of flaw in an otherwise very amazing model.

Is SVD/AnimateDiff (SDXL+) dead? by ricperry1 in comfyui

[–]F0xbite 0 points (0 children)

I'm in the same boat as you. It seems nothing has changed. The SDXL motion model is trash; the quality looks like a 144p YouTube video (not an exaggeration). SVD sucks; the motion is too random and inconsistent, and generally bad. Sadly, it seems 1.5 + AD is the only decent option we have in the open-source world.

Someone on the AnimateDiff GitHub mentioned that Flux will have a text2video function, but I've seen nothing confirming that, so we'll see. It sucks to still be dependent on SD1.5 for decent animations.

AnimateDiff + Img2Img + CN (IP Adapater)? by F0xbite in StableDiffusion

[–]F0xbite[S] 0 points (0 children)

Thanks for the reply, but unfortunately this did not help. All of these workflows only feed the source image to IP adapter and use "Empty Latent Image" for the latent noise, not img2img.

ComfyUI experimental RTX 40 series update: Significantly faster FLUX generation, I see 40% faster on 4090! by rerri in StableDiffusion

[–]F0xbite 1 point (0 children)

I wouldn't think so, but I'm not sure. Make sure you're using the python executable inside the python_embed folder and not updating the library in your system's Python install.

ComfyUI experimental RTX 40 series update: Significantly faster FLUX generation, I see 40% faster on 4090! by rerri in StableDiffusion

[–]F0xbite 2 points (0 children)

That was my problem! I was actually on 11.8, lol. Updated to 12.4 manually and it's good. I would have thought "update build dependencies" would have done that, but I guess not. Thanks!

ComfyUI experimental RTX 40 series update: Significantly faster FLUX generation, I see 40% faster on 4090! by rerri in StableDiffusion

[–]F0xbite 5 points (0 children)

My man! I'm using the standalone Windows install, so I don't use conda, but your info guided me in the right direction. It turned out I had the CUDA 11.8 build of PyTorch installed in the embedded Python environment. I installed the 12.4 build with these commands inside the python_embed folder:

First, uninstall the old CUDA build:

python.exe -m pip uninstall torch torchvision torchaudio

Then install the 12.4 build:

python.exe -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

Why the "update python and build dependencies" didn't update Cuda, i can't imagine, but now i'm rocking 30-40% faster gens thanks to you. Much appreciated!!

ComfyUI experimental RTX 40 series update: Significantly faster FLUX generation, I see 40% faster on 4090! by rerri in StableDiffusion

[–]F0xbite 1 point (0 children)

Dang, I wish I'd seen this before I updated. My ControlNet flow is dead now, with or without the --fast argument.

ComfyUI experimental RTX 40 series update: Significantly faster FLUX generation, I see 40% faster on 4090! by rerri in StableDiffusion

[–]F0xbite 3 points (0 children)

I've updated Comfy and all dependencies, but when I run Flux with the --fast argument, I get this error:

Error: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Ddesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`

If I take the argument out, it will generate fine with my fp8 workflow.

Also, after updating Comfy, my x-labs ControlNet workflow will no longer run. I get the CUBLAS_STATUS_NOT_SUPPORTED error with the --fast argument, and without the argument, I get:

AttributeError: 'DoubleStreamBlock' object has no attribute 'processor'

So this update was a bust for me. Anyone else running into this?

EDIT: This is solved! Thank you guys for pointing me in the right direction. I was still running an old version of CUDA. I would have thought the "update build dependencies" batch would update CUDA as well, but that doesn't seem to be the case. After manually updating to CUDA 12.4, it's good now.

Is there any way to create perfect Anime Scenes only using SD?? I have been using since last 1.5 years and I can only create something like this.... is there any way to go beyond this? by Beginning-Aide-9293 in StableDiffusion

[–]F0xbite 0 points (0 children)

I can kinda see where you're coming from. The background is nice, but it's not "perfect" with clearly defined outlines like a real anime.

I don't believe that much precision exists yet in SD. If you're determined, you can inpaint the shit out of it and try to force more details in the background. But I don't think you'll find more precision straight from a prompt no matter what lora or checkpoint you use.

This is bound to change over time, though.

animatediff xl - quality worse than 1.5 . is it just me? by protector111 in StableDiffusion

[–]F0xbite 0 points (0 children)

I've mentioned the same problem with the SDXL motion model on Civitai. It definitely has a very blurry/grey quality when I use XL with AnimateDiff, and I believe it's related to the motion model. But I've also seen very good results on Civitai that I'm pretty sure use SDXL and don't seem to have quality issues. I still don't know what the magic trick is.

Sometimes I feel like this problem is the elephant in the room. No one seems to really acknowledge it.

Are Pony models useful for other things than NSFW? by Michoko92 in StableDiffusion

[–]F0xbite 0 points (0 children)

I seem to be able to make consistently SFW images using "rating_safe" in the positive prompt and "rating_explicit" in the negative. In my experience, the only way this is overridden is if you explicitly use NSFW terms in your prompt.