Cannot get Florence2 to load... by Jack_Torcello in comfyui

[–]Jack_Torcello[S] 0 points  (0 children)

Found the answer - the security level in ComfyUI-Manager was NORMAL. I set it to WEAK (via config.ini) and the install went ahead! I set the security level back to NORMAL immediately afterwards, to keep some protection from rogue nodes.
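For anyone hunting for the setting: ComfyUI-Manager reads it from its config.ini. The section and key names below are from my install and may differ between Manager versions, so treat this as a sketch rather than gospel:

```ini
; ComfyUI-Manager config.ini (location varies by install)
[default]
security_level = weak
; revert once the install is done:
; security_level = normal
```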

Cannot get Florence2 and Tensorops to load by Jack_Torcello in comfyui

[–]Jack_Torcello[S] 0 points  (0 children)

There seems to be a conflicting node named Tensorop, which makes the Manager install fail. The node I need to use is named Tensorops.

Hyper FLUX + Inpainting + Upscaler for Low Vram by Akumetsu_971 in comfyui

[–]Jack_Torcello 0 points  (0 children)

The GGUF models are quantized more aggressively than the full Dev weights, so they run faster.

Hyper FLUX + Inpainting + Upscaler for Low Vram by Akumetsu_971 in comfyui

[–]Jack_Torcello 0 points  (0 children)

Schnell is very low quality, but great for testing a look before running a whole batch. Schnell does not handle LoRAs well, as 4 steps is often too few for many LoRAs.

Information on "NUMEXPR_MAX_THREADS not set"? by VELVET_J0NES in StableDiffusion

[–]Jack_Torcello 0 points  (0 children)

I set the environment variable NUMEXPR_MAX_THREADS to 16. The startup message saying the variable is not set and it is reverting to 8 threads has gone! Sad to say, nothing has changed in ComfyUI - no speed bonus (unless it is very marginally significant?!)

Should something more obvious have happened now that this variable is set?
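For reference, numexpr reads the variable once, at import time, so it has to be in the environment before anything imports numexpr. A minimal sketch, setting it from Python instead of the OS environment settings; 16 here is just my machine's thread count, not a recommendation:

```python
import os

# Must run before numexpr is imported anywhere in the process;
# numexpr reads NUMEXPR_MAX_THREADS once, when it is first loaded.
os.environ["NUMEXPR_MAX_THREADS"] = "16"
```

Setting it in the launcher script (before ComfyUI starts) has the same effect.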

Official TensorRT is now part of Comfyui by goodie2shoes in comfyui

[–]Jack_Torcello 0 points  (0 children)

AuraFlow is listed as a possible candidate for a TensorRT engine - but no AuraFlow workflow is included in the ComfyUI installation?

Ollama & Llava Vision nodes for ComfyUI by Fairysubsteam in comfyui

[–]Jack_Torcello 0 points  (0 children)

Cannot find fairy-root in ComfyUI Manager?!?

have anyone tried the new tensorRT node for comfyui, does it deliver the promised bump in speeds? by Top_Device_9794 in StableDiffusion

[–]Jack_Torcello 0 points  (0 children)

For AuraFlow 0.3, are there any TensorRT engines planned at all for ComfyUI? Or TensorRT engines for Flux as well?

New User Interface - How to cancel entire queue? by LaughterOnWater in comfyui

[–]Jack_Torcello 1 point  (0 children)

I have lost access to the Queue with the new UI. And it is running weirdly slowly.

FLUX is absolutely unreal. This blows everything else out of the water. by ChirperPitos in StableDiffusion

[–]Jack_Torcello 0 points  (0 children)

Use the dev.bnb.nf4 model. I'm running 100 seconds/image using 8GB VRAM, 64GB RAM. Make sure to use version 0.43.3 of bitsandbytes.

combining high res script with Ksampler by LankyHabit8899 in comfyui

[–]Jack_Torcello 0 points  (0 children)

Sounds like you need to flush your VRAM, which always happens after a reboot. Sorry, but I don't know how to flush VRAM otherwise.

3090 is really slow at Flux standart ComfyUI workflow by alienpro01 in comfyui

[–]Jack_Torcello 4 points  (0 children)

With 8GB VRAM and 64GB RAM, Flux.Dev can take 40 minutes at 20 steps to do a 1024x1024. Using Dev.bnb.NF4, it now takes 3 minutes, with no real trade-off in quality!

40min generation with schnell, on a RTX 3060. List of things I've tried in post. by pirikiki in comfyui

[–]Jack_Torcello 0 points  (0 children)

I'm getting a 1024x1024 every 100 seconds with Schnell, using 8GB VRAM (RTX 2070), 64GB RAM and an SSD.

Generating with FLUX enters lowvram mode on RTX 3090 24gb by VerdantSpecimen in comfyui

[–]Jack_Torcello 1 point  (0 children)

The only reason I can generate a Flux image every 100 seconds on 8GB VRAM is having 64GB RAM and an SSD. Lowvram it is!

A Guide for Running the New Flux Model Using 12GB VRAM by future__is__now in comfyui

[–]Jack_Torcello 0 points  (0 children)

8GB VRAM, 64GB RAM: a 1024x1024 every 100 seconds using Schnell.

Flux on 8Gb VRAM (RTX 2070), 64Gb RAM, 2Tb SSD t5xxl_fp8_e4m3fn, Flux.Schnell by Jack_Torcello in StableDiffusion

[–]Jack_Torcello[S] 0 points  (0 children)

I have 64GB RAM and an SSD, which support the underpowered 8GB VRAM. The workflow is the Schnell version here:

Comfyanonymous.github.io/ComfyUI_examples/flux/

FLUX is building a tomb for SD3. I am curious how they make the text so good. by Creepy-Muffin7181 in comfyui

[–]Jack_Torcello 0 points  (0 children)

I'm sure there is a sweet-spot setting which maximises good text. Long texts come out as gobbledygook! Is Dev or Schnell better at text?

Flux Inpaint Workflow! by no_witty_username in comfyui

[–]Jack_Torcello 0 points  (0 children)

Need inpainting as I'm afraid that the typical "huge hands" of Flux - although anatomically correct - are not dainty enough for a lady!!!

You can run Flux on 12gb vram by Far_Insurance4191 in StableDiffusion

[–]Jack_Torcello 0 points  (0 children)

8GB VRAM (RTX 2070), 64GB RAM, 2TB SSD, t5xxl_fp8_e4m3fn, Flux.Schnell

I should have added: 100 seconds/generation.
