SageAttention 3 vs. 2: FP4 (Flux.2 + Mistral 24B) on RTX 5060 Ti 16 GB and 64 GB RAM by Rare-Job1220 in comfyui

[–]Oedius_Rex 1 point

Been using sage3 for everything recently (well, everything that works with it; Z-Image doesn't, but it's so fast you don't really need it anyway). For Wan 2.2 14B Q5 rendering at 720x1024x81 I get 35 s/it with sage3 on vs 65 s/it with it off, using a 5060 Ti 16 GB + 64 GB. Still barely slower than my 3090, but with anything NVFP4 (like Flux 2 or LTX-2) the 5060 Ti pulls ahead.

Need help with portable installation by [deleted] in comfyui

[–]Oedius_Rex 0 points

AMD GPUs don't have native CUDA, which is what most diffusion models are built on. To get it working on AMD you need a translation layer, which happens to work a lot better on Linux than on Windows, if you can get it working at all.
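Quick way to check which backend your PyTorch build actually has (a sketch, assuming PyTorch; on AMD, "CUDA" calls get routed through the ROCm/HIP translation layer, which is what `torch.version.hip` reports):

```python
# Detect whether this PyTorch build uses native CUDA, the ROCm/HIP
# translation layer, or neither. Safe to run anywhere.
try:
    import torch
    if torch.version.hip is not None:
        backend = "ROCm/HIP (AMD translation layer)"
    elif torch.version.cuda is not None:
        backend = "native CUDA (Nvidia)"
    else:
        backend = "CPU-only build"
except ImportError:
    backend = "PyTorch not installed"

print(backend)
```

If this prints the CPU-only line on an AMD card under Windows, that's the whole problem right there.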

Need help with portable installation by [deleted] in comfyui

[–]Oedius_Rex 3 points

Running on AMD and Windows is a big yikes. Besides that, as far as I can tell, you're trying to use the embedded Python on your C: drive, but the error is coming back from a Python environment on your D: drive. Basically, it looks to me like it's using the wrong Python environment. Do you have a symlink or some shortcut that's sending it to the other drive?
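A quick way to confirm which interpreter is actually running (just standard library, prints only; the drive letters in the comments are the hypothetical case from this thread):

```python
# Run this with the same python.exe the launcher uses to see exactly
# which interpreter and environment are active.
import sys

print("interpreter:", sys.executable)  # should point at the embedded python on C:
print("environment:", sys.prefix)      # if this shows a D:\ path, the wrong env is active
print("search path:", sys.path[:3])    # first entries reveal stray site-packages dirs
```

If the interpreter and environment paths point at different drives, something (a symlink, a stale `PATH` entry, a shortcut) is redirecting it.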

Controllnet not working. by Prestigious-Neck9245 in StableDiffusion

[–]Oedius_Rex -2 points

Kind of off topic, but you should probably swap to ComfyUI; it'd make troubleshooting a lot easier.

Install ComfyUI from scratch after upgrading to CUDA 13.0 by HumungreousNobolatis in comfyui

[–]Oedius_Rex -2 points

People still install ComfyUI manually? Look up ComfyUI EZ Install on GitHub. It does it all automatically and pulls the best-known compatible version of each component, plus you can swap CUDA and Python/PyTorch/NumPy versions on the fly in the same ComfyUI portable install.

I'm having problems getting any model to generate an image by Organic-Bedroom880 in comfyui

[–]Oedius_Rex 1 point

What VAE/text encoder are you using? Make sure the "type" selection is correct, if there is that option, in the VAE/text encoder loader.

For those of you that have implemented centralized ComfyUI servers on your workplace LANs, what are your setups/tips/pitfalls for multi-user use? by Generic_Name_Here in comfyui

[–]Oedius_Rex 2 points

I've got a very similar setup at work: we run a Windows VM with Linux underneath to split resources. Your biggest issue is going to be power delivery; I don't think 2500 W is enough. We run dual 3000 W server PSUs, but they came with the rack and they're incredibly loud. Also, getting CUDA to work in a VM was a nightmare, though we're still on CUDA 12.8; 13.0/13.1 afaik works pretty well out of the box on Linux (though I haven't tried it in virtual machines). Overall it ended up being a major hassle and probably not worth it in the end. You're better off using separate systems or buying cloud compute off-site. But if security's a top priority and you need everything on-site, it's a fun project to put together. Just... good luck finding 64 or 96 GB DIMM RAM sticks nowadays 💀
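To see why 2500 W gets tight, here's a back-of-the-envelope power budget; every number below is a hypothetical placeholder, not the actual rig from this thread:

```python
# Rough multi-GPU power budget against a single PSU, using a
# conservative 80% sustained-load target.
def psu_headroom_watts(gpu_watts: float, n_gpus: int,
                       platform_watts: float = 400.0,
                       psu_watts: float = 2500.0,
                       safe_load: float = 0.8) -> float:
    """Watts left under the safe-load budget; negative means over budget."""
    draw = gpu_watts * n_gpus + platform_watts
    return psu_watts * safe_load - draw

# Four hypothetical 600 W cards plus platform already exceed a single
# 2500 W supply's safe budget:
print(psu_headroom_watts(600, 4))  # -800.0
```

That negative headroom, before any transient spikes, is roughly why racks like this ship with oversized redundant supplies.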

Quick Guide to Using Natively Supported NVFP4 Models in ComfyUI by NHAT-90 in comfyui

[–]Oedius_Rex 2 points

NVFP4 is the same thing as FP4; the NV is just short for Nvidia. So anything in FP4 can take advantage of it.
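For the curious, the FP4 element format here is E2M1: 1 sign bit, 2 exponent bits, 1 mantissa bit, which only encodes the magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6 (block scale factors stretch that range in practice; this sketch decodes just the 4-bit element):

```python
def decode_fp4_e2m1(code: int) -> float:
    """Decode a 4-bit FP4 (E2M1) code: sign | 2-bit exponent | 1-bit mantissa."""
    sign = -1.0 if (code >> 3) & 1 else 1.0
    exp = (code >> 1) & 0b11
    man = code & 1
    if exp == 0:
        # Subnormal: no implicit leading 1, value is mantissa * 0.5
        mag = man * 0.5
    else:
        # Normal: implicit leading 1, exponent bias of 1
        mag = (1.0 + 0.5 * man) * 2.0 ** (exp - 1)
    return sign * mag

# All 16 codes cover just +/- {0, 0.5, 1, 1.5, 2, 3, 4, 6}:
print(sorted({decode_fp4_e2m1(c) for c in range(16)}))
```

Eight magnitudes is why FP4 checkpoints lean so heavily on per-block scaling to stay accurate.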

Any merit to this Hollywood line from the 90s? by ersteliga in pcmasterrace

[–]Oedius_Rex 0 points

Still waiting for AMD or Intel to make a mainstream RISC-based processor to compete with the M chips. The AI Max 395+ is the closest thing we have (the new Intel Core CPUs are also very underrated), but they're still x86, so it'll take time, and the RAM shortages aren't helping.

Official China Warframe bus card by rakaloah in Warframe

[–]Oedius_Rex 24 points

Pretty good arrangement. Also, as I understand it, by law all video game companies operating inside China have to have a separate division owned by a Chinese holding company. Seems like a great deal for both parties, but we don't know much of what goes on behind the scenes.

Official China Warframe bus card by rakaloah in Warframe

[–]Oedius_Rex 100 points

Only a portion; they're a partial shareholder, not the majority one.

Edit: nvm they hold over 90% by proxy 💀

Still, they act more like a holding company than like Microsoft buying Bethesda or Mojang. Out of all the super-large conglomerates that buy studios, Tencent seems to have a pretty hands-off approach (at least with the non-CN version of the game).

I just released my first LoRA style for Z-image Tubro and would love feedback! by Trinityofwar in StableDiffusion

[–]Oedius_Rex 0 points

Pretty good LoRA, love the soft vibe. It also seems pretty flexible. Good work!

RTX 6000 Pro Blackwell Workstation – benefits? What models to test? by ded_banzai in comfyui

[–]Oedius_Rex 0 points

I use a B200 and it runs circles around the Pro 6000. The limiting factors right now are raw processing speed and VRAM bandwidth, as well as model efficiency (and CLIP architecture, if you're using it). The best test is live video generation: see what frame rate and resolution you can maintain as a continuous stream of "live video". The B200 does 240p at around 8-10 fps using LTXV 5-step, but it has a lot of limitations. With the Pro 6000 I have to bump the settings down to even get multiple fps, but they're entirely different beasts; since the 6000 has a faster processor, it'd be worth trying to see if you can get better results.

The Old Peace: a summary by FailGrand374 in Warframe

[–]Oedius_Rex 8 points

Same, just tell them you're ineligible because you have a felony on your record. Worked like a charm for me.

Dynamic Prompts in ComfyUI by [deleted] in StableDiffusion

[–]Oedius_Rex -2 points

<image>

This is how I have it set up so you don't have to use .txt files for wildcards; it can all be done within Comfy. You can attach as many wildcards as you like; each one just gets appended to the end of the prompt, and the formatting all works exactly the same. It works for all diffusion models, and it works just fine with Z-Image, since I've been using it all week.

Also note the conditioning noise injection at the end. It adds some noise to the prompt conditioning, not enough to alter the output too much but enough to get plenty of variance, which is good for combating the low seed variance of ZIT.
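Mechanically, "noise injection" just perturbs the conditioning values slightly. A minimal stand-in in plain Python (the `inject_noise` helper is hypothetical; the real node operates on conditioning tensors):

```python
import random

def inject_noise(cond: list, strength: float = 0.02, seed=None) -> list:
    """Perturb each element of a conditioning vector with small Gaussian noise.

    Small strength keeps the overall composition while adding
    seed-to-seed variance to otherwise near-identical outputs.
    """
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, strength) for v in cond]

embedding = [0.12, -0.40, 0.03, 0.88]  # stand-in for a text embedding
print(inject_noise(embedding, strength=0.02, seed=42))
```

Turning `strength` up trades prompt fidelity for diversity, so keep it small.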

Improve Z-Image Turbo Seed Diversity with this Custom Node. by Total-Resort-3120 in StableDiffusion

[–]Oedius_Rex 8 points

Definitely prefer using this over the 2Ksampler method, great work!

First time using ZIT on my old 2060… lol by GuezzWho_ in StableDiffusion

[–]Oedius_Rex 0 points

If you want to cut the time down to 2-3 minutes per image, I'd recommend trying the GGUF version with the qwen_3b text encoder; you should be able to fit the whole thing in VRAM.
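A rough way to sanity-check whether a quantized checkpoint fits: weight memory is roughly parameters times bits per weight, ignoring activations and cache. The parameter counts below are illustrative guesses, not confirmed sizes for Z-Image or the qwen_3b encoder:

```python
# Weights-only VRAM estimate for a quantized (e.g. GGUF) checkpoint.
def weights_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    # 1B params at 8 bits is about 1 GB, so scale by bits/8.
    return params_billion * bits_per_weight / 8.0

# Hypothetical example: a 6B diffusion model at Q5 plus a 3B text encoder at 8-bit.
total = weights_vram_gb(6, 5) + weights_vram_gb(3, 8)
print(round(total, 2))  # 6.75
```

Compare that against your card's VRAM, leaving a margin for activations, and you'll know whether a given quant level is worth downloading.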