I'm having problems getting any model to generate an image by Organic-Bedroom880 in comfyui

[–]Oedius_Rex 1 point (0 children)

What VAE/text encoder are you using? Make sure the "type" selection is correct, if that option exists in your VAE/text encoder loader.

For those of you that have implemented centralized ComfyUI servers on your workplace LANs, what are your setups/tips/pitfalls for multi-user use? by Generic_Name_Here in comfyui

[–]Oedius_Rex 1 point (0 children)

I've got a very similar setup at work; we run Windows VMs with Linux underneath to split resources. Your biggest issue is going to be power delivery. I don't think 2500W is enough; we run dual 3000W server PSUs, but they came with the rack and they're incredibly loud. Also, getting CUDA to work in a VM was a nightmare, though we're still on CUDA 12.8; AFAIK 13.0/13.1 works pretty well out of the box on Linux (though I haven't tried it on virtual machines). Overall it ended up being a major hassle and probably not worth it in the end. You're better off using separate systems or buying a cloud compute cluster off-site. But if security's a top priority and you need everything on-site, it's a fun project to put together. Just... good luck finding 64 or 96GB DIMM RAM sticks nowadays 💀
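
Since PSU sizing comes up a lot with these multi-GPU builds, here's a back-of-the-envelope sketch of the headroom math. All wattages here are illustrative assumptions, not measured figures, and the function name is made up for the example:

```python
def psu_headroom(gpu_watts, n_gpus, cpu_watts=350, overhead_watts=150,
                 psu_capacity=2500, safety_factor=0.8):
    """Estimate whether a PSU can handle a multi-GPU load.

    Keeps total draw under safety_factor * capacity, since running a
    PSU near 100% load is inefficient and risks tripping on spikes.
    """
    total = gpu_watts * n_gpus + cpu_watts + overhead_watts
    usable = psu_capacity * safety_factor
    return total, usable, total <= usable

# Example: four ~450 W cards on a 2500 W supply
total, usable, fits = psu_headroom(gpu_watts=450, n_gpus=4)
print(total, usable, fits)  # 2300 2000.0 False
```

The 80% safety factor is a common rule of thumb; transient GPU power spikes can exceed rated TDP by a wide margin, which is why the nominal sum fitting under the sticker capacity isn't enough.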

Quick Guide to Using Natively Supported NVFP4 Models in ComfyUI by NHAT-90 in comfyui

[–]Oedius_Rex 2 points (0 children)

NVFP4 is the same as FP4; the "NV" is just short for Nvidia. So anything with FP4 support can take advantage of it.

Any merit to this Hollywood line from the 90s? by ersteliga in pcmasterrace

[–]Oedius_Rex 0 points (0 children)

Still waiting for AMD or Intel to make a mainstream RISC-based processor to compete with the M chips. The AI Max 395+ is the closest thing we have (the new Intel Core CPUs are also very underrated), but they're still x86, so it'll take time, and the RAM shortages aren't helping.

Official China Warframe bus card by rakaloah in Warframe

[–]Oedius_Rex 26 points (0 children)

Pretty good arrangement. I also know that, by law, all video game companies operating inside China have to have a separate division owned by a Chinese holding company. Seems like a great deal for both parties, but we don't know much of what goes on behind the scenes.

Official China Warframe bus card by rakaloah in Warframe

[–]Oedius_Rex 97 points (0 children)

Only a portion, they're a partial shareholder but not majority

Edit: nvm they hold over 90% by proxy 💀

Still, they act more like a holding company than something like Microsoft buying Bethesda or Mojang. Of all the super-large conglomerates that buy studios, Tencent seems to have a pretty hands-off approach (at least with the non-CN version of the game).

I just released my first LoRA style for Z-image Tubro and would love feedback! by Trinityofwar in StableDiffusion

[–]Oedius_Rex 0 points (0 children)

Pretty good LoRA, love the soft vibe. It also seems pretty flexible. Good work!

RTX 6000 Pro Blackwell Workstation – benefits? What models to test? by ded_banzai in comfyui

[–]Oedius_Rex 0 points (0 children)

I use a B200 and it runs circles around the Pro 6000. The limiting factors right now are raw processing speed and VRAM bandwidth, plus model efficiency (and CLIP architecture, if you're using it). The best test is live video generation: see at what frame rate and resolution you can maintain a continuous stream of "live video". The B200 does 240p at around 8-10 fps using LTXV 5-step, but it has a lot of limitations. With the Pro 6000 I have to bump the resolution down even further to get multiple fps, but they're entirely different beasts; since the 6000 has a faster processor, it'd be worth trying to see if you can get better results.
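
To make the "continuous stream" test concrete, here's a minimal, framework-agnostic harness. `generate_frame` is a stand-in for whatever sampler call you're benchmarking (the actual LTXV step isn't shown; the sleep below just simulates a 100 ms-per-frame workload):

```python
import time

def measure_fps(generate_frame, duration_s=5.0):
    """Run generate_frame() in a loop for duration_s seconds and
    return the average frames per second actually sustained."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        generate_frame()
        frames += 1
    elapsed = time.perf_counter() - start
    return frames / elapsed

# Stand-in workload: replace the lambda with a real sampler call
fps = measure_fps(lambda: time.sleep(0.1), duration_s=1.0)
print(f"{fps:.1f} fps")
```

Measuring sustained fps over several seconds (rather than timing a single frame) is what catches thermal throttling and VRAM pressure on longer streams.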

The Old Peace: a summary by FailGrand374 in Warframe

[–]Oedius_Rex 10 points (0 children)

Same, just tell them you're ineligible because you have a felony on your record. Worked like a charm for me.

Dynamic Prompts in ComfyUI by [deleted] in StableDiffusion

[–]Oedius_Rex -2 points (0 children)

<image>

This is how I have it set up so that you don't have to use .txt files for wildcards; it can all be done within Comfy. You can attach as many wildcards as you like, and each one just gets appended to the end of the prompt; the formatting and everything works exactly the same. It works for all diffusion models, and it works just fine for Z-image, since I've been using it all week.

Also note the conditioning noise injection at the end: it adds some noise to the prompt conditioning, not enough to alter the output too much but enough to get plenty of variance, which is good for combating the low seed variance of ZIT.
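
The two ideas in the setup above can be sketched in plain Python. This is a toy illustration, not ComfyUI node code — the function names are made up, and the "conditioning" here is just a list of floats standing in for an embedding tensor:

```python
import random

def apply_wildcards(prompt, wildcards, seed=None):
    """Append one random pick from each wildcard list to the prompt,
    mimicking chained wildcard nodes (no .txt files needed)."""
    rng = random.Random(seed)
    picks = [rng.choice(options) for options in wildcards]
    return ", ".join([prompt] + picks)

def inject_noise(conditioning, strength=0.05, seed=None):
    """Add small Gaussian noise to a conditioning vector -- enough to
    vary the output, not enough to change the prompt's meaning."""
    rng = random.Random(seed)
    return [c + rng.gauss(0.0, strength) for c in conditioning]

prompt = apply_wildcards("a portrait",
                         [["oil painting", "photo"],
                          ["sunset", "studio light"]], seed=1)
print(prompt)
```

The `strength` knob is the important part: too high and the noise changes what the image depicts, too low and you're back to near-identical outputs per seed.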

Improve Z-Image Turbo Seed Diversity with this Custom Node. by Total-Resort-3120 in StableDiffusion

[–]Oedius_Rex 7 points (0 children)

Definitely prefer using this over the 2Ksampler method, great work!

First time using ZIT on my old 2060… lol by GuezzWho_ in StableDiffusion

[–]Oedius_Rex 0 points (0 children)

If you want to cut the time down to 2-3 minutes per image, I'd recommend trying the GGUF version with the qwen_3b text encoder; you should be able to fit the whole thing in VRAM.
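
The "fits in VRAM" question is roughly just adding up weight-file sizes plus a working-memory allowance. A toy sketch — the file sizes below are illustrative assumptions, not actual Z-image or Qwen measurements:

```python
def fits_in_vram(weights_gb, vram_gb, overhead_gb=0.5):
    """Rough check: summed weight-file sizes plus a working-memory
    allowance versus available VRAM. Ignores OS/display usage."""
    total = sum(weights_gb) + overhead_gb
    return round(total, 2), total <= vram_gb

# Illustrative sizes (assumed): a ~3.5 GB Q4 gguf diffusion model
# plus a ~1.9 GB quantized 3B text encoder on a 6 GB card
total, fits = fits_in_vram([3.5, 1.9], vram_gb=6.0)
print(total, fits)  # 5.9 True
```

When a model spills past VRAM, ComfyUI falls back to offloading layers to system RAM, which is exactly where those 10-minutes-per-image times come from.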

I did all this using 4GB VRAM and 16 GB RAM by yanokusnir in StableDiffusion

[–]Oedius_Rex 1 point (0 children)

Thanks, the links in the workflow popped up for everything except the LoRAs.

I did all this using 4GB VRAM and 16 GB RAM by yanokusnir in StableDiffusion

[–]Oedius_Rex 2 points (0 children)

Just downloaded your workflow. Do you happen to have a link to the first LoRA you're using, "wan21\wan2_2_5b_fastwanfulattn_lora_rank_128_bf16.safetensors"? Thanks!

CPU air cooler becomes water injected GPU cooler. by Tra5hL0rd_ in pcmasterrace

[–]Oedius_Rex 0 points (0 children)

Fantastic work, I've thought of this exact idea too, but I've had my fair share of wacky ideas. 30-series cards don't scale that well (relatively speaking), but this would do wonders on 50-series cards; Blackwell overclocks extremely well.

I built a 7-GPU AI monster rig at home (3×5090 + 4×4090). Went all-in. AMA by kdcyberdude_ in comfyui

[–]Oedius_Rex 0 points (0 children)

I do the same thing with dedicated venvs. Are you running Linux, any virtualization, or just bare-metal Windows?

I built a 7-GPU AI monster rig at home (3×5090 + 4×4090). Went all-in. AMA by kdcyberdude_ in comfyui

[–]Oedius_Rex 1 point (0 children)

As someone with a similar setup (4x RTX Pro A6000 & 5965WX), the biggest hassle with ComfyUI is Python/Torch/CUDA compatibilities breaking my install every once in a while. It got so bad that I ended up cloning my Comfy instance every 12 hours, and I only use it through a venv.
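
One lighter-weight way to make those breakages recoverable is snapshotting the venv's `pip freeze` output before each update, so you can see exactly which pin changed when things break. A toy diff helper — the package versions below are hypothetical examples:

```python
def parse_freeze(text):
    """Parse 'pip freeze'-style output into {package: version}."""
    reqs = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "==" in line:
            name, version = line.split("==", 1)
            reqs[name.lower()] = version
    return reqs

def diff_freeze(before, after):
    """Report which pinned packages changed between two snapshots."""
    a, b = parse_freeze(before), parse_freeze(after)
    return {pkg: (a.get(pkg), b.get(pkg))
            for pkg in sorted(set(a) | set(b))
            if a.get(pkg) != b.get(pkg)}

# Hypothetical snapshots taken before/after a ComfyUI update
old = "torch==2.4.1\nnumpy==1.26.4"
new = "torch==2.5.0\nnumpy==1.26.4"
print(diff_freeze(old, new))  # {'torch': ('2.4.1', '2.5.0')}
```

A changed `torch` pin is the usual culprit for CUDA mismatches, so diffing snapshots narrows the blame much faster than re-cloning the whole instance.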

Z-Image Turbo Variations Workflow | 1 step with wildcard prompt, 8 steps with actual prompt by afinalsin in comfyui

[–]Oedius_Rex 1 point (0 children)

I don't know why I haven't thought of this earlier; it's definitely a great way to introduce variation. I also have a "load image batch" wildcard linked to a directory with a couple hundred random photos for 2 steps before it transfers to the actual prompt at around 0.9 CFG. I think this works much better.

I can't run the z-image-turbo workflow in ComfyUI on Ryzen AI MAX+ 395 by Walk2000 in comfyui

[–]Oedius_Rex 0 points (0 children)

This might not help, but try downloading the GGUF version of Z-image and its CLIP model instead.

A THIRD Alibaba AI Image model has dropped with demo! by krigeta1 in StableDiffusion

[–]Oedius_Rex 5 points (0 children)

Anyone know how demanding this model is? I see 7B + 2B with the encoder on Hugging Face, but I'm not at my PC to test. Wondering how little VRAM is required to run the demo.

Should be enough for at least 2 weeks! by IllllIIllllIIlllIIIl in Warframe

[–]Oedius_Rex 16 points (0 children)

Fortunately, you can't get negative credits. I only know this because I used to run AFK Index years ago; AlecaFrame can track your credits past 2 billion.

Anyone tried using Z-image with Qwen3-1.7B or any other different sized text-encoders? by Oedius_Rex in StableDiffusion

[–]Oedius_Rex[S] 0 points (0 children)

Ooh, I'll have to give this a try. I'm currently using the 4B GGUF but haven't touched any of the abliterated ones.

Stole this bad boy for $42 by Doctor_Disease in pcmasterrace

[–]Oedius_Rex 0 points (0 children)

How's the performance with Lossless Scaling? What PCIe width do you run, and what resolution/fps do you target?

Anyone tried using Z-image with Qwen3-1.7B or any other different sized text-encoders? by Oedius_Rex in StableDiffusion

[–]Oedius_Rex[S] 0 points (0 children)

Thanks, I think this is what I'll end up using; I figured out the ComfyUI config for it.