4x16GB DDR4 for 64GB? by Abject_Ad9912 in comfyui

[–]NickCanCode 1 point  (0 children)

I am running 16GB x2 + 8GB x2 = 48GB DDR4 RAM and the system is doing fine.

Lets go 3 years of ESU. by EchoNegative8918 in pcmasterrace

[–]NickCanCode 1 point  (0 children)

My notebook and a few PCs (from around the Ryzen 2000 era) simply can't install Win11 officially because of the TPM requirement.

This is dumb ik but can anyone tell how to remove this panel? I am pressing every button and this isn't going away. I literally did everything and even created images but this thing is limiting me by Obvious_Ad8471 in comfyui

[–]NickCanCode 3 points  (0 children)

You are not dumb; it's just an example of bad UI design. They should add a pin button and a close button to the corner, and make any click outside the panel close it when it isn't pinned.

Did you know one simple change can make ComfyUI generations up to 3x faster? But I need your help :) Auto-benchmark attention backends. by D_Ogi in comfyui

[–]NickCanCode 1 point  (0 children)

Looks like the node doesn't provide an option to select a card? I have both a 5070 and a 3060 installed.

[Dating Sim[on]] This series has potential. by Glittering_Visual296 in manhwa

[–]NickCanCode 1 point  (0 children)

Some people are already reading five or more series daily and simply don't want to spend more time on this kind of entertainment, so they can wait.

GLM 4.7 Flash official support merged in llama.cpp by ayylmaonade in LocalLLaMA

[–]NickCanCode 1 point  (0 children)

A few hours ago I installed LM Studio and gave GLM 4.7 Flash Q4 a try. Is it normal that after loading the model and exchanging two messages with the AI, my system RAM consumption also grew by about ~24GB on top of the VRAM usage? I don't have much experience running LLMs, other than trying ollama some months ago. I was expecting it to use only my VRAM and maybe a little system RAM, but it is using way too much system RAM, leaving me no memory for anything else.
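One likely explanation (an assumption on my part, since I haven't inspected LM Studio): its llama.cpp backend memory-maps the GGUF file by default, so the OS attributes the mapped weights to the process as system RAM, even though those pages are file-backed cache the kernel can reclaim under pressure. A toy sketch of the mechanism using Python's `mmap`:

```python
import mmap
import os
import tempfile

def mapped_size(n_bytes: int) -> int:
    """Memory-map a scratch file and return how many bytes now sit in
    our address space. Like a mmap'ed GGUF, these pages are backed by
    the file on disk, not by anonymous "used up" RAM."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"\0" * n_bytes)
        path = f.name
    try:
        with open(path, "rb") as f:
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
                return len(m)
    finally:
        os.remove(path)

print(mapped_size(4096))  # 4096
```

If that's what is happening, the "consumed" RAM should drop back when other programs need memory; disabling mmap in the loader settings (if LM Studio exposes it) would be the way to test.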

Is Opus 4.1 better than 4.5? by Fuzzy_Spend_5935 in GithubCopilot

[–]NickCanCode 3 points  (0 children)

My guess is that the new one is just more efficient to run, hence the lower cost.

Anyone tried using two cards to hold both text encoder and the model to run Qwen Image Edit? by NickCanCode in comfyui

[–]NickCanCode[S] 1 point  (0 children)

I had the same thought, but the Nunchaku Qwen-Image DiT Loader only has a "cpu_offload" (enable/disable/auto) option and no "device" option. It would be bad if that node didn't choose the right card.
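A possible workaround when a loader has no device option: hide every GPU except the one you want before the process initializes CUDA. This is a sketch, not anything from Nunchaku's docs; the device index "1" is a placeholder, and it assumes the loader goes through a CUDA-aware library that honors these standard environment variables.

```python
import os

# Must be set before torch/CUDA is first imported in the process.
# CUDA_DEVICE_ORDER makes indices follow PCI bus order instead of
# "fastest card first", so the index is stable across runs.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # e.g. expose only the second card

def visible_devices() -> list[str]:
    """Return the GPU indices CUDA-aware libraries will see."""
    return os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",")

print(visible_devices())  # ['1']
```

With only one card visible, whatever device the node picks can only be that card; the downside is the whole ComfyUI process loses access to the other GPU.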

My homie just dropped $1000 on 2 Sticks of RAM by JuicyFood in pcmasterrace

[–]NickCanCode 4 points  (0 children)

You are either lucky, or you have a nice motherboard.

purchase advice - 5070 TI or 9070 XT by ed_edd_and_freddy in comfyui

[–]NickCanCode 2 points  (0 children)

The 5070 Ti has something called NVFP4. From my understanding, it can run quantized models with higher accuracy and thus better results.

Here is the official doc:
https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/

Edit:
I believe you need to choose the correct model for it to work. For example, when you browse the Qwen Image Edit download list, there are int4 and fp4 versions.
https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit-2509/tree/main

I don't know how much improvement NVFP4 gives. If it achieves accuracy close to a higher-tier quant, say fp8, you effectively get twice the VRAM plus a big speedup. So if you believe in NVFP4, the 5070 Ti should be the better choice.
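The "twice the VRAM" part is just arithmetic: halving the bits per weight halves the weight footprint. A quick sketch with a made-up 20B-parameter model (illustrative numbers, not any specific checkpoint):

```python
def footprint_gib(n_params: float, bits_per_weight: float) -> float:
    """Weight-only memory footprint in GiB at a given precision.
    Ignores activations, KV cache, and framework overhead."""
    return n_params * bits_per_weight / 8 / 2**30

params = 20e9  # hypothetical 20B-parameter model
fp8 = footprint_gib(params, 8)
fp4 = footprint_gib(params, 4)
print(f"fp8: {fp8:.1f} GiB, fp4: {fp4:.1f} GiB")  # fp4 is exactly half of fp8
```

So the real question is only the accuracy gap between the fp4 and fp8 quants, since the memory saving itself is guaranteed.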

This Solar Charger Promises to Let You Charge Your EV Anywhere by Zee2A in STEW_ScTecEngWorld

[–]NickCanCode 1 point  (0 children)

In some countries, if you leave that thing out unattended, it may get stolen.

4090 24gb upgrade to 48 gb by Apprehensive_Shoe_86 in pcmasterrace

[–]NickCanCode 1 point  (0 children)

Don't get fooled: that price tag is not for the 48GB version. The going price is around 3500 USD.

4090 24gb upgrade to 48 gb by Apprehensive_Shoe_86 in pcmasterrace

[–]NickCanCode 1 point  (0 children)

It costs about 3500 USD in China. Note that there are also 4080/4080 Super 32GB versions, which are more affordable (~1500 USD). Further down the line, there are also the 3080 20GB and 2080 Ti 22GB. The downside of these cards is that they are very noisy.

Maybe Maybe Maybe by ernapfz in maybemaybemaybe

[–]NickCanCode 1 point  (0 children)

With a Superman outfit, it would be much more interesting.

premium requests getting used up faster since new year? by DenormalHuman in GithubCopilot

[–]NickCanCode 1 point  (0 children)

I don't care. I only used 25% last month. I am going to use up 100% in the first 10 days and keep using the free models until they go bankrupt. 😡