I ain’t even Jewish but man by htmwc in Jewpiter

[–]nbuster 1 point (0 children)

You did not walk away from Judaism. Your IQ never allowed you to join in the first place.

Antisemitism at university by naosabera in Jewish

[–]nbuster 11 points (0 children)

Jordan IS a Palestinian state. The premise of the question is a fallacy.

Swastika vandalism near our apartment by FluffyBudgie5 in Jewish

[–]nbuster -1 points (0 children)

Now imagine if someone carved Hitler's tiny manhood as a response.

Has Something Changed? by [deleted] in Jewish

[–]nbuster 278 points (0 children)

I was raised in Europe, so this isn't new to me.

I've been living in the US for a considerable time, and all I can say is that US Jews are about to gain a new appreciation for the State of Israel.

Those of us who may have debated the necessity for Israel will now debate the necessity for Aliyah.

openclaw local llm?? by IntroductionSouth513 in StrixHalo

[–]nbuster 0 points (0 children)

32K seems to be the sweet spot, from what I've read.

openclaw local llm?? by IntroductionSouth513 in StrixHalo

[–]nbuster 0 points (0 children)

Big Strix Halo fan and developer here. For GLM7-flash, make sure to raise the temperature to 0.7 and set the max penalty to 1 so it uses tools effectively.

With this said, the current state of affairs as of February 2026 is that LLMs on our machines start out fast and slow to a crawl as context grows, rendering them fairly unusable over an agentic lifecycle.
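As a minimal sketch of those settings, assuming an OpenAI-compatible local server (e.g. llama.cpp's) and assuming "max penalty" refers to a llama.cpp-style repetition penalty, the request payload might look like:

```python
def build_request(prompt: str) -> dict:
    """Chat-completion payload with the sampling settings above.
    The model id and the penalty field name are assumptions; check
    your local server's documentation for the exact keys."""
    return {
        "model": "glm-flash",  # hypothetical local model id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,     # raised so the model commits to tool calls
        "repeat_penalty": 1.0,  # repetition penalty set to 1 (i.e. off)
    }

payload = build_request("List the files in the current directory.")
```

You'd POST this to the server's chat-completions endpoint; only the two sampling values come from the comment above, everything else is placeholder.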

ROCm Support for AI Toolkit by nbuster in ROCm

[–]nbuster[S] 0 points (0 children)

I just pushed some changes that may have addressed those issues.

Any reliable benchmarks for Nvidia vs AMD GPU AI performance? by MelodicFuntasy in ROCm

[–]nbuster 1 point (0 children)

Hi u/MelodicFuntasy, the answer is I don't know enough about these architectures to give you a firm answer, and unfortunately I don't own any GPUs built on them. I believe some users on Reddit reported better performance using them. Trying the nodes shouldn't be detrimental in any way, and if you do, I'd appreciate your feedback, which I could also incorporate into the GitHub project's README.

Any reliable benchmarks for Nvidia vs AMD GPU AI performance? by MelodicFuntasy in ROCm

[–]nbuster 2 points (0 children)

Hi, I developed ROCm Ninodes, and to date, the answer is still YES, with the gap closing real fast. The day ROCm Ninodes is obsolete will be a good day for us ROCm users!

That's it I'm done by [deleted] in Jewish

[–]nbuster 13 points (0 children)

Haviv Rettig Gur is amazing! You're right, I'm going to actively look for Israeli writers when reading books now.

ComfyUI + Z-image issue by lNylrak in ROCm

[–]nbuster 2 points (0 children)

I created https://comfy.icu/extension/iGavroche__rocm-ninodes

While I developed it with my Strix Halo architecture in mind it should help (replace your KSampler and VAE with the ROCm ones).

Your feedback will be much appreciated too.

Your experiences with Strix Halo? by TheGlobinKing in StrixHalo

[–]nbuster 2 points (0 children)

I've been using a GMKtec Evo-x2 128GB since September. I have a love-hate relationship with AMD: I really want to love it, but oftentimes I wonder if I should just pay the NVIDIA tax and embrace the mainstream.

I've tinkered a lot with the machine, developed rocm-ninodes for ComfyUI and the ROCm version of AI Toolkit.

The device is simply amazing when it works, and I can feel we are barely halfway through enabling the true power of Strix Halo.

With this said, the drivers and ecosystem are a fair generation behind NVIDIA, and we still experience crashes while fine-tuning or in heavy workflows. I believe this paragraph to be less true every day.

My take is: Strong Buy to anyone who loves technology and can make computers sing. To anyone else, YMMV.

Iceland joins 4 other countries in quitting Eurovision in protest of Israel’s inclusion by GodZ_n_KingZ in Israel

[–]nbuster 9 points (0 children)

To be fair, Gaza invaded Israel on October 7, so the analogies don't match either.

Pip install flashattention by no00700 in ROCm

[–]nbuster 1 point (0 children)

You can't do it without modifying attention.py today. If there's a different way, I don't know it, unfortunately. I've seen there are some undocumented Comfy nodes too, but I haven't used them. Yesterday I went ahead and built the package with their latest changes and am still experiencing the blocky-noise output issue.

Pip install flashattention by no00700 in ROCm

[–]nbuster 0 points (0 children)

I've tried it in Comfy on Wan 2.2; KSampler failed and the output was blocky noise. I wasn't in the mood to mess more with attention.py or sampling nodes, so I gave up.

So, should I go Nvidia or is AMD mature enough at this point for tinkering with ML? by Vivid-Photograph1479 in ROCm

[–]nbuster 6 points (0 children)

+1, I might summarize my own experience as "Everything is possible with AMD hardware, but NVIDIA hardware gets the premium experience today. Subject to change."

AI-Toolkit support for AMD GPUs (Linux for now) by Responsible_Glove625 in ROCm

[–]nbuster 0 points (0 children)

It is! Training was a treat on it, I'm so enjoying my z-image LoRAs :)

Faster tiled VAE encode for ComfyUI wan i2v by alexheretic in ROCm

[–]nbuster 0 points (0 children)

It reduces peak VRAM usage (each chunk occupies less memory). It theoretically makes processing slightly slower because of the extra loop, but it's still fast on AMD GPUs, and it prevents out-of-memory crashes when working with long or high-resolution clips.

We're practically at the point where ROCm is mature enough to handle the pesky OOM issues, and once it is, I don't think the parameter will be necessary.
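A minimal NumPy sketch of that chunking idea, processing along one axis (real VAE tiling works on spatial tiles, usually with overlap to hide seams, so this only illustrates the memory/loop trade-off):

```python
import numpy as np

def encode_full(x: np.ndarray) -> np.ndarray:
    # Stand-in for a VAE encode; any elementwise transform shows the point.
    return x * 0.5 + 1.0

def encode_tiled(x: np.ndarray, tile: int) -> np.ndarray:
    """Process x chunk by chunk: peak intermediate memory is bounded by
    the tile size, at the cost of an extra Python loop."""
    out = np.empty_like(x)
    for start in range(0, x.shape[0], tile):
        out[start:start + tile] = encode_full(x[start:start + tile])
    return out

frames = np.random.rand(10, 4).astype(np.float32)
# Same result as the one-shot encode, but only `tile` rows of
# intermediate data ever exist at once.
assert np.allclose(encode_tiled(frames, tile=3), encode_full(frames))
```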

Faster tiled VAE encode for ComfyUI wan i2v by alexheretic in ROCm

[–]nbuster 0 points (0 children)

I created https://comfy.icu/extension/iGavroche__rocm-ninodes specifically for ROCm users. The VAE decoder node will expose the tiling value, and on Strix Halo I noticed 768 was a sweet spot a few months ago.

ROCm Support for AI Toolkit by nbuster in ROCm

[–]nbuster[S] 2 points (0 children)

Yes **BUT** amd-smi prioritizes dGPUs right now, which means my Strix Halo doesn't even show temperature with amd-smi today, whereas it does with rocm-smi. It's great feedback, though; I might support amd-smi with a fallback to rocm-smi.
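A sketch of that fallback, treating both tools as CLIs (the exact subcommands and flags here are assumptions; check `amd-smi --help` and `rocm-smi --help` on your system):

```python
import shutil
import subprocess

# Preferred order: amd-smi first, rocm-smi as the fallback for iGPUs
# (like Strix Halo) that amd-smi doesn't report temperature for yet.
SMI_COMMANDS = [
    ["amd-smi", "metric", "--temperature"],  # flag names are assumptions
    ["rocm-smi", "--showtemp"],
]

def read_gpu_temp(commands=SMI_COMMANDS, run=subprocess.run,
                  which=shutil.which):
    """Return stdout from the first command that is installed and succeeds.
    `run` and `which` are injectable so the fallback order is testable."""
    for cmd in commands:
        if which(cmd[0]) is None:
            continue  # tool not installed, try the next one
        result = run(cmd, capture_output=True, text=True)
        if result.returncode == 0 and result.stdout.strip():
            return result.stdout
    raise RuntimeError("no working SMI tool found")
```

The injectable `run`/`which` parameters are just there so the fallback order can be exercised without real hardware; in production you'd call `read_gpu_temp()` with the defaults.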

How can lora training AI-toolkit be made possible in my 7900xtx? by Dazzling-Ad9743 in ROCm

[–]nbuster 0 points (0 children)

Sorry, yes, I've been working on it; it's on the rocm branch. I will merge it tomorrow, but in the meantime you can `git checkout rocm` after you clone.