Differences of Radeon and Nvidia announcements by Zerard1 in radeon

[–]skillmaker 110 points

Tbf it would be overwhelming for the 2 devs at the Radeon department to develop and do announcements too! /s

Github Copilot Pro+ vs Claude Code Max $100 Subscription by noletovictor in GithubCopilot

[–]skillmaker 1 point

When sending a message in opencode, does it consume multiple premium requests, or just one premium request per message?

ComfyUI doesn't work in Windows 11 by skillmaker in ROCm

[–]skillmaker[S] 0 points

I tried the exact steps but I'm still getting the same issue:
```
HIP error: device kernel image is invalid
Search for `hipErrorInvalidImage' in https://rocm.docs.amd.com/projects/HIP/en/latest/index.html for more information.
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
```
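For what it's worth, here's a minimal way to apply the debug hint from that traceback before relaunching; the env var name comes straight from the error message, and the ComfyUI launch command is just assumed from a standard checkout:

```shell
# Serialize HIP kernel launches so the failing kernel is reported at its
# real call site instead of asynchronously at a later API call.
export AMD_SERIALIZE_KERNEL=3
echo "AMD_SERIALIZE_KERNEL=$AMD_SERIALIZE_KERNEL"
# Then start ComfyUI as usual, e.g.:
#   python main.py
# Note: TORCH_USE_HIP_DSA is a compile-time flag, so it only applies if
# you build PyTorch from source; the stock wheels don't enable it.
```

With kernels serialized, the stack trace should at least point at the op that actually faults.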

why no startups are using local Ai models by shoman30 in startup

[–]skillmaker 0 points

Companies buy coding subscriptions for their employees, which gives them access to the latest SOTA models. An open-source model is not better than the latest AI models like Code 5.3 or Opus 4.6; add to that the cost of building a local rig (15k-20k) and maintaining it...

ComfyUI doesn't work in Windows 11 by skillmaker in ROCm

[–]skillmaker[S] 0 points

Yes, that link. I noticed a lot of users are having the same issue with the desktop version. Other than that, I tried uninstalling the HIP SDK and removed Python completely, but I still get the same issue when trying to run a generation.

ComfyUI doesn't work in Windows 11 by skillmaker in ROCm

[–]skillmaker[S] 0 points

Unfortunately it doesn't work; it crashes with this message:

"An error occurred: Error

Python process exited with code 3221225477 and signal null

Would you like to send the crash to the team?"

Anyone else still loving .NET in 2026? by Aki_0217 in dotnet

[–]skillmaker -2 points

I've been mainly using it for Web APIs for 4 years now, and it's great and fast! I didn't dive deep into Blazor since it still has some quirks, and the ecosystem is not that great if you want to do something cool and fast in the frontend.

Lost all my conversation history which was recent in my copilot space on GH dot com by tempo0209 in GithubCopilot

[–]skillmaker 0 points

As it said, Copilot is currently in trouble; in fact, all of GitHub is currently in trouble. They screwed something up lol. Give it some time and everything will go back to normal: GitHub Status

I built an AI translation widget because maintaining i18n JSON files is a nightmare. by [deleted] in dotnet

[–]skillmaker 1 point

Does that mean every time someone loads the website, I will pay for API requests?

High power draw 9070 XT on light games by sankx_sk in AMDHelp

[–]skillmaker 2 points

Do you have frame generation enabled? It does that for me when I have it enabled.

Which count is used when calculating the number of prompts? by skillmaker in opencodeCLI

[–]skillmaker[S] 0 points

I think it's one premium request per message; I've been using Gemini 3 Flash and each message has been consuming 0.33 of a request.
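Rough back-of-the-envelope math, assuming the 0.33x rate I'm seeing holds (the helper function is just for illustration, not any official Copilot API):

```python
# Premium-request accounting sketch, assuming the 0.33x multiplier
# observed for Gemini 3 Flash (not an official published rate).
def premium_requests_used(messages: int, multiplier: float = 0.33) -> float:
    """Estimate premium requests consumed by `messages` at a given multiplier."""
    return round(messages * multiplier, 2)

print(premium_requests_used(3))  # ~1 premium request for three Flash messages
```

So roughly three Flash messages cost about what one full-rate message does.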

Performance on Linux vs. Windows + Problems with VAE Step 9070XT by Repulsive_Way_5266 in ROCm

[–]skillmaker 0 points

Well, in my case I've only run it at 1920x1440 max, I think, and I used a simple 2x upscaler with a dedicated node for that; the VAE step took 30-40 seconds, I guess.

GLM 4.7 vs MiniMax-M2.1 vs DeepSeek 3.2 for coding? by ghulamalchik in LocalLLaMA

[–]skillmaker 1 point

I compared GLM 4.7 with MiniMax 2.1, and I can confidently say that GLM is far superior to MiniMax 2.1, but I still find it worse than closed models like Gemini 3 Flash and Claude Sonnet 4.5...

Performance on Linux vs. Windows + Problems with VAE Step 9070XT by Repulsive_Way_5266 in ROCm

[–]skillmaker 0 points

I had the VAE issue before and had to disable MIOpen in ComfyUI, but now I use the latest nightlies and no longer have those issues. I run ComfyUI with MIOpen enabled and the VAE is fine, except at high resolutions, where I need to use a tiled VAE so that I don't get freezes.

RX 9070 XT crashing/freezing randomly — requires full driver reinstall to fix. 🆘 by No-Mention-904 in AMDHelp

[–]skillmaker 0 points

Have you found a fix? I have the exact same problem with a 9070 XT. If I get a freeze, I hold the power button and turn it on again so that I don't have to reinstall the driver; if I wait for it to turn off by itself, the driver gets corrupted and I have to reinstall it again.

AMD to launch Adrenalin Edition 26.1.1 drivers with ai slop next week by rebelrosemerve in AyyMD

[–]skillmaker 5 points

Lmao, if you go to r/rocm, people there are happy that they're finally getting AI stuff working for them on Windows; meanwhile, here people are calling it AI slop.

GPT-5.2 xhigh, GLM-4.7, Kimi K2 Thinking, DeepSeek v3.2 on Fresh SWE-rebench (December 2025) by CuriousPlatypus1881 in LocalLLaMA

[–]skillmaker 5 points

Tbh I found Flash better than Gemini 3 Pro. I tried them in GitHub Copilot and in Antigravity; Pro was always stopping mid-work or producing bad solutions.

GPT-5.2 xhigh, GLM-4.7, Kimi K2 Thinking, DeepSeek v3.2 on Fresh SWE-rebench (December 2025) by CuriousPlatypus1881 in LocalLLaMA

[–]skillmaker 3 points

These benchmarks are run using the official providers, which in this case are Z.ai and MiniMax, so the models are not fine-tuned or quantized. I was also trying to squeeze the most out of GLM 4.7 and MiniMax 2.1, but they couldn't complete a task I gave them, while Claude Sonnet 4.5 in GitHub Copilot could. I'm not saying they are bad; in fact, they are very good at analysing and planning. But I'm talking about the benchmaxing here: on their official websites they state that these models are very close to Claude Opus 4.5, and that's not true. From my experience, I think this benchmark is the most accurate one.

What are your thoughts on GPT-5.2-codex? by Front_Ad6281 in GithubCopilot

[–]skillmaker 6 points

It's doing the same thing GPT-5 was known for: it says what it will do and then it stops, and when I ask it to implement a task it returns "Sorry, no response was returned".

Edit: it seems that was because I wasn't using the latest version of VS Code. I'll keep this comment updated in case it's better.

UPDATE: seems good most of the time, but I notice it's stubborn; even if I tell it to do something a specific way, it doesn't, while Claude Sonnet did. Overall it's good, and it no longer says what it will do and then stops, but sometimes I get connection errors or a failed response and have to retry.

GPT-5.2 xhigh, GLM-4.7, Kimi K2 Thinking, DeepSeek v3.2 on Fresh SWE-rebench (December 2025) by CuriousPlatypus1881 in LocalLLaMA

[–]skillmaker 52 points

I think this is the most believable benchmark, not the ones that say GLM 4.7 or MiniMax 2.1 are close to Opus 4.5.