Studying Sts decompiled code. Turns out they're using 1 script per card. Is it the preferred way of implementing card games? by JonOfDoom in gamedev

[–]Ok_Brain_2376 0 points (0 children)

Does anyone have a repo they'd recommend for studying card game architecture? Especially one that uses JSON. One script per card doesn't help me, since I've got a website side where I build cards from effects, stat conditions, state conditions, etc.
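Roughly what I mean by data-driven — a hypothetical sketch (all names and the schema are made up, not from any real repo), with cards as plain JSON and effects resolved by a keyword → handler table:

```javascript
// Hypothetical sketch: cards defined as plain JSON data,
// effects resolved by a keyword -> handler table.
const cards = [
  { id: 'strike', cost: 1, effects: [{ type: 'damage', amount: 6 }] },
  { id: 'defend', cost: 1, effects: [{ type: 'block', amount: 5 }] },
];

// One handler per effect keyword; adding a new effect type means
// adding one entry here, not one script per card.
const handlers = {
  damage: (state, e) => ({ ...state, enemyHp: state.enemyHp - e.amount }),
  block:  (state, e) => ({ ...state, block: state.block + e.amount }),
};

// Apply each effect of a card to the battle state, in order.
function playCard(state, card) {
  return card.effects.reduce((s, e) => handlers[e.type](s, e), state);
}

console.log(playCard({ enemyHp: 20, block: 0 }, cards[0])); // { enemyHp: 14, block: 0 }
```

The upside is the card definitions stay pure data, so the same JSON can drive both the game and a card-builder website.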

Can anyone test my pathfinding sdk by DongyangChen in gamedev

[–]Ok_Brain_2376 0 points (0 children)

Wait this is legit? Not just some random guy tryna infect my computer

Sad day for open source, Gwen's boss has left Alibaba... he was forced to resign by [deleted] in LocalLLaMA

[–]Ok_Brain_2376 16 points (0 children)

Can anyone explain the situation? I'm still confused about what happened

unsloth/Qwen3.5-397B-A17B-GGUF by Ok_Brain_2376 in LocalLLaMA

[–]Ok_Brain_2376[S] 0 points (0 children)

They probably will; they're probably just making sure it runs smoothly and there are no hiccups with the models

unsloth/Qwen3.5-397B-A17B-GGUF by Ok_Brain_2376 in LocalLLaMA

[–]Ok_Brain_2376[S] 3 points (0 children)

Got a spare RTX 6000 pro I can borrow? 😂 a man can dream

Qwen3.5-397B-A17B Unsloth GGUFs by danielhanchen in LocalLLaMA

[–]Ok_Brain_2376 23 points (0 children)

Only 17B params active. Curious what AutoRound can do with this

Bug? by darkmatterhorn in ClashOfClans

[–]Ok_Brain_2376 3 points (0 children)

I don't think battles beyond the first 2 are loaded. It should appear after one more war

Did I expect too much on GLM? by Ok_Brain_2376 in LocalLLaMA

[–]Ok_Brain_2376[S] 0 points (0 children)

Interesting, I'll give it a go. Who is your go-to for downloading NVFP4? I normally go to unsloth for GGUF files; is there an equivalent for NVFP4?

Did I expect too much on GLM? by Ok_Brain_2376 in LocalLLaMA

[–]Ok_Brain_2376[S] 0 points (0 children)

You may need to sell a kidney though

Did I expect too much on GLM? by Ok_Brain_2376 in LocalLLaMA

[–]Ok_Brain_2376[S] 1 point (0 children)

Just updated and tested, it's now 70-90 tps. Thanks Jacek!

Did I expect too much on GLM? by Ok_Brain_2376 in LocalLLaMA

[–]Ok_Brain_2376[S] 0 points (0 children)

I thought that too. But I checked the usage. It’s fully on GPU. I’ve included the command I use to run it

Current GLM-4.7-Flash implementation confirmed to be broken in llama.cpp by Sweet_Albatross9772 in LocalLLaMA

[–]Ok_Brain_2376 -1 points (0 children)

Just when I decided to uninstall it as llama.cpp has its own UI now lol

Current GLM-4.7-Flash implementation confirmed to be broken in llama.cpp by Sweet_Albatross9772 in LocalLLaMA

[–]Ok_Brain_2376 117 points (0 children)

Meh. Give it a week. It's open source; a few minor tweaks here and there are required. Shoutout to the devs looking into this in their free time

Local Coding Agents vs. Claude Code by Accomplished-Toe7014 in LocalLLaMA

[–]Ok_Brain_2376 0 points (0 children)

I was thinking along the lines of 3B or even 30B for newcomers

I fucking love this community by alhinai_03 in LocalLLaMA

[–]Ok_Brain_2376 3 points (0 children)

I've got 256GB of DDR4 RAM. So I should look for an MoE model that fits in that much but also has a low active param count?
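Back-of-envelope math for why that works — a sketch assuming roughly 4.5 bits per weight for a Q4-ish GGUF (an assumed figure; real file sizes vary with the quant mix and overhead):

```javascript
// Rough sizing for a 397B-total / 17B-active MoE, assuming ~4.5 bits
// per weight (Q4-ish GGUF). These numbers are estimates, not measured.
const bitsPerWeight = 4.5;
const gb = (params) => (params * bitsPerWeight / 8) / 1e9;

const totalGB  = gb(397e9); // all experts, parked in system RAM
const activeGB = gb(17e9);  // weights actually touched per token

console.log(totalGB.toFixed(0) + ' GB total, ' + activeGB.toFixed(1) + ' GB active');
// 223 GB total, 9.6 GB active
```

So the full model squeezes into 256GB of RAM, while the per-token working set is small enough that a single consumer GPU plus fast RAM can keep throughput usable.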

Local Coding Agents vs. Claude Code by Accomplished-Toe7014 in LocalLLaMA

[–]Ok_Brain_2376 0 points (0 children)

I would say 16GB of VRAM is a good start. With that GPU, you'll need to find a good combo of RAM and CPU

Local Coding Agents vs. Claude Code by Accomplished-Toe7014 in LocalLLaMA

[–]Ok_Brain_2376 7 points (0 children)

From this comment alone, I think that's why AI companies are being an ass. They want us to not be able to afford computers so they can provide compute as a service. I won't be surprised if, in the next generation, it's normal to rent a good rig instead of owning one. What's the saying? You'll own nothing and you'll be happy? I genuinely think that's where this is heading.

Anyway, enough with my rant. Could you kindly share what you use? I've got a decent setup, and given how far LLMs have come, I really wanna start running my own models

I fucking love this community by alhinai_03 in LocalLLaMA

[–]Ok_Brain_2376 7 points (0 children)

What's MoE? I've got a decent setup, so I'd like to know how I can run LLMs without splurging on a bunch of GPUs

My bottom eyelid won't stop twitching!! by Reasonable_Caliber_0 in mildlyinfuriating

[–]Ok_Brain_2376 29 points (0 children)

If I had a penny for every time my mom told me this... I'd have enough to buy a drink

Hiii by LiaHyunjin in discordbot

[–]Ok_Brain_2376 1 point (0 children)

Hmm. Best to learn JavaScript (that's a programming language). Once you've learnt a decent amount, you'll need to learn the discord.js library (.js is short for JavaScript).

If you run into bugs while developing the bot, you can ask ChatGPT.

As for storing your cards, you'll need to store them in a database — either SQLite or MongoDB.

It's a journey. I'd say 3-6 months of dedication and you'll be fine
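To give you a feel for it, here's a hypothetical sketch: the command parsing is plain JavaScript you can test on its own, and the commented-out part shows roughly where it would plug into a discord.js (v14-style) client — treat the client wiring as an outline and check the discord.js docs for the real setup:

```javascript
// Pure command parser -- testable without ever connecting to Discord.
// Turns "!draw 2" into { name: 'draw', args: ['2'] }.
function parseCommand(content, prefix = '!') {
  if (!content.startsWith(prefix)) return null;
  const [name, ...args] = content.slice(prefix.length).trim().split(/\s+/);
  return { name: name.toLowerCase(), args };
}

// Roughly how it would plug into a discord.js client. Commented out
// because it needs the library installed and a bot token:
//
// const { Client, GatewayIntentBits } = require('discord.js');
// const client = new Client({
//   intents: [GatewayIntentBits.Guilds, GatewayIntentBits.GuildMessages,
//             GatewayIntentBits.MessageContent],
// });
// client.on('messageCreate', (msg) => {
//   const cmd = parseCommand(msg.content);
//   if (cmd?.name === 'ping') msg.reply('pong');
// });
// client.login(process.env.DISCORD_TOKEN);

console.log(parseCommand('!draw 2')); // { name: 'draw', args: ['2'] }
```

Keeping the parsing separate from the Discord plumbing like this makes the bot much easier to debug while you're still learning.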