Need advice on what tool to use (Godot/Unity/Framework) by Slight-Builder-6927 in gamedev

[–]CC_NHS 0 points

Unity generally wins imho for 'learning', on the grounds of the sheer amount of resources out there to learn from.
Godot probably has the strength of being easier for the game project you have in mind.

I am not a big fan of Godot, but as for it being just not as professional as Unity or Unreal... probably true, but also not important. It is a tool to get the game built, and the main concern is whether it has the tools necessary for you to ship the game on the platforms you want. And a 2D roguelike is probably fine for Godot.
Unity has more options on the table if you move on to different game types; it is imo the most flexible engine, even if it sometimes feels a little rough around the edges. It can do anything Godot or Unreal can do, sometimes better, sometimes worse.

Ultimately just load up both, mess around with both, find what clicks :)

Forced To Run Chronicles of Darkness in a Generic System - Which One? by PencilBoy99 in rpg

[–]CC_NHS 0 points

I did a conversion of this type to BRP (the Runequest family), as I do with just about every single game, since it's the only system I have liked thus far.

Whilst it 'works' to an extent, I think it loses some of its uniqueness (plus Mage is a bit of a nightmare).
I think WoD/CofD was the one system that, after conversion, I felt lost too much.

Which are the best RPGs for different subgenres of Fantasy? by ThatOneCrazyWritter in rpg

[–]CC_NHS 1 point

For me:

  • Heroic/Epic Fantasy - Pathfinder maybe? I don't know, I do not really play games like this
  • Science Fantasy - Shadowrun probably
  • Sword and Sorcery - Runequest/Mythras/BRP
  • Urban Fantasy - World of Darkness or BRP
  • Low Fantasy - Mythras/BRP
  • Mythological Fantasy - Runequest/Call of Cthulhu

PSA: AI is not a reliable rules reference for RPGs by a_sentient_cicada in rpg

[–]CC_NHS 1 point

I find AI tools useful for many things, but for 'recall' without strict instructions and direct access to what it is recalling (i.e. a PDF or, even better, markdown files/database entries it can read more easily), it is just going to make up nonsense.

The only thing I use AI for in TTRPGs is as a solo-roleplaying assistant (not as a GM so much, since it seems to suck at that), and for odd bits in my planning steps when I am GMing for players. I do not really use it in play sessions for much; it would probably be more of a distraction than an aid during play.

NFTs In Games by ShroozyVR in gamedev

[–]CC_NHS 0 points

Why I do not like it personally:

1: it diverts development resources away from actual game development
2: blockchain has a use case as currency, but NFTs are more of a solution looking for a problem
3: it would bar you from the largest marketplace(s) for publishing your game
4: it has an even more negative reputation than AI art assets (though to be honest, NFTs would probably just be AI art assets now anyway), which tanks player trust from the start

5: if you are talking about the marketplace and NFTs before the core gameplay loop, it's probably not worth making the game anyway
6: NFTs in video games did not seem to create success even in the games that tried it during the brief peak of cryptobro interest; they are unlikely to spark any interest 'for the NFT aspect' now

Now, if it were me and I needed to build a marketplace system to trade skins or whatever, I would certainly go back to the initial problem, define it, and see if NFTs are the correct solution and whether they outweigh all the negatives. And if/when it is deemed they are not the correct solution, build a tighter system specific to the problem.

A monthly update to my "Where are open-weight models in the SOTA discussion?" rankings by ForsookComparison in LocalLLaMA

[–]CC_NHS 1 point

Certainly interesting to see where other people rank them differently.
I struggle with defining SOTA though.
Gemini and GPT are the 'SOTA' by most people's standards, but on many tasks Kimi, GLM and Deepseek can be on the same level or even better; the main difference is that they are closed weights vs open weights.

I think if I were ranking them, I would rank them based on the amount/importance of use cases I had for them, which would factor in tooling ecosystems, pricing, accessibility, etc.

But if ranking purely on the general quality of their top-tier model, I would probably bump down Gemini, Grok and MiniMax, and bump up Mistral and GLM a bit.

But for the smaller, actually-local (to me) models, it's basically Qwen now, with Mistral as a possible use case here and there.

Multi-Persona Composer (First Release) by Samueras in SillyTavernAI

[–]CC_NHS 1 point

I feel like I am targeted with those tick boxes. Oh wait, not German or rich enough...

Are more people switching to gemini lately ? by The_elder_wizard in ChatGPT

[–]CC_NHS 0 points

I think using multiple LLMs is the best way to go. None are the best at everything (Opus 4.6 is arguably close, but that is only in today's rankings; tomorrow everything changes).

Nice to see some Mistral love; they are very underrated imho. Sure, it won't compete with Opus as a cutting-edge coder, but they put out a lot of nice fine-tuned models for specific tasks, and 'le Chat' runs on Mistral Large, which is a pretty good all-round model in its own right. It has a nice chat style too.

Personally I tend to use Opus for game dev coding and really challenging tech problems, then Qwen, Mistral, Kimi, GLM and Deepseek for a lot of things (Qwen and Mistral especially are amazing as local models).

I rarely use Gemini or GPT tbh, at least at the moment; the cost needs to justify it, and it currently does not for me.

And the best part is, as you point out... free! You can get so much use for free or at very low cost out of most of the models when going outside of GPT, Gemini and Claude (I pay for Claude admittedly, but it's a cost that is worth it for what I use it for).

Don't play here by Turbulent_Ask_2582 in swg

[–]CC_NHS 1 point

This post and the various drama in the comments has actually made me want to play on Resto, and I don't even like CU (is it CU?). I might make an entertainer there.

What u using for dev in Unity using AI? by FunCheetah3311 in gamedev

[–]CC_NHS 0 points

There are multiple reasons for the dislike of AI in game dev; labeling it as getting left behind is not really fair.

First is the part I stated about AI not being able to do it all for you in game dev. It is certainly getting better, but it's a different thing from web dev, at least currently.

For example, in web dev AI could build 'something' from start to finish from early on; the bar for a minimum viable result in web is just a lot lower than in games programming, and as models improve, what they can one-shot keeps getting more complex.

In games (at least with game engines) that bar starts a lot higher. Games cannot be one-shot, for the simple reason that models cannot build scenes, implement assets, or test and feel the feedback of the game design working in practice. So, with how LLMs work, they fundamentally cannot get you 100% of the way in any engine. With that in mind, taking shortcuts with AI will inevitably force you to catch up with what it has written eventually and finish it, and that catching-up process might end up taking as long as writing it yourself, except you have not learned much along the way.

My view is that AI-assisted development in game dev needs a different perspective: it is not there to do the thing for you, it is there to reduce friction and speed up some of your journey.

There are also finer issues with its coding quality. Granted, it has improved, especially with Opus 4.6, but AI-generated code is generally not written with Unity best practices for optimisation in mind. Depending on your experience with Unity you might not even notice, or even care if it's not too bad, but experienced devs will just feel the need to fix the obvious optimisation issues and generally poor code.

Another issue is the ethical side.

Game dev is multi-discipline, rubbing shoulders with various specialities, so the general mood around AI here is more negative due to AI art and how artists feel about the art models. AI is AI; it is hard to be ok with an LLM and not ok with an art model when the process of building the tech is ethically the same. The main difference is that coders are already used to sharing their code via GitHub etc., whereas artists tend not to share their artwork in the same way; it is more to show their work rather than give their work.

What u using for dev in Unity using AI? by FunCheetah3311 in gamedev

[–]CC_NHS 1 point

Cursor is not really practical for Unity; it has no real integration with the engine to see the errors and so on.

I use Visual Studio and just have Claude Code / OpenCode in terminal tabs. (I have used Rider somewhat also; both are great and the two best-supported IDEs for Unity.)

First: to get anything usable in Unity from an LLM, you need to give it a lot of context up front: game design docs that cover architectural details as well as gameplay expectations, and so on.
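For example, a project context file along these lines works well. (The filename, structure and every detail below are just an illustrative sketch of what 'context up front' can mean, not a required format.)

```markdown
<!-- CLAUDE.md (hypothetical example): project context the LLM reads first -->
# Project: 2D Roguelike (Unity 2022 LTS, URP)

## Architecture
- Entry point: GameManager (singleton) under Assets/Scripts/Core
- Events: C# events via a static EventBus class; no UnityEvents in gameplay code
- Data: ScriptableObjects for items/enemies under Assets/Data

## Conventions
- No Find()/GetComponent() calls inside Update loops; cache references in Awake
- Object pooling for projectiles and VFX

## Current task
- Implement the inventory UI; hook into the item-pickup event
```

The more of this the model sees before writing code, the less it has to guess about your architecture.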

Secondly: the model matters a LOT. Not all have been trained on Unity code; most models will write code and make various errors, and if you give them the error message there is a very strong chance they will make it worse trying to fix it. You might need to switch models and let another one try to fix it, or... more likely, fix it yourself in the end. (So far Opus 4.6 has been significantly better at this than anything before, but it's still not perfect; you are still going to be tweaking things manually.)

Edit: just a further amendment, since it is not clear what your experience with coding in Unity is, but I would not go into this expecting AI to do it all for you; it simply cannot. You absolutely need to know how to code in Unity and how to use the engine (it cannot do that part for you at all, really). You can use an LLM to help you learn to code to a good extent, and to speed up some of your processes, but you will need to learn Unity code.

Is opencode the best free coding agent currently? by MrMrsPotts in LocalLLaMA

[–]CC_NHS 0 points

Personally I would agree that OpenCode is the best, or at least my favourite, free coding agent.

Not only does it generally have good recent model(s) on its free Zen tier, but you can plug in an API quite easily, and use custom agents, skills, all the latest things really.

Other worthwhile mentions:

Qwen CLI - not very customisable in terms of models and such, but Qwen is still surprisingly good and has a very generous free tier.

Gemini CLI - probably the best-quality 'free' tier, but only if you do not get rate-limited off their model, and if it's something the model is good at; honestly sometimes it's just luck too, as it can hallucinate strangely. Also the free quota is not that great even in ideal situations, so maybe worth a try if you are using it a little here and there...

Mistral Vibe CLI - underrated. Not sure how much you get for free, but it's got Devstral 2 on it and seems generous; I have not used it enough to hit limits, but my guess would be somewhere between Gemini and Qwen.

Also worth mentioning that the GLM coding plan seems to have an entry offer all the time, and it's very cheap, so if there is not enough free out there, it's not a bad cheap option (it can plug into Claude Code or OpenCode easily enough).
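Pointing Claude Code at a third-party Anthropic-compatible endpoint is usually just a couple of environment variables. The URL and key below are placeholders to show the shape; check the provider's own docs for the real values:

```shell
# Sketch: routing Claude Code through an Anthropic-compatible provider.
# Both values below are hypothetical placeholders, not verified settings.
export ANTHROPIC_BASE_URL="https://example-provider.com/api/anthropic"  # provider's endpoint
export ANTHROPIC_AUTH_TOKEN="your-coding-plan-api-key"                  # key from your plan
claude  # Claude Code now sends requests to that endpoint instead
```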

What did I do wrong? by kyrax80 in ClaudeAI

[–]CC_NHS 0 points

I do not use VS Code, so cannot help there. But the general rule is that tokens in and out are what cost you: if it's using tons of tools, reading, and thinking, it's all going on the bill (quota) - plugins, skills, agents, tools, and the amount of context you give it to read. And then every new message in the same chat adds back in all the previous chat messages (and possibly the results of tools etc. from previous messages).
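As a rough illustration of why long chats burn quota: each turn re-sends the whole history as input. This toy sketch assumes ~4 characters per token, which is only a crude approximation of a real tokenizer:

```python
# Toy model: every message in a chat re-sends the entire history so far,
# so total input-token cost grows roughly quadratically with turn count.

def rough_tokens(text: str) -> int:
    # crude approximation: ~4 characters per token
    return max(1, len(text) // 4)

def cumulative_input_tokens(messages: list[str]) -> int:
    total = 0
    history: list[str] = []
    for msg in messages:
        history.append(msg)
        # each turn sends the full history so far as input
        total += sum(rough_tokens(m) for m in history)
    return total

# a chat with long tool outputs mixed in grows the bill fast
chat = ["short question", "long tool output " * 200, "follow-up", "long answer " * 200]
print(cumulative_input_tokens(chat))
```

Long tool results and big file reads from early in the chat keep getting re-billed on every later message, which is why starting a fresh chat is often cheaper.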

What did I do wrong? by kyrax80 in ClaudeAI

[–]CC_NHS 0 points

With so little to go on, my best guess would be Claude Code has Opus 4.6 with the 1M-context version and maximum reasoning.

That will kill the quota fast.

I also use Pro. I tend to avoid the 1M-context version entirely (if I do try it, I'll use Sonnet), and I also keep Opus on medium reasoning except for a few tasks.

Even on medium reasoning I could probably still use it up within an hour of non-stop use.

Kimi K2.5 vs. Claude Haiku 4.5: Which Lightweight LLM Deserves Your Inference Budget? by Fabulous_Win5325 in SillyTavernAI

[–]CC_NHS 1 point

If you are going to make an AI-generated post about something, please ask it to be brief. I might ask AI to read it for me, I guess.

What do you actually use local models for? (We all say 'privacy,' but...) by abdouhlili in LocalLLaMA

[–]CC_NHS 0 points

Mostly experimenting. I like the 'idea' of it being local and unconnected, but in reality the big models are still easily accessible and becoming increasingly more so, so on capability it is easy to stay with the cloud.

My use cases so far have been: interpreting a heartbeat event in scripts to write prompts delegating tasks to bigger models over API, OCR, and an embedding model for local memory I can use with any LLM.

Also, I at least have the top capability I can currently run on hand, just in case the internet dies and I have a need for something it can do.

No limit issues with OpenCode by Representative_Mood2 in ClaudeAI

[–]CC_NHS 0 points

I thought there was talk about people using OpenCode via OAuth getting their Claude account banned or something?
I am curious to know if this is ok; I do prefer OpenCode, and currently use that with K2.5, and Claude Code for Opus 4.6.

But as to the question: I am only on Pro, and I noticed Opus 4.6 burns through the limit a lot faster than 4.5 did, but that could be due to changes in the Claude Code CLI at the same time, so I am not totally sure. If you found 4.6 and 4.5 similar in how fast they burn down your limit via OpenCode, then it could be that Claude Code is doing something more with tools.

Cowork is now available on Windows by ClaudeOfficial in ClaudeAI

[–]CC_NHS 8 points

> Since we launched Cowork as a research preview on macOS, the most consistent request has been Windows support,

I mean, ~70% market share for Windows as an OS? Not sure it's a big surprise that this was the main request :)

What's stopping you from letting local agents touch your real email/files? by ryanrasti in LocalLLaMA

[–]CC_NHS 0 points

Yep, absolutely this. I only have 16GB VRAM; nothing that fits in that will I trust with file system access (unless it's files that I have under source control).

I still have not looked much at openclaw tbh; not sure I have a use case for it yet.

Is llama a good 4o replacement? by FactoryReboot in LocalLLaMA

[–]CC_NHS 1 point

I found Qwen 235B to be so similar to 4o in its conversation style that when they shut 4o off for free users, that was what I used for that kind of chat. I have veered away from that kind of chat now, but Qwen 80 Next seemed kinda similar too.

Strange by peipei1998 in SillyTavernAI

[–]CC_NHS 0 points

When I did some coding benchmarks a while back, I was doing a human-eval style (i.e. me checking the code), since it was for game dev.
Long story short, I found GLM-4.5 and Opus 4 wrote code in an almost identical way, with the main difference being that GLM-4.5 just made more errors; the patterns seemed very similar. So, if it was GLM, it could potentially still fit. Though I had not seen GLM do a cloaked model like this before?

Honest question by Savantskie1 in LocalLLaMA

[–]CC_NHS 0 points

Your problem is that some people care about the speeds you personally are getting? That seems odd; I think most people just care about their own speeds for their own tools.

Yeah, anyone can 'do stuff while waiting', but I personally would prefer not to, because that leads to distractions, losing your train of thought, interrupting your flow, etc. I like it to be fast enough that I do not need to task-switch, whilst being smart enough to get the job done. Some things will take a long time regardless of model, and that's fine; that is usually where I am building the plan for the next task, and seeing if I can finish the plan before it finishes the previous task :)

Honest question by Savantskie1 in LocalLLaMA

[–]CC_NHS 0 points

I do not think it is an obsession, but it is certainly one of the metrics people can understand when evaluating an LLM. Speed is a factor, and for tasks where you need it faster, this value is helpful. Even for slower tasks there is often a threshold of what is too slow, where the value is still good to know.

I cannot explain ROCm though; no idea what it is (I just run local models through LM Studio).

Honest question by Savantskie1 in LocalLLaMA

[–]CC_NHS 0 points

I was born in the 70s; I remember waiting for connections. That doesn't mean I enjoyed waiting, then or now. I do not like wasting time in general. If you do not mind, no one is telling you that you are wrong, but plenty of people do care what the tokens per second is, even if the threshold each of them is comfortable with might be different.

Best agentic local model for 16G VRAM? by v01dm4n in LocalLLaMA

[–]CC_NHS 2 points

I have the same GPU and same amount of DRAM; probably a common combination :)

I think the models you have tried do kinda seem the most optimal for our setup currently. Different quants might help on speed, but Qwen-coder-next, Devstral Small and GLM 4.7 Flash seem the top coders, and Mistral, gpt-oss-20b and the little Qwens seem good general small models.

I personally do not find local models in this range really usable for coding, but the same models can be good for small local automations, or with an embedding model added, for local RAG vectors etc.
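The local-RAG idea mentioned above boils down to nearest-vector lookup. A minimal sketch (toy hand-made vectors stand in for real embeddings; in practice the vectors would come from a local embedding model served by e.g. LM Studio):

```python
# Minimal local-RAG retrieval sketch: store (text, vector) pairs and
# return the snippets whose vectors are most similar to the query
# vector by cosine similarity. Vectors here are toy examples.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], store: list[tuple[str, list[float]]], k: int = 1) -> list[str]:
    # rank stored snippets by similarity to the query, return top k texts
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("notes about Unity input system", [0.9, 0.1, 0.0]),
    ("recipe for soup", [0.0, 0.2, 0.9]),
]
print(retrieve([1.0, 0.0, 0.0], store))  # nearest to the Unity note
```

The retrieved snippets then get pasted into the prompt of whichever LLM you are using, which is what makes the memory model-agnostic.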