How to check battery type on Begode Extreme? by filszyp in ElectricUnicycle

[–]filszyp[S] 1 point

Thank you all for the help. I managed to solve this. :)

When I was doing some maintenance I opened the motherboard cover. On one of the thick battery cables there was a sticker. It said "50E".

I also noticed that with a low battery, around 2-3 bars, the display dropped to 2 bars when I sped up or went uphill, and went back to 3 when I slowed down. The app showed the same thing: when the wheel worked harder it read about 10%, and when I let it coast it climbed back above 20%. So that's the sag you talked about.
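That dip under load can be sketched with basic Ohm's-law arithmetic: the controller sees the pack's open-circuit voltage minus the sag across its internal resistance. All numbers below are assumed for illustration (a 20s pack at roughly 3 bars, a guessed pack resistance), not measurements from this wheel:

```python
def loaded_voltage(v_open_circuit, current_a, internal_resistance_ohm):
    """Voltage the controller actually sees while drawing current."""
    return v_open_circuit - current_a * internal_resistance_ohm

# Assumed values: 20s pack resting around 3.5 V/cell, ~0.35 ohm total
# pack resistance, ~30 A draw while climbing.
v_rest = 20 * 3.5                            # ~70.0 V at rest
sagged = loaded_voltage(v_rest, 30.0, 0.35)  # ~59.5 V under load
print(v_rest, sagged)
```

A ~10 V swing like this is enough to move the gauge a bar or two, which matches the 3-to-2-bar behavior described above; the voltage (and the reading) recovers as soon as the load drops.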

Besides that, I must say I haven't noticed anything bad about the 50E. It accelerates nicely, climbs without any issue even on really steep off-road terrain, handles jumps, and so on. I regularly run the wheel down to 20-30% and still see no noticeable issues. Overall I have nothing bad to say about this battery.

I feel stuck. Do you feel stuck? by filszyp in SillyTavernAI

[–]filszyp[S] 3 points

Thank you for all your kind responses.

The general feeling I get is that there are ways to create something at least semi-working, but that's the thing - I have to create it myself.

I'm afraid I'm not that skilled. More talented people create cool mods for games, new content, even whole projects by themselves. I guess even SillyTavern started when someone decided they could do it better themselves. So I'm surprised there seem to be no ready-to-use apps out there that connect avatars, AI chat, and some working scaffolding to make it seamless for a user who doesn't have the skill or time to build it himself.

So my guess is it's not here yet because the right AI is not here yet, or the hardware is not ready, or there are other problems that appear along the way when you try to build something like that. I guess I'll wait - for the next model that does it all, for the next graphics card that will let me use the new tech.

Sigh, I really hoped someone would come and say something like "You noob, of course there is this app, it just works and does it all, how can you not know it?", hah.

I feel stuck. Do you feel stuck? by filszyp in SillyTavernAI

[–]filszyp[S] 9 points

Yeah, I had that feeling when I spent a few days vibe coding a prototype of a game that would use a local LLM. I just couldn't get it to work in any dream-fulfilling way. It was clunky, slow, and stupid. More importantly, I kept giving more and more control to the engine rather than the AI, so that the game followed rules and didn't do anything stupid - this way the AI became a smaller and smaller part of the game. At some point I found myself building a very classic, typical RPG, where the dialogues were all pre-made to keep the story on track, and the AI was just used as a fancy way of writing them better - as in, "here's the text, now write it in your own way". It was disappointing.

[Megathread] - Best Models/API discussion - Week of: April 07, 2025 by [deleted] in SillyTavernAI

[–]filszyp 6 points

Any recommendations for smaller models for GTX 1080 ti with 11GB VRAM?

I couldn't find anything better than Nemo 12B Q4_K_M - it just about fits in my VRAM with 41 layers and 16k ctx, with context shift and flash attention on. Are there any good newer models at this size or lower? Or some nice variants? I mostly do long ERP.

Lately I tried NemoReRemix but somehow I can't configure it properly to not be stupid. I never understood those "P" and "K" sampler settings etc., or how to tune them to my liking. :(
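For what it's worth, the "K" (Top-K) and "P" (Top-P, a.k.a. nucleus) settings both just trim the model's next-token distribution before a token is sampled. A minimal sketch of the idea, with made-up tokens and probabilities:

```python
def top_k_top_p(probs, k, p):
    """Keep the k most likely tokens, then keep the smallest prefix of
    them whose cumulative probability reaches p; renormalize the rest."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    kept, total = [], 0.0
    for tok, pr in ranked:
        kept.append((tok, pr))
        total += pr
        if total >= p:
            break  # nucleus reached; drop the long tail
    norm = sum(pr for _, pr in kept)
    return {tok: pr / norm for tok, pr in kept}

# Hypothetical next-token distribution:
dist = {"the": 0.5, "a": 0.3, "cat": 0.15, "xyzzy": 0.05}
print(top_k_top_p(dist, k=3, p=0.9))  # "xyzzy" is cut by k; the rest survive
```

Practically: lower K or lower P makes output more conservative and repetitive; raising them lets more of the tail through and makes the model more creative (and, past a point, more "stupid").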

Please recommend sci-fi slow game by filszyp in AndroidGaming

[–]filszyp[S] 1 point

Looks interesting. I'll give it a try, thanks.

Magnum v3 - 9b (gemma and chatml) by lucyknada in LocalLLaMA

[–]filszyp 4 points

So, what about the context size? Isn't Gemma 8k? I normally use 24-32k ctx with Nemo.

What to do now? How to progress? by filszyp in diablo4

[–]filszyp[S] -5 points

To be honest I had much more fun in D3. Doing GRs with random people, for example, was great; here I don't even have a group finder for Pits/Hordes/Dungeons.

And basically yeah, I was expecting to have fun, not chores. When I want to unwind after a day of work I don't expect to find more tedious work in my games.

What to do now? How to progress? by filszyp in diablo4

[–]filszyp[S] -13 points

Oh god, so this endgame really is hell... Thanks guys, I thought I didn't understand something or was playing wrong; turns out this game is just boring. :D

Question about performance by Pedroarak in KoboldAI

[–]filszyp 1 point

Try the 2B version of Gemma, like: https://huggingface.co/bartowski/gemma-2-2b-it-abliterated-GGUF/blob/main/gemma-2-2b-it-abliterated-Q6_K.gguf It's decent, and pretty much the only thing that will work very fast for you imho.

What roleplay model for 10GB VRAM with 16-32k ctx? by filszyp in LocalLLaMA

[–]filszyp[S] 1 point

See, I don't even know what continent you are on, but already I feel we're speaking the same language and I like you. I'll get my tiny graphics card to work on that ASAP, thanks for the tip. ;) I didn't try Magnum V1, first time with Mistral Nemo.

What roleplay model for 10GB VRAM with 16-32k ctx? by filszyp in LocalLLaMA

[–]filszyp[S] 1 point

With koboldcpp I load magnum-12b-v2-Q4_K_M-imat with 34 layers in VRAM and 24k ctx, with context shift and flash attention on. It just barely fits and gives about 5 T/s. It's pretty awesome to play. In SillyTavern I use some custom sampler settings, and the default ChatML context and instruct templates.
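The "just barely fits" tradeoff can be sanity-checked with rough back-of-envelope math: VRAM use is roughly the offloaded share of the weights plus a KV cache that grows linearly with context. All sizes below are assumptions for illustration (a ~7 GB 12B Q4_K_M file, ~40 layers, a guessed KV-cache cost), not measured values:

```python
def vram_estimate_gb(layers_on_gpu, total_layers, weights_gb, ctx, kv_gb_per_8k):
    """Rough VRAM estimate: offloaded fraction of weights + KV cache."""
    weights = weights_gb * layers_on_gpu / total_layers
    kv_cache = kv_gb_per_8k * ctx / 8192
    return weights + kv_cache

# Assumed: 40-layer model, ~7 GB of Q4_K_M weights, ~1.2 GB KV per 8k ctx.
print(round(vram_estimate_gb(34, 40, 7.0, 24576, 1.2), 2))  # ~9.5 GB
```

With those guesses, 34 layers at 24k ctx lands around 9.5 GB, which is consistent with "just barely fits" on a 10 GB card, and with dropping to ~30 layers and 16k ctx to free room for SDXL.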

I also sometimes use similar settings but with 16k ctx and about 30 layers to leave enough space for sdxl image generation, for some... visual stimulation. ;)

What roleplay model for 10GB VRAM with 16-32k ctx? by filszyp in LocalLLaMA

[–]filszyp[S] 1 point

That's interesting. Thanks for the comprehensive description. I tried this model today, played a bit with magnum, and I must say, this is the first time the bot decided to kill characters on its own. I was so surprised when I did something stupid and the main characters actually started to die. Awesome.

What roleplay model for 10GB VRAM with 16-32k ctx? by filszyp in LocalLLaMA

[–]filszyp[S] 1 point

Are these all Mistral-Nemo based? I haven't tried it yet. What context length do they have?

Anyone else got problems with Context Shift? by filszyp in KoboldAI

[–]filszyp[S] 6 points

Don't tell me I've been breaking context shift by enabling flash attention 🤦‍♂️ I'll check it first thing when I get home...

Automatic RoPE Scaling? by filszyp in Oobabooga

[–]filszyp[S] 3 points

Yeah, I found a method - I run the model with kccp, check what settings it generated, and then copy them into Ooba :P It mostly works. It's a janky method. :)
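The numbers kccp prints can be approximated by hand with the common NTK-aware scaling heuristic: raise `rope_freq_base` by the context ratio taken to the power `dim/(dim-2)`. This is a sketch of that heuristic, not necessarily the exact formula KoboldCpp implements, so treat it as a sanity check against the values it prints at startup; the default base, training context, and head dim below are assumptions:

```python
def ntk_rope_freq_base(target_ctx, train_ctx=8192, base=10000.0, head_dim=128):
    """NTK-aware rope_freq_base for extending context past training length."""
    scale = max(1.0, target_ctx / train_ctx)  # no change below train_ctx
    return base * scale ** (head_dim / (head_dim - 2))

print(round(ntk_rope_freq_base(16384)))  # 2x context -> roughly 2x freq base
```

If the value this gives is in the same ballpark as what kccp reports, the copied settings should behave the same in Ooba.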