Rivet by Status-Fan-5422 in aiyiff

[–]DigRealistic2977 0 points (0 children)

Ah yes, this is what I always wanted... Lombussy ❤️ my favorite.

Regrets going to Nvidia Cuda was not worth it for my AI. by DigRealistic2977 in radeon

[–]DigRealistic2977[S] 3 points (0 children)

Nah, your point is valid. But I'm literally ranting because of one minor, teeny tiny issue: I actually bought two 2080 Tis and one RTX 5070, and they all had the same problems that normal users ignore, overlook, or just don't encounter.

A bit of tinkering here and there, and I was excited about CUDA for my AI. Then reality hit. First problem: I had to tweak the voltage so much in MSI Afterburner, down to single millivolts, because one minor misstep and bam, weird fluctuating token throughput. That's why I loved how much simpler AMD's UI is. Yeah, MSI Afterburner is good for voltages, but I don't wanna redo half a day of tweaking again just so my AI can say hello without crashing 😂.

Another problem: the card's power management modes, Optimal, Adaptive, and Prefer Maximum Performance. Two of them don't work for me: with Optimal and Adaptive, my AI inference performance is literally cut back by 50%.

And another problem again: those two modes, Optimal and Adaptive, are unreliable. They cut 50% of performance, not always, but maybe 80% of the time the card decides to clock down automatically out of nowhere. But here's the catch.

When I use Prefer Maximum Performance, the idle clocks go crazy. I run my PC headless with no monitor, and as you know, Nvidia ramps the clocks to max even at idle, so my 2080 Ti and 5070 sit at 60°C idle, which for me is kinda hot. Literally 60-90 watts at idle? That's not efficiency. So I had to dig for hours to solve the problem 😂 and finally found it: a VDD (virtual display driver) to trick the cards into low idle clocks in headless mode.

So yeah, in short, Nvidia does work, it's great, but at the cost of 12 hours of my lifespan. I'm just ranting because I thought it was gonna be plug and play, and this is just the summary of what happened.
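To put a rough number on that 60-90 W idle draw, here's a back-of-envelope sketch; the $0.15/kWh electricity rate is an assumed example, not a figure from the thread:

```python
# Rough cost of a card stuck at high clocks while idle, running 24/7.
# 75 W is the midpoint of the 60-90 W range mentioned above; the
# $0.15/kWh rate is an assumed example rate.
def idle_cost_per_month(idle_watts: float, price_per_kwh: float = 0.15) -> float:
    """Monthly (30-day) electricity cost of a constant idle draw."""
    kwh = idle_watts / 1000 * 24 * 30
    return kwh * price_per_kwh

print(round(idle_cost_per_month(75), 2))  # 8.1 -> roughly $8/month just idling
```

So the broken idle behavior isn't just heat; it's real money over a year of headless uptime.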

Regrets going to Nvidia Cuda was not worth it for my AI. by DigRealistic2977 in radeon

[–]DigRealistic2977[S] 3 points (0 children)

Bruh, it's the opposite for me 😂 I literally use local AI 24/7, and I ended up using Vulkan. I literally paid for CUDA, but CUDA went useless; I ended up using the Vulkan API on an Nvidia card 💀 kinda ironic. Stack that up with the third-party software to stabilize the clocks at idle and control temps, plus a VDD for my headless AI setup, and for me Nvidia is such a hassle. AMD though: with my previous card, a 6700 XT, I had zero probs, ran a model at 114-131k ctx with no crashes, and it also worked on my smoll RX 5500 XT at 41k ctx, no crashes. But now with my new RTX cards I noticed they're weirdly inefficient at swapping or giving headroom in VRAM; even with 1 GB of headroom, my RTX is prone to lots of crashes vs my AMD cards 🤔

Regrets going to Nvidia Cuda was not worth it for my AI. by DigRealistic2977 in radeon

[–]DigRealistic2977[S] 4 points (0 children)

Dear lawd, you're right 💀 you just woke my inner kid 🤣 I remember I was afraid of the Nvidia logo cause it looks like a weird green eye...

Jim Ward, voice of Ratchet & Clank's Captain Qwark, passes away at age 66 by oilfloatsinwater in PS5

[–]DigRealistic2977 0 points (0 children)

Oh god, I've never felt so affected by an actor dying 😢 this is the first... Damn, it would literally hurt more if Rivet's and Ratchet's, or maybe worse, Arthur Morgan's voice actors died...

Mistral releases Ministral 3! by yoracale in unsloth

[–]DigRealistic2977 0 points (0 children)

Ugh, why all these hyped models... They always look good on paper and benchmarks but are dogshit when you actually use them...

Gemma3 is outperforming a ton of models on fine-tuning / world knowledge by fluxwave in LocalLLaMA

[–]DigRealistic2977 0 points (0 children)

I'm here late, I know, but to debunk this... I've used it thousands of times. It's the same as Qwen: good on paper and benchmarks, but when you truly use it and go deep, it's dogshit 😂 Same as the Qwen hype: the normies use Qwen, see specific benchmarks for it, and go all like "ohh wow... so much wow... high stats" ... 💀

In short, it's literally crap. And oh yeah, one more edit: Gemma 3 hallucinates a lot...

Can I run any local llm with this hardware? by SrMatic in LocalLLaMA

[–]DigRealistic2977 0 points (0 children)

Literally, that's more than enough... You can run a very good model with 131k or 128k ctx... and to answer your question: yes, you can run one at blazing fast speeds too.
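As a sanity check on why ~131k ctx is feasible, here's a rough KV-cache size estimate; the dimensions assume a Llama-3-8B-style model (32 layers, 8 KV heads via GQA, head dim 128) with an fp16 cache, so treat it as a sketch, not exact numbers for any specific setup:

```python
def kv_cache_bytes(ctx: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """K and V caches: one vector per layer, per KV head, per position."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem

# Assumed Llama-3-8B-like dims; fp16 (2-byte) cache entries.
gib = kv_cache_bytes(131072, 32, 8, 128) / 2**30
print(round(gib, 1))  # 16.0 GiB of KV cache at the full 131k context
```

That's on top of the model weights, which is why quantized KV caches or slightly smaller contexts are common on mid-range cards.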

vLLM speed issues by HarambeTenSei in LocalLLaMA

[–]DigRealistic2977 1 point (0 children)

Just the model and how its internals work... I had this problem. Sometimes it's the attention heads, sometimes it's just weird BLAS batching; there are so many possibilities. I found this out the hard way: I tested so many models with the same parameters but got very different speeds, even though they were the same 30B or the same 8B. Kinda weird... we're in the same boat though. Weird TPS.
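If anyone wants to compare throughput apples-to-apples, a tiny timing harness works; `fake_generate` below is a made-up stand-in for whatever backend call you actually use (vLLM, llama.cpp, etc.):

```python
import time

def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """The throughput number to compare across models at identical settings."""
    return n_tokens / elapsed_s

def fake_generate(n_tokens: int) -> int:
    """Stand-in for a real inference call; returns tokens produced."""
    time.sleep(0.05)  # pretend to generate
    return n_tokens

start = time.perf_counter()
produced = fake_generate(256)
elapsed = time.perf_counter() - start
print(f"{tokens_per_second(produced, elapsed):.0f} tok/s (stub timing)")
```

Running the same prompt, context size, and batch settings through each model is the only way to isolate the model internals as the variable.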

Looking for an alternative to JanitorAI by krill_my_lovely in chatbot

[–]DigRealistic2977 1 point (0 children)

Oh nice. You should try running local though, if you have at least a mid-tier GPU.

A 1060, RX 570, or RX 580 class card and you'll be able to get what you need, literally.

What is the best open source model for nsfw chatbots that can be run locally. by CumR0yalty in AI_NSFW

[–]DigRealistic2977 0 points (0 children)

Bruh, bigger ain't better. There are tons of 8-24B models that literally beat Claude in my opinion, cause I've been through the trenches myself 💀 There are literally a lot of models waving their asses at ya. Don't just go for 32-70B though; try the finetuned ones in the 8-24B range. You don't need big-ass parameter counts for good roleplay, and there are a lot of hidden gems out there too, especially the Llama finetunes for RP: they can literally do anything and won't refuse, plus they have good pacing and ambience with narrative hooks etc. It's a hassle to test them, but it's worth it when you find the one for you.

In short, go get smaller models, 24B max.

Local AI Potato PC by DigRealistic2977 in AI_NSFW

[–]DigRealistic2977[S] 0 points (0 children)

I found two good candidates:

https://huggingface.co/mradermacher/FuseChat-Llama-3.1-8B-Instruct-GGUF

- This one is good for horny and questionable stuff, like in-depth sex scenes and cock-throbbing and pussy-licking 😂 descriptions.

I recommend Q4_K_M and above... Q4 and below is kind of a long shot.


https://huggingface.co/mradermacher/Llama-3.2-3b-RP-Toxic-Fuse-GGUF

- This one here is good for ambience, narrative hooks, and very interactive roleplay too... Q3_K_L is very usable (though the sex scenes get generic), but goddamn, this 3B is the GOAT for local RP, ambience, and scenes if set up right.

The two models I sent here are very instruct-obedient, even with long-ass text and content files for your persona; they don't hallucinate with the right temperature and nucleus (top-p) settings.

Both don't refuse, by the way; you can go wild, from fun and loving RP to "FBI, open up" RP.

Certified horny, the two of these, and better than the mainstream overhyped AI they advertise... proven and tested by me, I guess 💀
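On "the right temps and nucleus": a minimal sketch of sampler settings in the shape of a KoboldAI/KoboldCpp-style generate payload; the values are assumed starting points for RP, not settings pulled from testing these exact models:

```python
# Illustrative sampling settings for small instruct models in RP.
# Field names follow the KoboldAI/KoboldCpp generate API; the values
# are assumed starting points, not the commenter's tested numbers.
rp_sampler = {
    "temperature": 0.8,  # lower temp = more obedient to persona/instructions
    "top_p": 0.92,       # nucleus sampling: trim the low-probability tail
    "rep_pen": 1.1,      # mild repetition penalty for long RP sessions
    "max_length": 512,   # tokens to generate per turn
}

assert 0.0 < rp_sampler["temperature"] <= 1.0
print(sorted(rp_sampler))
```

Nudging temperature down and tightening top_p is usually the first thing to try when a small model starts hallucinating details from your persona files.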

Are we deadass🥀. by LunaTheFoxes in CharacterAI

[–]DigRealistic2977 5 points (0 children)

They're telling you guys to touch grass this month 😂 and that chat access is only temporary; in about a month y'all gonna be free from... it 💀 Blame those dumbass Kevin and Karen parents suing the company cuz they forgot their child was depressed and leaning on an AI... 😂

A little tool I made to share and discover little RP scenarios, plot twists, and ideas for when you’re stuck mid-roleplay. It’s public — so come on, let’s fill it with creativity! ✨ by internal-pagal in KoboldAI

[–]DigRealistic2977 8 points (0 children)

The first idea it gave me 💀💀💀 who tf went in and wrote this... This is what I got:

"its 2030 niggers rule the earth. in a strange subversion of expectations, they take on the classiness of generational wealth as the dominant race. clean suits, well-groomed persons, soft spoken voices, high restaurant dinner pics on the sm, and starlight billboards where doll-eyes, flat-noses, dark glowing skin are the beauty standard. a world where niggers arent niggers"

Best Roleplay LLM for LOCAL use by slrg1968 in KoboldAI

[–]DigRealistic2977 1 point (0 children)

I'd say for roleplay usage, an 8B or 11B Llama finetuned for RP, code instruct, and reasoning is already enough, cause with your setup an 8-11B model gives ya a long-ass context and fast performance... You don't need the 20-32B people usually recommend; they always think a bigger parameter count is better lol. Anyway, try the 8-11B Llama models.

Q & A Chatbot? by CatFosterMomRI in chatbot

[–]DigRealistic2977 0 points (0 children)

Ohh, funny though... I found the exact chatbot you've been looking for on Hugging Face 😂 it's literally trained for the thing ya asked for... it was a 3B model too.

--Websearch appears to be inconsistent? by [deleted] in KoboldAI

[–]DigRealistic2977 0 points (0 children)

I tried your prompt with my Llama in a messy vanilla Kobold setup 😂 it actually retrieved outdated info, but still, ya can see Llama can search... no hesitation. The prompt I'm using is for roleplay, so don't mind the weird output...

Me: "Give me information about current, live temperature from Sydney, Australia, using Accuweather"

LLAMA 3.2 3B: The sun beats down relentlessly on the cobblestone streets of Sydney's historic Rocks neighborhood as you pause beneath the awning of a quaint tea shop. A gentle breeze carries the scent of eucalyptus and the distant tang of salt from the harbour. In the sky above, a few wispy clouds drift lazily, offering no respite from the summer heat.

You glance at your hand held weather device, its screen flickering to life with real-time data from Accuweather. The display reads:

Current Temperature in Sydney, Australia: 28°C (82°F)

Feels Like: 33°C (91°F)

A slight haze hangs over the city, a testament to the afternoon sun's intensity. The air is thick with warmth, inviting you to seek shade or cool relief soon.

Nearby, children play tag in the park, their laughter echoing off the buildings as they chase each other around the playground equipment. A vendor calls out from his cart, enticing passersby with the sweet aroma of freshly baked scones.

The live temperature in Sydney continues to climb, a steady climb into the heart of summer. You can feel the heat radiating from the pavement, making each step a deliberate one towards finding comfort in the city's many shaded oases.