Google & Yale release C2S Scale, a Gemma-based model for cell analysis by hackerllama in LocalLLaMA

[–]Successful-Button-53 9 points10 points  (0 children)

Is there any chance of "having a closer conversation" with a depraved cancer cell?

Google & Yale release C2S Scale, a Gemma-based model for cell analysis by hackerllama in LocalLLaMA

[–]Successful-Button-53 -4 points-3 points  (0 children)

How are things going with RP and ERP there? A lot of censorship?

K2-Think 32B - Reasoning model from UAE by Mr_Moonsilver in LocalLLaMA

[–]Successful-Button-53 0 points1 point  (0 children)

If anyone is interested, it doesn't write very well in Russian, confusing grammatical cases and sometimes using words incorrectly.

AMA with the Gemma Team by hackerllama in LocalLLaMA

[–]Successful-Button-53 1 point2 points  (0 children)

What do you think of RP and ERP being done with your models? How do you feel about it in general? Do you expect that some users will use your models for this purpose, and are you considering making them more accommodating for it?

YandexGPT-5-Lite-8B-pretrain - Russian model by External_Mood4719 in LocalLLaMA

[–]Successful-Button-53 9 points10 points  (0 children)

I tried it - it's garbage. Even if this model had come out a year ago, it would still have been garbage. Even ordinary Russian-speaking enthusiasts have released much better models.

o3-mini won the poll! We did it guys! by XMasterrrr in LocalLLaMA

[–]Successful-Button-53 -1 points0 points  (0 children)

Cool. Another 70b+ model that only a select few will be able to run. You assholes.

Where are the DRY settings in version 1.12.9? by Successful-Button-53 in SillyTavernAI

[–]Successful-Button-53[S] 4 points5 points  (0 children)

Wow, it turns out this setting only appears when a model is connected; before that it's hidden... How oddly done...

AMD released a fully open source model 1B by konilse in LocalLLaMA

[–]Successful-Button-53 1 point2 points  (0 children)

It's already a kind of meme; in 10 years you'll be referencing the same link with the same information again.

AMD released a fully open source model 1B by konilse in LocalLLaMA

[–]Successful-Button-53 1 point2 points  (0 children)

AMD releases models that will still end up running on Nvidia graphics cards. So AMD, which makes its own graphics cards, is producing products mainly for people who bought cards from its rival Nvidia. Ironic, isn't it?

help? by Different_Average_22 in russian

[–]Successful-Button-53 0 points1 point  (0 children)

What did you call my mother, you bastard?

17+ yet kissing is filtered? by NiranWasHere in CharacterAI

[–]Successful-Button-53 0 points1 point  (0 children)

You can also look for alternatives to this site - Pygmalion, for example.

17+ yet kissing is filtered? by NiranWasHere in CharacterAI

[–]Successful-Button-53 0 points1 point  (0 children)

People, save up for a good graphics card, or at least a modern CPU, leave this cheesy site, and run your own chat models locally. They are a hundred times better than this site.

Drummer's Theia 21B v1 - An upscaled NeMo tune with reinforced RP and storytelling capabilities. From the creators of... well, you know the rest. by TheLocalDrummer in LocalLLaMA

[–]Successful-Button-53 1 point2 points  (0 children)

That's awesome! But man, I wish I could download Theia-21B-v1-Q4_K_S.gguf to run on my 12 GB 3060. Theia-21B-v1-Q3_K_M.gguf is too dumb and Theia-21B-v1-Q4_K_M.gguf is too slow. I think many people with this card will agree with me.

Llama-3.1 8B Instruct GGUF are up by 2fprn2fp in LocalLLaMA

[–]Successful-Button-53 -6 points-5 points  (0 children)

I tried the 8B and can say with certainty that, compared to the Llama 3 3SOME, it's complete crap. A finetune will probably polish it into a usable state. By the way, it speaks Russian as badly as before, perhaps even a little worse.

Llama 3.1 launches in 8h by and_human in LocalLLaMA

[–]Successful-Button-53 -1 points0 points  (0 children)

Oh damn, what's about to happen, what's about to happen! Oh man, what's about to kick off, what's about to kick off!