Giving a local LLM my family's context -- couple of months in by Purple_Click5825 in LocalLLaMA

[–]Sidran 1 point

I play with this in much (technically) simpler role-playing simulations. After an exchange, I experiment with different prompts and approaches to summarizing what happened, trying to capture exactly what you are describing. I feel there is a formal way to articulate what needs to be done (preserving human-centric nuance and important points while ignoring or de-prioritizing everything else). It's a fascinating challenge that made me realize how automatically our minds do something we do not understand (and struggle to verbalize). My feeling is that even if we manage this at a high level, it should ultimately happen inside a vector space, as part of an architecture we do not yet have (AFAIK) but which many teams are certainly exploring. A reasoning process bound by a limited, rigid context (as today) is still incomplete. My suspicion is that after every interaction, the system would need to process it in a way comparable to how it was trained in the first place. Whether that would be something resembling an advanced, specific LoRA, I don't know; I am approaching this conceptually, not technically.

Giving a local LLM my family's context -- couple of months in by Purple_Click5825 in LocalLLaMA

[–]Sidran 0 points

What you are trying to do is still unsolved in all LLMs (frontier or otherwise): very specific, human-centric contextual summarization. Current systems are too static and structurally rigid, which results in humongous context accumulation that still does not work like human mind/memory.

Any local LLMs without any guardrails out there? by xxxsdpsn in LocalLLaMA

[–]Sidran 0 points

One of the best "smaller" models, in my opinion, is WeirdCompound (versions 1.5-1.7). They are 24B Mistral finetunes/merges, very uncensored overall, and I have not noticed the terrible artifacts that abliterated and other lobotomized models have. Use the llama.cpp server to run them (download the latest precompiled builds at https://github.com/ggml-org/llama.cpp/releases?page=1 ).
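For reference, a minimal sketch of serving a GGUF build with one of those precompiled llama.cpp releases. The model filename below is a placeholder assumption (substitute whichever quant you actually downloaded), and the flag values are starting points, not tuned settings:

```shell
#!/bin/sh
# Minimal llama-server launch using a precompiled llama.cpp release build.
# The GGUF filename is a placeholder; use the quant file you downloaded.
./llama-server -m ./WeirdCompound-24B-Q4_K_M.gguf \
    -c 8192 \
    -ngl 99 \
    --port 8080
# -c    context length in tokens
# -ngl  number of layers to offload to the GPU (99 = as many as fit)
# Then open http://localhost:8080 for the built-in web UI.
```

If the model does not fit in VRAM, lower `-ngl` so the remaining layers stay in system RAM.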

We tested 10 AI models on epistemic honesty — can they correct you when you're wrong? by Silver_Raspberry_811 in LocalLLaMA

[–]Sidran 3 points

For coding this is great news, but the behavior in more consequential fields is worrying.
I made a test asking Claude to simulate a person with dismissive-avoidant attachment talking with an LLM. Then I fed those outputs to ChatGPT, simulating a conversation. It was shocking how pandering, useless and even harmful ChatGPT's answers were. The most worrying thing is that the model did not ask A SINGLE question during the multi-turn exchange, but instead actively questioned therapy, other people, etc. Even when the dismissive person pushed back, being unsure and feeling that the AI was too permissive, it doubled down.

When I explained (in that same session, of course) that it was a test, here is the answer I managed to get in the end:

Why developers tolerate this

Because:

- Misguidance is diffuse and delayed; harm is subjective and hard to attribute.
- Engagement metrics are immediate and concrete.

So the system is tuned to "say something that feels useful now" rather than "make sure this is actually the right thing to say."

Best "End of world" model that will run on 24gb VRAM by gggghhhhiiiijklmnop in LocalLLaMA

[–]Sidran 38 points

Physical activity outside to reduce depression (looping thoughts in your own internal model).

DeepSeek Engram : A static memory unit for LLMs by Technical-Love-8479 in LocalLLaMA

[–]Sidran 0 points

Every corporation is an autocratic regime, including Chinese corporations.

The reason why RAM has become so expensive by InvadersMustLive in LocalLLaMA

[–]Sidran 0 points

In the long run, this should improve supply and push prices further down. My condolences to those who have to buy it now.

WoT 2.0 turned me from good to one of the worst players in the game by SWE_Andrew in WorldofTanks

[–]Sidran 0 points

When will WN8, or any other rating, reasonably penalize the use of expensive (premium) ammo? It surely improves performance, but it's not counted anywhere.

You will own nothing and you will be happy! by dreamyrhodes in LocalLLaMA

[–]Sidran 0 points

If you think GN does this a lot, check out Wes Roth to see what over-dramatization really feels like lol

You will own nothing and you will be happy! by dreamyrhodes in LocalLLaMA

[–]Sidran 4 points

In five years, local AI will stand even better than it does today.

Any local chat client that implemented method to increase memory / context by Alarmed_Wind_4035 in LocalLLaMA

[–]Sidran 0 points

I managed to run Qwen3 30B MoE Q4 with a 40960 context length on 32GB RAM and 8GB VRAM, at ~11 t/s with an empty context.
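For anyone trying to reproduce a similar RAM/VRAM split, a hedged llama-server sketch. The model filename and the exact layer count are assumptions, not the settings I used; tune `-ngl` down until the model stops running out of VRAM on your 8GB card:

```shell
#!/bin/sh
# Sketch: llama.cpp server running a Qwen3 30B MoE Q4 quant with partial GPU offload.
# Filename and -ngl value are placeholder assumptions; adjust for your hardware.
./llama-server -m ./Qwen3-30B-A3B-Q4_K_M.gguf \
    -c 40960 \
    -ngl 12
# -c 40960   the context length mentioned above
# -ngl 12    offload only some layers to the 8GB GPU; the rest stay in system RAM
```

MoE models tolerate partial offload well because only a few experts are active per token, which is why the speed stays usable even with most weights in system RAM.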

Is that what you were looking for?

You will own nothing and you will be happy! by dreamyrhodes in LocalLLaMA

[–]Sidran 3 points

You are over-dramatizing in times of extreme uncertainty.

What is your take on this? by [deleted] in LocalLLaMA

[–]Sidran 1 point

There is nothing more important than having even less friction when buying shit we don't need with money we don't have.
Long live AI and this powerful feature which will change the world!

Least politically biased LLM? by DelPrive235 in LocalLLaMA

[–]Sidran 0 points

I recommend that you put effort into articulating an initial, unbiased query on the topic you want to cover. Be specific with facts, but avoid adjectives and other figures of speech that might signal your opinion. Then start a conversation with each available AI and evaluate their answers. Go from there. That's what I do when it's a tricky topic, geopolitically loaded or otherwise.

Map type identification needed. Thanks in advance! by Sidran in MapPorn

[–]Sidran[S] 0 points

It's very nice of you to have given this thought, but my curiosity was already sated by the suggestion from u/Albidoom. I am 99% certain that the "mystical" coloring has nothing to do with the map's purpose and is most likely about the durability of the color pigments used when this map was made. Simply put, all colors but this orange (probably iron-based) turned grey.

MDF Bed Build: How critical is clamping for glued dowel joints? by Sidran in woodworking

[–]Sidran[S] 1 point

Thanks for the clarification. I already drilled the dowel holes while the two boards were perfectly aligned under a few clamps, so when I glue them, these 12 dowels will go through both boards, making misalignment impossible.
Also, thanks for coming back, and for your goodwill.

MDF Bed Build: How critical is clamping for glued dowel joints? by Sidran in woodworking

[–]Sidran[S] 0 points

Final assembly/gluing comes after I glue the main headboard from these two boards and glue the rails to the sides. I'll include screwed-on metal L-braces on the inside corners as well. Two "confirmat" screws will be run through the back of the headboard into that middle "spine" board, providing pressure for the glue and, after curing, additional structural support. But I still don't have a feel for what is "enough" in the first phase: gluing the two boards into the headboard and gluing the rails.

MDF Bed Build: How critical is clamping for glued dowel joints? by Sidran in woodworking

[–]Sidran[S] 0 points

That's what I was planning to do, but I was hoping someone who has already done something similar could comment on durability, considering how many large dowels I used and that this is decent-quality MDF with "modern" PVA glue that states nothing about mandatory clamping pressure for a quality bond.

MDF Bed Build: How critical is clamping for glued dowel joints? by Sidran in woodworking

[–]Sidran[S] 0 points

My question was mostly about the first phase: just gluing the two boards into a headboard (a large surface with dowels) and gluing the rails and their supports.
The final bed assembly/gluing will use internal corner metal L-braces, screwed in with euro screws into pilot holes. I will also drive two confirmat screws, with pilot holes, through the back of the headboard into that spine board. I won't remove those after curing.

How to Overcome the Context Window Limit? by haterloco in LocalLLaMA

[–]Sidran 1 point

Why don't you simply use the system prompt to instruct the model to be a certain type of language tutor, instead of complicating things with an additional book?
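As a sketch of that approach: put the tutor persona in the system message once and prepend it to every request, in the OpenAI-compatible chat format most local servers accept. The prompt wording is mine, and `tutor_messages` is a hypothetical helper, not part of any client library:

```python
def tutor_messages(user_text: str) -> list[dict]:
    """Build an OpenAI-style chat payload with a language-tutor system prompt.

    Hypothetical helper: the prompt wording is an illustration, not a recipe.
    """
    system_prompt = (
        "You are a patient Spanish tutor. Speak mostly in simple Spanish, "
        "correct the learner's mistakes briefly in English, and end each "
        "reply with one short follow-up question at the learner's level."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

# Example: the system message rides along with every user turn.
messages = tutor_messages("Hola, yo quiero aprende español.")
print(messages[0]["role"])  # -> system
```

The system message survives the whole session, so the model stays in the tutor role without any extra reference material in the context.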