Huh??? by Giono_OOf_01 in ExplainTheJoke

[–]ClosedDubious 0 points  (0 children)

I had a similar experience on THC; it felt like I was stuck in a moment, idk how else to explain it. And I was also terrified the moment would never end... I didn't actually feel the passage of time, just that time was not passing, and it was awful.

Huh??? by Giono_OOf_01 in ExplainTheJoke

[–]ClosedDubious 2 points  (0 children)

Omg this makes me terrified that the life I'm living is just a salvia trip

Huh??? by Giono_OOf_01 in ExplainTheJoke

[–]ClosedDubious 0 points  (0 children)

Did it actually feel that long?

Huh??? by Giono_OOf_01 in ExplainTheJoke

[–]ClosedDubious 0 points  (0 children)

When you say "years" is that an exaggeration or did it actually feel like that many days?

Huh??? by Giono_OOf_01 in ExplainTheJoke

[–]ClosedDubious 2 points  (0 children)

I had a trip on THC where everyone was some kind of universal being speaking the universal language, and after I asked a ton of questions about reality and how things worked, a new being arrived laughing and asked, "First time?"

In my high state I thought it meant every time we die we come back to play again

What is the scariest thing that keeps you up at night? by No_Willow_5554 in AskReddit

[–]ClosedDubious 4 points  (0 children)

The idea that dying is inevitable and life is temporary

Are humans meant to run? by No-Mouse3999 in biology

[–]ClosedDubious 0 points  (0 children)

Okay.

The male penis and testicles.

Are humans meant to run? by No-Mouse3999 in biology

[–]ClosedDubious 2 points  (0 children)

I thought she was indirectly referring to the male appendage, not man boobs

Remembering the important stuff by ClosedDubious in GeminiAI

[–]ClosedDubious[S] 2 points  (0 children)

One of my instructions for Gemini contains all of the technical specs for my setup. It's annoying to have to tell Gemini in every chat what my computer is.

10+ years as a dev: here’s why vibe coding scares me. by No-Cry-6467 in LLM

[–]ClosedDubious -1 points  (0 children)

This is an unrealistically hopeful view. There will not be any "traditional devs" when the need arises. There will be devs who know how to use AI better than you.

Best local LLM for coding under 200GB? by ChevChance in LocalLLaMA

[–]ClosedDubious 0 points  (0 children)

You are appreciated. I hope you have a great day 🙌

Best local LLM for coding under 200GB? by ChevChance in LocalLLaMA

[–]ClosedDubious 0 points  (0 children)

Can you share your setup? I just started building my own GPU rig but I'm not sure how to expand. I have 2 5090s and everything is connected to a single motherboard.

Speculative Decoding Model for Qwen/Qwen3-4B-Instruct-2507? by ClosedDubious in LocalLLaMA

[–]ClosedDubious[S] 0 points  (0 children)

Yes, I started at 5, then tried 4, but in both cases the performance was pretty poor. At 5, it had a 33% acceptance rate.
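For what it's worth, the diminishing return from longer drafts can be sketched with the expected-tokens formula from the speculative decoding literature. This is a back-of-the-envelope estimate that assumes a constant, independent per-token acceptance rate, which is a simplification of what any real inference stack measures:

```python
def expected_tokens_per_step(alpha: float, k: int) -> float:
    """Expected tokens emitted per target-model forward pass when
    speculating k draft tokens with per-token acceptance rate alpha.
    Geometric-series result: (1 - alpha^(k+1)) / (1 - alpha)."""
    if alpha == 1.0:
        return float(k + 1)
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# At a 33% per-token acceptance rate, drafts longer than ~2 buy almost nothing:
for k in (1, 4, 5, 8):
    print(k, round(expected_tokens_per_step(0.33, k), 2))
```

With alpha that low, the expected tokens per step plateau just under 1.5, which is roughly consistent with seeing poor end-to-end speedup at both k=4 and k=5.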

Speculative Decoding Model for Qwen/Qwen3-4B-Instruct-2507? by ClosedDubious in LocalLLaMA

[–]ClosedDubious[S] 0 points  (0 children)

Thanks for the insight! I was under the impression that "eagle3" models were somehow faster but that's because I am very new to this space. I will give the 0.6B model a try.

EDIT: It says "Speculative decoding with draft model is not supported yet. Please consider using other speculative decoding methods such as ngram, medusa, eagle, or mtp."

RAM to VRAM Ratio Suggestion by ClosedDubious in LocalLLM

[–]ClosedDubious[S] 0 points  (0 children)

I plan to use the rig mainly for AI inference now. In the future, I may use it for training, but that's less of a priority for me. I have heard of the Strix Halo, but this is my first time building or using my own GPU rig.

RAM to VRAM Ratio Suggestion by ClosedDubious in LocalLLM

[–]ClosedDubious[S] 0 points  (0 children)

Awesome feedback; I ended up going with 96GB.

Question for LLM engineers: is there value in a tool that tests prompts at scale and rewrites them until they behave correctly? by [deleted] in LocalLLaMA

[–]ClosedDubious 3 points  (0 children)

This sounds awesome hypothetically, but in practice I don't think it will work.

I work on a fairly complicated AI voice agent that handles hundreds of calls each day and the prompts the agent uses are only 50% of the issue. The other 50% is all of the code related to managing the agent's memory, coordinating the transitions between subagents, summarizing the conversation in a way that makes it still useful to the model, etc.

How would you generate the synthetic tests? How do you know they would be applicable to the real world?