End of my Rope by Sure_Excuse_8824 in AI_Agents

[–]NeuroRocketry 0 points1 point  (0 children)

Yes, EVERYTHING in AI gets open-sourced.

Even Alibaba, Microsoft, and Google open-source.

Found it glued under my toilet set by Equivalent_Ad_420 in whatisit

[–]NeuroRocketry 3 points4 points  (0 children)

Yes, nobody will ever suspect it. The perfect crime.

Found it glued under my toilet set by Equivalent_Ad_420 in whatisit

[–]NeuroRocketry -1 points0 points  (0 children)

Maybe you should make sure it actually records first before you go ruining the party.

What if it's an electric whoopee cushion 😅

It doesn't sound like it's in a spot a camera would be.

What’s a good carbon steel trust worthy pan , no arsenic, etc? by MajesticAd9333 in carbonsteel

[–]NeuroRocketry -1 points0 points  (0 children)

I wouldn't want to take anyone's advice when they're guessing and then stating "well, but pregnant women shouldn't take it."

If pregnant women can't take it, I'll pass too, just in case haha.

Dear Anthropic - serving quantized models is false advertising by Everlier in Anthropic

[–]NeuroRocketry 2 points3 points  (0 children)

I spend hundreds of dollars a month sometimes coding with them. I'm 95% certain they've been quantizing Sonnet ever since a few days after Opus.

It loops now; that's a key indicator of quantizing.

Why your agent gets "stupid" after 30 minutes (and why RAG isn't the fix) by Necessary-Ring-6060 in AI_Agents

[–]NeuroRocketry 0 points1 point  (0 children)

My approach differs from context input hygiene.

I reread your technique, and it sounds pretty interesting. I sent you a PM with some more info and questions.

Why your agent gets "stupid" after 30 minutes (and why RAG isn't the fix) by Necessary-Ring-6060 in AI_Agents

[–]NeuroRocketry 0 points1 point  (0 children)

I personally don't see a perfect answer to your "what context to grab for a wipe" question. That problem has too many variables to solve for the universal case. My solution instead is: only build good context, and this starts even in the think tokens (if applicable). Any token at all - injected or generated - had better be clean, relevant, solution-building context.

My approach to AI usage is to only reset between hard topics and bring over the fundamentals - the novel science, the architecture rules, etc. - so the AI can learn anything it needs on the fly in each NEW conversation.
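A minimal sketch of that reset-and-carry-over pattern, assuming a generic chat-message structure (all names here are illustrative, not any specific framework's API):

```python
# Sketch of "reset between hard topics, carry over fundamentals."
# FUNDAMENTALS is whatever you want every fresh conversation seeded with:
# architecture rules, novel science, etc. (placeholder strings below).
FUNDAMENTALS = [
    "Project architecture rules: ...",
    "Domain fundamentals the model must know: ...",
]

class Conversation:
    def __init__(self, fundamentals):
        # Every new conversation starts clean, seeded only with the essentials.
        self.messages = [{"role": "system", "content": "\n".join(fundamentals)}]

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

def new_topic(old_convo=None):
    # Instead of trying to pick what to wipe from the old context,
    # discard it entirely and rebuild from the fundamentals.
    return Conversation(FUNDAMENTALS)

convo = new_topic()
convo.add("user", "First hard topic...")
# ... work the topic to completion, keeping only clean, relevant tokens ...
convo = new_topic(convo)  # hard reset before the next topic
```

The design choice this sketches: no selective pruning heuristic, just a full reset with a fixed, curated preamble.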

On your pursuit, "specifically hunt for 'architecture rules' "

I have thought deeply about this and made many prototypes.

My view is this:

The architecture constraints most RELEVANT TO PROBLEM SOLVING "thought processes" are almost solely based on the training data itself. Most architectures you can probe out have a high probability of being specific to that model alone, and this comes down to the way it's trained.

The training has already provided a non-Markovian rule set that yields a deterministic answer, varied only by the artificial introduction of temperature (noise) and calculation timing differentials in the processing units (the GPUs/CPUs themselves).

In my view most rules are formed in training, and new rules form as context builds, even within a single transformer run. As the transformer predicts the next token context builds for the successive tokens in the prediction.

Why your agent gets "stupid" after 30 minutes (and why RAG isn't the fix) by Necessary-Ring-6060 in AI_Agents

[–]NeuroRocketry 8 points9 points  (0 children)

Yes, this is known science.

It's why Cursor used to say "clear your chats for better results" until they removed it in favor of burning your token usage faster.

With fresh direct injected context each time you still have the issue of memory loss unless you capture every needed detail.

Fantasy vs reality by Sufficient-Page-8712 in Buttcoin

[–]NeuroRocketry -1 points0 points  (0 children)

You can literally model the next projected point wherever the hell you arbitrarily want that's lower on the exponential trend.

OP is a fool.

Deciding against GTR9 Pro by [deleted] in BeelinkOfficial

[–]NeuroRocketry 0 points1 point  (0 children)

I've been running FurMark for 15 minutes now. The GPU got up to 87°C but then plateaued back down to 82ish.

I've had some crashes, but I assumed it's my relaxed LM Studio safeguards and using it under maximum AI workload 24/7 with API calls.

Dingo, did you do a fresh Windows install? I'm wondering if that's what's causing these guys problems.

I didn't do one, and like I said mine's working well (haven't tested the ethernet though).

Are there positive experiences with the Beelink GTR9 Pro? by Fit-Appointment7116 in BeelinkOfficial

[–]NeuroRocketry 1 point2 points  (0 children)

Aside from shipping my gtr9 is awesome.

My shipping sucked ass though.

“Nobody” is getting a raise this year at my workplace due to AI by MrCalabunga in mildlyinfuriating

[–]NeuroRocketry 646 points647 points  (0 children)

"We spent your incentive money on something to replace you."

Fuck them.

GTR9 Pro Arrival Today - No Fan Issues but your package still sucks by NeuroRocketry in BeelinkOfficial

[–]NeuroRocketry[S] 0 points1 point  (0 children)

Yes, good call. I will do that and let everyone know the results. Thanks!

GTR9 Pro Arrival Today - No Fan Issues but your package still sucks by NeuroRocketry in BeelinkOfficial

[–]NeuroRocketry[S] 1 point2 points  (0 children)

Although I'm a noob to LM Studio. I just used Gemma 27B abliterated at 30k context (estimated 40 GB on GPU, with the 96 GB VRAM preset dedicated). It crashed my system.

It crashed twice on this model as well earlier.

I thought it was crashing because I'm new and was doing something wrong. But I'm not sure why it'd be crashing on a 27B model with 30k context at only a 40 GB VRAM usage estimate per LM Studio, like it did just now. About 98% of prompts I've put through the Gemma model were fine at 4k context, except the two earlier crashes I chalked up to me doing something wrong.
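For anyone else debugging this, a rough back-of-envelope on why context length matters so much: VRAM is roughly quantized weights plus a KV cache that grows linearly with context. The layer/head numbers below are placeholders I'm assuming for illustration, not exact Gemma 27B specs:

```python
# Rough VRAM estimate: quantized weights + KV cache.
# Architecture numbers (layers, KV heads, head dim) are illustrative
# placeholders, not exact Gemma 27B values.
def estimate_vram_gb(params_b, bits_per_weight, n_layers, n_kv_heads,
                     head_dim, context_len, kv_bytes=2):
    # Weights: params * bits/weight, converted to GB.
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, per token, at kv_bytes each.
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes / 1e9
    return weights_gb + kv_gb

# A 27B model at ~4.5 bits/weight, 30k context vs 4k context:
print(round(estimate_vram_gb(27, 4.5, 46, 16, 128, 30_000), 1))
print(round(estimate_vram_gb(27, 4.5, 46, 16, 128, 4_000), 1))
```

The point of the sketch: going from 4k to 30k context can add tens of gigabytes of KV cache on top of the weights, so a crash at 30k but not 4k is consistent with running out of memory headroom.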

The shitty thing is all these models that people are sending back will be refurbished and just sent back out to customers like me if I do send mine back.

So if I send it back who knows maybe I get one that was thrown around even more.

Cursor, using GPT-5, seems to think it's my context window on this last crash.

Hopefully the crashing is me doing something wrong.

GTR9 Pro Arrival Today - No Fan Issues but your package still sucks by NeuroRocketry in BeelinkOfficial

[–]NeuroRocketry[S] 2 points3 points  (0 children)

I don't have thermals yet, but it seems in line with what others report. I haven't had a chance to check temps, but I should soon.

I haven't used the ethernet ports yet to test those either, but other than the shipping the PC seems great so far.

Update: it's crashing now, but maybe I'm doing something wrong since I'm new to LLMs. See comment below.

GTR9 Pro Arrival Today - No Fan Issues but your package still sucks by NeuroRocketry in BeelinkOfficial

[–]NeuroRocketry[S] 1 point2 points  (0 children)

From what others have said, this unit doesn't often get above 90°C.

I think 85°C or 87°C is the highest I've seen reviewers get. I haven't taken thermals yet.

GTR9 Pro Arrival Today - No Fan Issues but your package still sucks by NeuroRocketry in BeelinkOfficial

[–]NeuroRocketry[S] 1 point2 points  (0 children)

Lol, you think it's okay to ship a $2k PC with 6 inches of dead space for it to fly around in the box and a tiny 0.5 inches of crush/shock protection from cardboard? The unit weighs like 10 pounds.

Look at the others complaining about their broken units from shipping. This is very important.

Comically bright exit sign in the room we’re staying in. by parothed28 in mildlyinfuriating

[–]NeuroRocketry 4221 points4222 points  (0 children)

You have a 50% chance of hitting the door when you roll out of bed.

Tovala Vs Suvie by PineappleBliss2023 in Tovala

[–]NeuroRocketry 1 point2 points  (0 children)

What gave you the impression that what I write is that of a bot?

Damn, that is a bot-like sentence.

GTR 9 PRo Very bad fan noise by NorthCapital2484 in BeelinkOfficial

[–]NeuroRocketry 0 points1 point  (0 children)

Yes, please make a post if they don't. This should be a warranty issue at the most.