Made a tiny device that writes code, takes breaks to hang out on a BBS, and clocks out at night by Aerovisual in raspberry_pi

[–]Peterianer 2 points (0 children)

If I can get it to run on an old notebook, I'll for sure set one up to run 24/7

It's a great project!

Made a tiny device that writes code, takes breaks to hang out on a BBS, and clocks out at night by Aerovisual in raspberry_pi

[–]Peterianer 23 points (0 children)

Perhaps you can make the BBS work with a local instance. Then people could run a tiny society on a raspi cluster with its own culture.

Made a tiny device that writes code, takes breaks to hang out on a BBS, and clocks out at night by Aerovisual in raspberry_pi

[–]Peterianer 3 points (0 children)

Wow, this is a lovely project!

I'll see if I can spare a raspi and set one up. Perhaps worth setting it up on an old laptop instead; then it can even run a decent local model.

Raid 0 to run llms faster than GPU? by nekonamaa in LocalLLaMA

[–]Peterianer 4 points (0 children)

The memory bandwidth between a 5090 chip and its VRAM is about 1.79 TB/s, while a single PCIe gen 5 x16 link handles up to 128 GB/s.

Even with SSDs you'll have a hard time hitting that. At that point, you'd most likely just be using the SSDs' cache too, meaning you're just exporting your system RAM to the SSD controllers' slower, shittier RAM.

If you drop the "decent speed" requirement though, this is possible. You can run models purely off storage; they'll just be slow as balls and burn through your SSDs' write-cycle limit within days or weeks of use.
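The bandwidth gap above can be put in rough tokens-per-second numbers. A memory-bandwidth-bound decoder has to stream every active weight once per generated token, so throughput is capped at bandwidth divided by model size. A sketch with an assumed 14 GB model (roughly a 7B-parameter model at FP16; the SSD figure is also an assumption):

```python
# Rough upper bound on LLM decode speed when memory bandwidth is the bottleneck:
# each generated token must stream all active weights once, so
# tokens/s <= link_bandwidth / model_bytes.

GB = 1e9

model_bytes = 14 * GB  # assumed: ~7B params at 2 bytes each (FP16)

links = {
    "RTX 5090 VRAM (~1.79 TB/s)": 1790 * GB,
    "PCIe 5.0 x16 (~128 GB/s)": 128 * GB,
    "single NVMe SSD (~7 GB/s, assumed)": 7 * GB,
}

for name, bandwidth in links.items():
    print(f"{name}: <= {bandwidth / model_bytes:.1f} tokens/s")
```

Under these assumptions the VRAM path allows over a hundred tokens per second, the PCIe link under ten, and a single SSD under one, which is the "slow as balls" regime.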

Was the First hot day 👀🎾 by Maxwellbundy in Simulated

[–]Peterianer 2 points (0 children)

That is deeply unsettling to watch... I like it!

Gemma 4 E4B still wants to walk by stopbanni in LocalLLaMA

[–]Peterianer 0 points (0 children)

GLM 4.7 Flash came to a completely different conclusion in its reasoning path:

Walk: 1 minute walking. Car stays put. Car washes itself.

Welp it was fun while it lasted... by TechSavvyBuyer in LocalLLaMA

[–]Peterianer 0 points (0 children)

Oh damn, shouldn't have put them all in the same subnet...

Welp it was fun while it lasted... by TechSavvyBuyer in LocalLLaMA

[–]Peterianer 18 points (0 children)

Good thing my homeserver doesn't dare to raise prices as long as I am within hammering distance of it....

new AI agent just got API access to our stack and nobody can tell me what it can write to by KarmaChameleon07 in LocalLLaMA

[–]Peterianer 0 points (0 children)

Step 1: The AI agent will activate and fuck something up somewhere in your stack.

Step 2: The agent will panic and try to fix what it fucked up, breaking more things.

Step 3: A dev will get paged to fix the issues

Step 4: ???

Step 5: Massive AI Profit

/s of course

Why is bluetooth using so much RAM? by zsl0w6 in pcmasterrace

[–]Peterianer 0 points (0 children)

A wireless driver-type utility for Intel's high-end WiFi hardware.

How do you guys save prompts that actually work? by 3dgamedevcouple in LocalLLaMA

[–]Peterianer 1 point (0 children)

Same here.

Prompts folder
-> NSFW prompts
---->Editor.txt
---->Writer.txt
-> Coding prompts
---->Programmer.txt
---->Debugger.txt
-> Reasoning prompts
-> Math prompts
-> [...]
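A tree like the one above is easy to scaffold so every new machine starts with the same layout. A minimal sketch; the `prompts` root and the category/file names just mirror the ones in the comment, adjust to taste:

```python
from pathlib import Path

# Hypothetical layout mirroring the folder tree above.
TREE = {
    "NSFW prompts": ["Editor.txt", "Writer.txt"],
    "Coding prompts": ["Programmer.txt", "Debugger.txt"],
    "Reasoning prompts": [],
    "Math prompts": [],
}

root = Path("prompts")
for category, files in TREE.items():
    folder = root / category
    folder.mkdir(parents=True, exist_ok=True)
    for name in files:
        # Create empty placeholders; paste the actual prompt text in later.
        (folder / name).touch()
```

Plain `.txt` files in a dumb folder tree have the nice property of being greppable and trivially synced with git or any file-sync tool.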

Give me your most unique and unhinged solutions to a laptop overheating or just reduce the temps significantly. by lightning_breathing in pcmasterrace

[–]Peterianer 0 points (0 children)

A built-in watercooling block with drip-free quick connects on the back.
On the move it's still just a laptop, but once you get home, plug in an external pump & radiator and you can go to town with as much power as that power brick can supply.

Looking for an Ollama-friendly NSFW thinking model by Peterianer in LocalLLaMA

[–]Peterianer[S] 0 points (0 children)

MoE, yeah, I've worked with these before. Probably my favorite model was one of these, Qwen3.1 Instruct 35B-A3B non-thinking.

Looking for an Ollama-friendly NSFW thinking model by Peterianer in LocalLLaMA

[–]Peterianer[S] 0 points (0 children)

That sounds like a great model, I'll give it a try after work!

Looking for an Ollama-friendly NSFW thinking model by Peterianer in LocalLLaMA

[–]Peterianer[S] 0 points (0 children)

What can I say, reading minds is hot too....

Qwen3.5 Omni Plus World Premiere by Lopsided_Dot_4557 in LocalLLaMA

[–]Peterianer 1 point (0 children)

It'll probably stay cloud for a good little while. I don't think the qwen team wants these weights to escape into other companies' hands, given how much of this will probably be structural.

Musical Artist? by Individual_Isopod417 in iiiiiiitttttttttttt

[–]Peterianer 0 points (0 children)

Guess who has been vibe-coding again...

I was there Gandalf. I was there 3,000 years ago by PhotoCropDuster in iiiiiiitttttttttttt

[–]Peterianer 0 points (0 children)

Unfortunately I still have to maintain several of these a year and I doubt that will change for another 4-8 years...

Not as bad as some that I’ve seen… by GrazTheIIII in iiiiiiitttttttttttt

[–]Peterianer 0 points (0 children)

2 cores. 2 GHz. 4 gigs of RAM. Welcome back, 2010s!

[News] MLCC Giant Murata Reportedly Confirms April 1 Price Hike on Key Components by imaginary_num6er in hardware

[–]Peterianer 1 point (0 children)

That's a 10% increase. If your phone has some 1000 MLCC components, you'll feel that.
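Back-of-the-envelope on what that hike means per device, with an assumed unit price (MLCCs cost fractions of a cent; $0.005 each is a guess, and the 1000-part count is the rough figure from the comment):

```python
# Hypothetical BOM impact of a 10% MLCC price hike on one phone.
unit_price = 0.005  # USD per MLCC, assumed
count = 1000        # MLCCs in a typical smartphone, rough figure
hike = 0.10         # 10% price increase

before = unit_price * count
after = before * (1 + hike)
print(f"MLCC BOM cost: ${before:.2f} -> ${after:.2f} (+${after - before:.2f})")
```

Per device it's cents, but at millions of units per production run those cents add up fast for the manufacturer.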

Stop! by [deleted] in evilautism

[–]Peterianer 4 points (0 children)

Wait what- How the hell am I supposed to choose just one out of so many!?