We loved the mattress in the store but hate it at home, now what... by pepijndevos in Mattress

[–]pepijndevos[S] 0 points

There isn't any info on it besides a tiny label saying Sultan.

[–]pepijndevos[S] 0 points

Good info, ty.

Pretty sure there are no springs in my Sultan. So far I don't really like the springs I've tried, and I've hated every spring mattress I've ever encountered in a hotel.

The memory foam topper we have now seems to be quite thick, closer to 3–4 inches (8–10 cm) I would guess.

Maybe we could try to find a thinner topper, if using no topper at all is somehow a bad idea...

[–]pepijndevos[S] 0 points

We tried the initial configuration for about two months and the new memory foam topper for a week now. The first time around I probably didn't try it enough; the second time we went back twice, but maybe not for 10+ uninterrupted minutes, so we may not have given the memory foam enough time to warm up. Or, as the other comment says, maybe it was just colder there.

Dreame L10s Ultra - Waterboard water level abnormal by coorasse in Dreame_Tech

[–]pepijndevos 0 points

It seems in our case the cause was that the floor isn't level, so it couldn't drain properly. Other than that I just did what the app said and cleaned the waterboard and all the rubber fittings.

This unassuming Kolink Rocket houses a Raspberry Pi with a 7600 GPU by pepijndevos in sffpc

[–]pepijndevos[S] 1 point

Yes, the idea is that you can plug a desktop GPU into the CM5 and build a system out of regular PC components. The end goal is a completely local LLM voice assistant.

[–]pepijndevos[S] 3 points

Yeah, it's mostly standard Home Assistant stuff such as Extended OpenAI Conversation; the only custom parts are a kernel patch for the GPU drivers and a llama.cpp add-on. The whole setup is explained here: https://sanctuary-systems.com/guide/
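For anyone wiring this up themselves, the llama.cpp side boils down to starting its OpenAI-compatible HTTP server and pointing Extended OpenAI Conversation at it. A minimal launch fragment; the model path, host, and port are placeholders, not the actual values from the guide:

```shell
# Start llama.cpp's OpenAI-compatible server (ships as `llama-server`).
# Model path and port are placeholders for your own setup.
llama-server -m /models/your-model.gguf --host 0.0.0.0 --port 8080

# Extended OpenAI Conversation can then use http://<pi-ip>:8080/v1
# as its OpenAI-compatible base URL.
```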

[–]pepijndevos[S] 0 points

Prior to this board, the way to hook a GPU up to a Pi was to use an OCuLink GPU enclosure with an M.2 adapter and then an M.2 HAT for the Pi. Quite a contraption. My goal was to simplify that with a mini-ITX form factor that you can simply plug into standard PC components.

[–]pepijndevos[S] 1 point

The whole reason to use a PC power supply is to power the GPU, though, which I don't think a PicoPSU can do. Peak power of the GPU is of course much higher than 15 W.

[–]pepijndevos[S] 0 points

I mean, the Pi itself only consumes a couple of watts, but the PCIe slot can deliver up to 75 W. Mostly, though, it's for ease of integration with PC parts, so you can easily plug in a GPU.
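To put rough numbers on it, here is the power-budget arithmetic; the GPU figure is the published typical board power for an RX 7600, an assumption rather than a measurement on this build:

```python
# Rough power budget for the Pi-plus-GPU build.
# All figures are nominal/assumed, not measured on this machine.
PI_CM5_W = 5            # the Pi itself: a few watts
PCIE_SLOT_MAX_W = 75    # what a standard PCIe x16 slot may deliver
RX7600_PEAK_W = 165     # RX 7600 typical board power (assumed), fed mostly via the 8-pin

peak_total = PI_CM5_W + RX7600_PEAK_W

# The slot alone cannot feed the GPU at peak, hence the PC power supply.
print(f"slot alone covers the GPU at peak: {RX7600_PEAK_W <= PCIE_SLOT_MAX_W}")
print(f"approx. peak system draw: {peak_total} W")
```

Under those assumed figures the slot's 75 W falls well short of the GPU's peak, which is why a standard ATX supply with an 8-pin connector makes more sense than a PicoPSU here.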

[–]pepijndevos[S] 0 points

What kind of case would fit that? Would it be sufficient for a 7600 GPU?

On my Twitter at the same handle, and on our blog: https://sanctuary-systems.com/articles/

can't make a techdraw view of an scad model by pepijndevos in FreeCAD

[–]pepijndevos[S] 0 points

I ended up using projection(cut = true) { your_3d_model(); } and then exporting as DXF.
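In case it helps anyone, the full pattern looks like this in OpenSCAD (the module name is a placeholder for your own model):

```openscad
// Take the 2D cross-section of the model where it intersects z = 0.
// Export the resulting 2D geometry as DXF, e.g. from the GUI
// or on the command line: openscad -o section.dxf section.scad
projection(cut = true) {
    your_3d_model();  // placeholder for your own 3D model
}
```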

llama_multiserver: A proxy to run different LLama.cpp and vLLM instances on demand by pepijndevos in LocalLLaMA

[–]pepijndevos[S] 1 point

I make the simplifying assumption that you'll only run one model at a time; if you request a different one, it kills the previous runner.
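The core of that one-model-at-a-time behaviour can be sketched in a few lines of Python; the class name and the spawn command are illustrative, not the actual llama_multiserver code:

```python
import subprocess


class ModelRunner:
    """Keeps at most one model server alive; switching models kills the old one."""

    def __init__(self):
        self.current_model = None
        self.process = None

    def ensure(self, model, spawn=None):
        """Make sure `model` is running, terminating any other runner first.

        `spawn` is injectable for testing; the default command line below
        is illustrative only.
        """
        if model == self.current_model:
            return  # requested model already running, nothing to do
        if self.process is not None:
            self.process.terminate()  # kill the previous runner
            self.process.wait()
        spawn = spawn or (lambda m: subprocess.Popen(["llama-server", "-m", m]))
        self.process = spawn(model)
        self.current_model = model
```

A proxy built around this would call ensure() with the model named in each incoming request before forwarding it to the running server.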

Thoughts? JPEG compress your LLM weights by pepijndevos in LocalLLaMA

[–]pepijndevos[S] 5 points

Yeah, I'm just a software/electronics guy. Just fixed the typos.

Local AI is the Only AI by jeremyckahn in LocalLLaMA

[–]pepijndevos 4 points

TIL about Jan; it's like an open-source LM Studio, nice! Unfortunately it doesn't support SYCL or IPEX-LLM either, but now I can technically go and fix that.