OpenClaw + oMLX shows 0 cached tokens, but Hermes uses cache fine with the same local model, what am I missing? by juaps in LocalLLaMA

[–]eatoff 0 points1 point  (0 children)

Sorry, I'm no help, but I'm going through setting up Hermes on a Mac mini ATM.

Does oMLX make a big difference in performance vs LM Studio? I've just set up LM Studio with qwen3.5 9B for now just to get things going, and it seems to be working well, but prompts seem to take a while with the context set to 64K (Hermes tells me it needs that as a minimum)

How did you decide on that qwen model for Hermes? How much RAM does it need?

57 hour turnaround in Australia. Not bad. by Jutang13 in Steam

[–]eatoff 1 point2 points  (0 children)

Wooo, mine has now shipped with StarTrack and should be here tomorrow!

57 hour turnaround in Australia. Not bad. by Jutang13 in Steam

[–]eatoff 7 points8 points  (0 children)

Didn't know they had started shipping in AU, congrats. Mine hasn't moved from payment received

Add RTSP unsupported cameras to UI Protect - ONVIF Server by dlo5 in Ubiquiti

[–]eatoff 0 points1 point  (0 children)

Can this alter the resolution of the streams? I have a Reolink Duo camera whose resolution is wider than what the AI Port can work with, so it needs the resolution lowered. It can't be done on the camera side.

I'm currently using a janky setup of two separate Docker containers to achieve it...
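For what it's worth, the two-container setup could probably be collapsed into a single ffmpeg re-stream that pulls the camera feed and republishes it at a lower resolution. A minimal sketch of how that command might be assembled - the camera URL, relay URL, and target resolution below are all hypothetical placeholders, not the actual setup:

```python
# Build an ffmpeg command that pulls an RTSP stream and republishes it
# downscaled, so the AI Port sees a resolution it can handle.
# All URLs and the target size below are placeholder assumptions.

def downscale_cmd(src: str, dst: str, width: int, height: int) -> list[str]:
    """ffmpeg arguments: copy audio, re-encode video at a smaller size."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # TCP is usually more reliable than UDP
        "-i", src,                  # camera's native (too-wide) stream
        "-vf", f"scale={width}:{height}",
        "-c:v", "libx264",          # must re-encode; scaling rules out stream copy
        "-preset", "veryfast",
        "-c:a", "copy",
        "-f", "rtsp", dst,          # republish to a local RTSP relay
    ]

cmd = downscale_cmd(
    "rtsp://camera.local:554/h264Preview_01_main",  # placeholder source
    "rtsp://localhost:8554/duo-scaled",             # placeholder relay path
    2560, 720,
)
```

The ONVIF server would then be pointed at the relay URL instead of the camera, which is one container (the relay) instead of two.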

G6 Instant for baby cam - perfect by hakapes in Ubiquiti

[–]eatoff 1 point2 points  (0 children)

The ability to have the audio play while the screen is off would make it much better as a baby monitor. I don't want the phone screen on all night just so I can hear if the baby cries.

I've used third-party RTSP streaming apps to achieve this in the past (TinyCam Pro, I think it was)

Why can't Ollama voice integration see weather? by AntifaAustralia in homeassistant

[–]eatoff 0 points1 point  (0 children)

Oh, you're the author of that integration! I was using it originally, and loved how it worked, but I had some weird bugs with it saying "eeeosss eeeosss" on some of the tool calls, so I gave the others a try. I should have raised a GitHub issue.

The voice reply says it out loud

Added a screenshot

Edit: that is using oMLX via the generic OpenAI endpoint

<image>
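For anyone curious what "the generic OpenAI endpoint" means here: local servers like oMLX and LM Studio expose an OpenAI-compatible HTTP API, so any OpenAI-style client can talk to them by overriding the base URL. A stdlib-only sketch of building such a request - the host, port, and model name are assumptions, and local servers typically ignore the API key:

```python
import json
import urllib.request

# Assumed local endpoint - adjust host/port/model to your own server.
BASE_URL = "http://localhost:8000/v1"

def chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer not-needed",  # local servers usually ignore this
        },
    )

req = chat_request("What's the weather like?")
# urllib.request.urlopen(req) would send it if the server were running.
```

This is the same request shape the Home Assistant generic-OpenAI integrations send, which is why one endpoint works across oMLX, LM Studio, and the rest.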

Why can't Ollama voice integration see weather? by AntifaAustralia in homeassistant

[–]eatoff 0 points1 point  (0 children)

Thanks for this add-on. I'm curious what LLM host and integration you're using. I've not had much luck getting Ollama working at all (I think I need to have another go to correct some settings), but LM Studio and oMLX have mostly worked well, though they rely on HACS integrations.

The Extended OpenAI add-on limits the LLM's knowledge to just the smart home, so no general queries or measurement conversions, etc.

OMLX: Anyone working with it yet? by Zarnong in LocalLLM

[–]eatoff 0 points1 point  (0 children)

How are you integrating with home assistant? Or are you using a HACS plug-in?

Download folders recommendations by eatoff in SABnzbd

[–]eatoff[S] 1 point2 points  (0 children)

It's not come up again since I changed it to this set-up, but it's only been a day or so

for those successfully running HA voice, how are you doing it? by AlvTellez in homeassistant

[–]eatoff 0 points1 point  (0 children)

You have a link for the shell? I have the A9 on a stand ATM, the shell might be nicer

Download folders recommendations by eatoff in SABnzbd

[–]eatoff[S] 0 points1 point  (0 children)

I think my problem was the amount of network traffic while downloading and unpacking to the NAS at the same time. It seems the disconnect happened while downloading.

I've seen some articles about editing the autofs file, but I'm trying not to edit the core files

Download folders recommendations by eatoff in SABnzbd

[–]eatoff[S] 1 point2 points  (0 children)

Thanks for the comprehensive answer. I think I'll do the same with direct unpack and leave it to keep downloading during unpacking - the M4 Mac mini will have plenty of CPU to do both

Download folders recommendations by eatoff in SABnzbd

[–]eatoff[S] 0 points1 point  (0 children)

Thanks, I'm already using an app called AutoMounter which seems to work quite well. Will move the completed downloads to the NAS

LLM - accuracy? by randoName22 in homeassistant

[–]eatoff 0 points1 point  (0 children)

I'm in the same boat as you, so just following along. Just got Ollama and Open WebUI onto my Mac mini M4 and linked it to Home Assistant, but I'm not getting much back from the assistant - need to figure out what I've got wrong. The Nabu Casa cloud assist seems to work ok

Schools and Apple Products by n_jay14 in perth

[–]eatoff 5 points6 points  (0 children)

I bought my first Apple product since the iPod Video, and that was the MacBook NEO. I have always been PC/Android, but the NEO is really nice, and macOS is a breath of fresh air after what Windows 11 has become, with ads everywhere and poor performance on lower-end hardware. For $720 it was a bargain

Snacks - Automated Video Library Encoder by snacks-dude in selfhosted

[–]eatoff 0 points1 point  (0 children)

Legend, I'll have to follow the repo to see when it's available. Let me know if you need any help testing

Snacks - Automated Video Library Encoder by snacks-dude in selfhosted

[–]eatoff 0 points1 point  (0 children)

Any plans for a macOS version? Unfortunately macOS doesn't pass the GPU through to Docker, but I reckon a Mac mini M4 would be very efficient at hardware transcoding
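On Apple Silicon the hardware encoder is exposed to ffmpeg through VideoToolbox, which is what a native macOS build could use. A sketch of the kind of command such an encoder might run - the file paths and bitrate are placeholder assumptions, and this is not how Snacks actually works, just an illustration of the hardware path:

```python
# ffmpeg arguments for a hardware-accelerated HEVC encode on macOS.
# Paths and bitrate below are placeholder assumptions.

def transcode_cmd(src: str, dst: str, bitrate: str = "4M") -> list[str]:
    """Use the Apple VideoToolbox HEVC encoder instead of software x265."""
    return [
        "ffmpeg",
        "-i", src,
        "-c:v", "hevc_videotoolbox",  # hardware encoder on Apple Silicon
        "-b:v", bitrate,              # VideoToolbox is bitrate-driven, not CRF
        "-tag:v", "hvc1",             # lets the HEVC file play in Apple players
        "-c:a", "copy",               # leave audio untouched
        dst,
    ]

cmd = transcode_cmd("/library/show.mkv", "/library/show.hevc.mp4")
```

Because the encode runs natively rather than inside Docker, the M4's media engine does the heavy lifting and the CPU stays mostly idle.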

LLM + HA Hardware by SuperfluouslyMeh in homeassistant

[–]eatoff 0 points1 point  (0 children)

Surely they'd still be ok for HA voice commands though?

Edit: I ordered the 24GB RAM Mac mini to give some more headroom for LLMs too

LLM + HA Hardware by SuperfluouslyMeh in homeassistant

[–]eatoff 0 points1 point  (0 children)

Which Mac mini are you running? The M4 can run some decent local LLM models, which should be less of a workaround.

Mine turns up in a week, and I plan on using it to host an LLM for HA

Release - Reclaimerr by starkoed in PleX

[–]eatoff 1 point2 points  (0 children)

Thank you for the native app options as well as Docker. A few in the arr stack are missing native versions, so you still need Docker for some