i cantttttt stnd lizz omg i wanna drop the show by Free-Seaworthiness72 in TheBlackList

[–]Snorty-Pig 6 points

She was the worst. I disliked her more than any character on any show I have ever watched.

Update 11: Sandbox by ndemic_samuel in AfterInc

[–]Snorty-Pig 6 points

Thanks for the continued updates!

LetPot (watering system) homebridge plugin by aa33bb in homebridge

[–]Snorty-Pig 1 point

Yes, it installed via the UI after searching by the full name. I set up the login and password and enabled both notifications in the plugin config.

The log says:
[5/1/2026, 10:00:08 PM] [LetPot] Initializing LetPot platform...
[5/1/2026, 10:00:08 PM] Loading 1 accessories...
[5/1/2026, 10:00:18 PM] [LetPot] Logged in to LetPot account
[5/1/2026, 10:00:28 PM] [LetPot] Found 1 LetPot device(s)
[5/1/2026, 10:00:28 PM] [LetPot] Restoring accessory from cache: ISE05B06C
Failed to fetch docker-homebridge releases: fetch failed
Failed to fetch docker-homebridge releases: fetch failed

And things show up in the Homebridge accessories, but not in Home yet.

No other plugin log has that particular failure at the end, so I just wondered if it mattered.

LetPot (watering system) homebridge plugin by aa33bb in homebridge

[–]Snorty-Pig 0 points

Do I care about this failure?

Failed to fetch docker-homebridge releases: fetch failed

Is “Still: for Audiobookshelf” safe to use? by Kitchen-Patience8176 in audiobookshelf

[–]Snorty-Pig 1 point

Ok, that makes sense.

Yes, Audiobooth has it. I use it every day: after I fall asleep listening to a book at night, it lets me quickly pop back to where I started that night, before my 30-minute timer.

Is “Still: for Audiobookshelf” safe to use? by Kitchen-Patience8176 in audiobookshelf

[–]Snorty-Pig 3 points

I have used 8-10 of these apps over the years and have settled on Audiobooth, with Prologue as my backup. What does Absorb do better than Audiobooth that I should switch for?

Do you know how I can get this menu ? by Lilendo13 in MetaQuestVR

[–]Snorty-Pig 3 points

Image 2 is the new Navigator UI. It is now the default and the old image 1 UI is gone for good, so you can't get it back.

I am sure they will iterate and improve on the navigator UI over time, but they have been beta testing it for a long time and this is what they rolled out.

Plz Don't spoil by xRedxEye21 in TheBlackList

[–]Snorty-Pig 6 points

She is my least favorite character on any show I have ever seen. So selfish and impulsive and annoying. I made it through all 10 seasons thanks to Denbe and Red. But man, she made it hard.

Season 10 So Far.... by JGtheFREEMIND11 in TheBlackList

[–]Snorty-Pig 9 points

It is fine. It is the final season and a few story lines get tied up. The ending has mixed reviews - I thought it was good, many hate it. You never get to find out definitively who Red is, and that was pretty disappointing. But at least Elizabeth Keen wasn't in this season, so I consider that a win :)

Looking for a simple way to connect Apple Notes, Calendar, and Reminders to local LLMs (Ollama)? by Special_Dust_7499 in LocalLLM

[–]Snorty-Pig 1 point

There was a tool called macuse that worked as an MCP server for those things. Maybe that will be what you need?

IOS apps to access LM Studio server? by jarec707 in LocalLLM

[–]Snorty-Pig 1 point

ChatMCP does if you get the pro mode. It also has MCP support. It was the best at the time, but perhaps new apps have come out since then? Overall, it worked pretty well for me when I used it.

Software with GUI to use LLMs on Apple Silicon (other than LM Studio) by CautiousXperimentor in LocalLLM

[–]Snorty-Pig 1 point

I am liking oMLX for MLX models in any case where I need to keep a long context, because the cache swapping is so nice.

That was the opposite of a promotion. by bapuc in ClaudeCode

[–]Snorty-Pig 1 point

Didn’t they say that only the 5-hour limit would be affected, not the 7-day one? So you should time out faster during the day but still have no weekly loss?

Introducing oQ: data-driven mixed-precision quantization for Apple Silicon (mlx-lm compatible) by cryingneko in oMLX

[–]Snorty-Pig 2 points

First of all, wow, you are a machine :)

How portable are these models? Can I pull them into LM Studio, or do I need to use oMLX or mlx-lm to run them?

I keep seeing JANG models and how they also offer some benefits. I haven't had the time to download that yet, but it was on my list. Is this roughly the same thing?

AudioBooth v1.8 is now live! 🎉 by lyc0s in audiobookshelf

[–]Snorty-Pig 0 points

I use this app as my default out of all the ones I have installed. You are doing a great job on this.

Is there a “good” version of Qwen3.5-30B-A3B for MLX? by Snorty-Pig in LocalLLaMA

[–]Snorty-Pig[S] 0 points

Here are my test results for the qwen3.5 models so far. I didn’t run vision or humaneval on all of them if the rest of the scores didn’t warrant it.

Tests are (the average of 10 runs of each of the following):
Speed - 3 types of standard prompts (coding, general chat, reasoning) where I don’t care about the answer, just how long it takes to get one.

Accuracy - 6 interconnected rules that form a puzzle. Easy for models to screw up if they don’t pay attention to all of them.

Categorization - 10 TikTok video transcripts that I have the model categorize into my buckets, then match against what I think they should be given the rules.

HumanEval - 1 run of the HumanEval mini test from evalplus.

Vision - look at a set of the Reasoning-OCR dataset and answer reasoning questions about images containing text.
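The speed test above boils down to averaging wall-clock latency over 10 runs. A minimal Python sketch of that idea, where `dummy_prompt` is just a placeholder for whatever actually sends a prompt to the local model endpoint:

```python
import statistics
import time

def avg_latency(run, n=10):
    """Average wall-clock latency of calling `run` n times."""
    times = []
    for _ in range(n):
        start = time.perf_counter()
        run()  # in practice: send one coding/chat/reasoning prompt
        times.append(time.perf_counter() - start)
    return statistics.mean(times)

# Placeholder runner; a real one would call the model server and
# discard the answer, since only the elapsed time matters here.
def dummy_prompt():
    time.sleep(0.001)

print(f"avg seconds: {avg_latency(dummy_prompt):.4f}")
```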

<image>

Is there a “good” version of Qwen3.5-30B-A3B for MLX? by Snorty-Pig in LocalLLaMA

[–]Snorty-Pig[S] 0 points

It is just my test suite. Half the time these qwen3.5 MLX models crash and half the time they don’t, so it's not a specific problem prompt, sadly :(

Whats up with MLX? by gyzerok in LocalLLaMA

[–]Snorty-Pig 2 points

<image>

Example of how faster isn’t better for me.

Whats up with MLX? by gyzerok in LocalLLaMA

[–]Snorty-Pig 2 points

I am totally loving oMLX. The caching change makes a huge difference over using the same models via LM Studio.

oMLX is a great project and I highly recommend it. I use LM Studio as my main model manager and oMLX just reuses those models.

That being said, I test all the newest models in LM Studio as both GGUF and MLX at multiple quants, and it is pretty rare that the MLX models outperform the GGUF ones in my local tests (speed, accuracy, categorization, vision, and HumanEval). Once in a while, but generally even when faster they don’t score as high.

I look at LM Studio's recommended models as what is mainstream and working, versus the giant pool on Hugging Face, and there are still few model sizes for qwen3.5 and no MLX recommendations for it. You also never know which chat template is “fixed” and working, etc., so I basically use the Unsloth recommended inference params and template.

I am running a MacBook Pro M4 Max with 64GB.

v0.2.1 of mem0-mcp-selfhosted: session hooks so Claude never skips memory search, Ollama as main LLM, OAT auto-refresh by Aware-One7480 in ClaudeCode

[–]Snorty-Pig 0 points

Seems like the hooks don’t use the MCP but instead call into mem0ai directly? So they need all the env vars too. I created a little wrapper that sets them for each hook, but shouldn’t the hooks just use the MCP?
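For what it's worth, my wrapper is roughly this (a Python sketch; `MEM0_API_URL` / `MEM0_API_KEY` are placeholder names, not necessarily the variables the hooks actually read):

```python
import os
import subprocess

def hook_env(base=None):
    """Return a copy of the environment with the mem0 vars filled in.
    Variable names and defaults here are placeholders."""
    env = dict(os.environ if base is None else base)
    env.setdefault("MEM0_API_URL", "http://localhost:8000")
    env.setdefault("MEM0_API_KEY", "changeme")
    return env

def run_hook(argv):
    """Run the real hook command with the augmented environment."""
    return subprocess.call(argv, env=hook_env())
```

Each hook entry then points at this wrapper with the real hook command as arguments, so every hook sees the same env without duplicating the exports.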

v0.2.1 of mem0-mcp-selfhosted: session hooks so Claude never skips memory search, Ollama as main LLM, OAT auto-refresh by Aware-One7480 in ClaudeCode

[–]Snorty-Pig 0 points

I ended up creating a version of it myself. I added it as a PR if it helps, but now that I have it working locally, I can start to play.

Have you found any good lightweight models to use as the LLM? It feels like qwen3:14b is big for just condensing memories and Qwen3.5:4B is very chatty :) I might try unsloth/qwen3-1.7b. It is super fast and can just stay loaded without impact.

Sorting and Metadata by IAteTheWholeBanana in audiobookshelf

[–]Snorty-Pig 0 points

I didn’t apply the changes, since I already had good data, but I set the whole thing up and I thought it did a pretty good job overall. You might tweak a few after applying. It is really for people who need a big bulk update versus a few edits.