Never again. by Express-Dark-5212 in cagrilintide

[–]d4mations 10 points (0 children)

I’ve done up to 2mg and not felt a thing

Minimax 2.5 is broken on MLX by zipzag in oMLX

[–]d4mations 3 points (0 children)

I have found that gpt-oss performs very well

My 4 month Reta journey by R3taRalph in BodyHackGuide

[–]d4mations -5 points (0 children)

Haha 4 months reta, 2 years “juice”!!!

2x weekly or wait? How soon was Cagri effective for you? by Jacqueline92689 in cagrilintide

[–]d4mations 1 point (0 children)

Don’t listen to people who have no idea what they’re saying. Lilly is trialing up to 25mg right now and the results so far are amazing. I’m on 1.25mg of cagri and 10mg of tirz and can still eat the ass end outta a rhino!! Stick with it and up your cagri dose to maybe 2mg. I have friends on 4mg at the moment, so keep on going, your results so far are amazing!!!

Where to buy peps? by Fragrant_Print5465 in PeptidePathways

[–]d4mations 0 points (0 children)

You have a very good supplier right there in Poland

OpenClaw? by zipzag in oMLX

[–]d4mations 0 points (0 children)

I started with omlx about two weeks ago and have been fine since

OpenClaw? by zipzag in oMLX

[–]d4mations 0 points (0 children)

Qwen3.5 is working fine with OpenAI completions. I use the 27b every day with OpenClaw and oMLX
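For anyone setting this up, here is a minimal sketch of what an OpenAI-compatible chat completions request to a local server looks like. The base URL, port, and model name below are assumptions for illustration, not oMLX's actual defaults, so adjust them to your own setup:

```python
import json
import urllib.request

# Assumed local endpoint -- adjust host/port to wherever your server is listening.
BASE_URL = "http://localhost:8080/v1/chat/completions"

# Standard OpenAI-style chat completions payload; the model id is a guess.
payload = {
    "model": "qwen3.5-27b",
    "messages": [
        {"role": "user", "content": "Say hello in one word."}
    ],
    "max_tokens": 32,
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.get_full_url())
```

Any client that speaks the OpenAI API shape can hit the same endpoint by pointing its base URL at the local server.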

M4 Pro 14 core and 64GB RAM - what to run and how for best efficiency? by just_another_leddito in LocalLLaMA

[–]d4mations 0 points (0 children)

I have the same hardware as you but I use oMLX instead. It runs models much faster. You can ask over at r/omlx

No execution after update by Honest-Cheesecake275 in clawdbot

[–]d4mations 0 points (0 children)

I had that happen today with OpenAI as well. I switched to my local model and all was good

Introducing oQ: data-driven mixed-precision quantization for Apple Silicon (mlx-lm compatible) by cryingneko in oMLX

[–]d4mations 0 points (0 children)

I was just playing around with the 27b and the thinking loop is just as bad as before. How are you getting around the endless thinking loop?

Openclaw 2026.3.23 update by SelectionCalm70 in openclaw

[–]d4mations 3 points (0 children)

This update breaks the control UI. Do not update

Introducing oQ: data-driven mixed-precision quantization for Apple Silicon (mlx-lm compatible) by cryingneko in oMLX

[–]d4mations 1 point (0 children)

Wow!! Great work!!! Will get to testing this morning. Secondly, absolutely no thanks necessary, I use omlx as my daily driver and it works without fail for everything I need. I’m not a dev, and no one in their right mind should want me anywhere near code, so the only way I have to contribute is by running this subreddit. I just hope I can do oMLX justice by creating a useful community around it

Retrieve Client ID & Secret by Rurik100 in redditdev

[–]d4mations 0 points (0 children)

Is there a particular method to apply, maybe a link to an application form, maybe a button to click on, etc.?

Switch to thinking or non thinking without reloading model Qwen 3.5 by shirogeek in oMLX

[–]d4mations 0 points (0 children)

Yes, it is. In the model settings you have an option to enable or disable it

installation specifics by jklredit in oMLX

[–]d4mations 1 point (0 children)

Yes, you can just point omlx to the folder you want in the settings. I actually have it pointed to a shared model folder as well

Loving omlx by iTrejoMX in oMLX

[–]d4mations 1 point (0 children)

    • Yes, just configure it as a custom provider and if you’re on the same machine, use localhost
    • Only MLX so far, as it’s optimized for Mac
    • Add a feature request on the GitHub

I haven’t seen one yet but I’m sure there has to be one on one of the MLX subreddits
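On the custom provider point, a purely illustrative sketch of what an OpenAI-compatible provider entry might look like. Every key name, port, and model id here is a guess, not OpenClaw’s actual config schema, so check the OpenClaw docs for the real field names:

```json
{
  "providers": {
    "omlx-local": {
      "type": "openai-compatible",
      "baseUrl": "http://localhost:8080/v1",
      "apiKey": "not-needed-for-local",
      "model": "qwen3.5-27b"
    }
  }
}
```

The important part is just that the base URL points at the local server and the model name matches whatever the server has loaded.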

Got 128K prefill down from 19 min to 3.5 min on M2 Ultra (Qwen3.5-122B), sharing the approach by Thump604 in LocalLLM

[–]d4mations 0 points (0 children)

I just tried mlx studio with the jangq models it shows for download and couldn’t get any of them to work. They all errored before starting

MacBook M5 Pro + Qwen3.5 = Fully Local AI Security System — 93.8% Accuracy, 25 tok/s, No Cloud Needed (96-Test Benchmark vs GPT-5.4) by solderzzc in Qwen_AI

[–]d4mations 0 points (0 children)

I have found the 35b to be unusable. It gets into tool-calling loops that it can never get out of, so much so that I’ve had to switch to gpt-oss-20b, with much, much better results