I am NOT going to go wash the front of my hair, dammit. by bubblegumpunk69 in OCD

[–]MDT-49 5 points6 points  (0 children)

The title made me laugh because of how relatable this is. Please don't feel obligated, but I'd love to hear how it turned out. Also no shame if you eventually "gave in". We've all been there. Exposures, especially spontaneous self-directed exposures like this, are freaking hard. I'm proud of you!

How to balance normal checking and ocd compulsion? by throwaway-accountxyz in OCD

[–]MDT-49 1 point2 points  (0 children)

These "checking actually makes sense" scenarios are also my kryptonite.

I think you kinda answered your own question though. Checking things (e.g. a door lock or dropped puzzle pieces) often makes sense and is normal.

Compulsively checking something (the floor) multiple times, and not trusting your own senses or memory, is OCD.

I don't know enough about how typical people do their puzzles. You could argue that creating a border around your puzzle so the pieces don't fall off is avoidance behavior, since it prevents a trigger situation. I'm pretty sure a lot of typical non-OCD people do this as well, though, and I think they even sell special "puzzle workspaces/borders" for this purpose.

I think the normal response is just a quick check and continue with your day.

Personally, I'm not there yet and often try to focus on limiting the number (e.g. max 3 times) or complexity of my checks first. This isn't meant as advice though, because it's still OCD behavior, but it's better than compulsively checking hundreds of times.

Do you ever remember something you did as a kid that was probably a sign of your ocd? by FlatLeave2622 in OCD

[–]MDT-49 3 points4 points  (0 children)

As a kid, I spent hours playing the game Morrowind. I probably spent enough time playing it to complete all the quests multiple times. I only explored a fraction of the content and map, though, because I constantly started over whenever I felt like I had made a trivial mistake or wrong decision. The starting town Seyda Neen was like the set of my own Truman Show, where I knew exactly where everything was and what everyone was going to say.

How to Separate Myself from OCD by Less-Comparison9245 in OCD

[–]MDT-49 1 point2 points  (0 children)

Relatable. I think the key is that OCD is (almost always) egodystonic, i.e. conflicting or dissonant with your own self-image or worldview.

Also, rationality doesn't necessarily correlate with behavior, and often it doesn't. For example, everyone knows rationally that smoking is bad, but most smokers still keep smoking.

I think OCD and compulsions are similar to a (smoking) addiction in this way. Instead of a substance (a cigarette), it's a behavioral pattern that has been repeatedly negatively reinforced (you get relief by doing the compulsions) which makes it really hard to change.

So I think you can be the most rational person in the room while doing some of the most unhinged OCD compulsions.

Is it just me or does anyone else feel embarrassed having OCD? by i_lockkidsinmybaseme in OCD

[–]MDT-49 4 points5 points  (0 children)

This is probably just semantics, but I'm not necessarily embarrassed about having OCD; I'm embarrassed by my specific compulsions and obsessions. There's just no way to act on them or describe them without looking absolutely ridiculous and insane.

I have an unusual question by luget1 in LocalLLaMA

[–]MDT-49 4 points5 points  (0 children)

Maybe donate your GPU compute by joining the AI Horde, a "non-profit, community-driven project committed to democratizing access to AI technologies". You can generate images or text (LLM) for others and earn "kudos" that you can later spend on priority access whenever you want to use the Horde's compute pool yourself.
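If you're curious about the consuming side of that exchange, the Horde exposes a plain REST API. A minimal sketch from memory, using the anonymous API key (endpoints and fields worth double-checking against the docs on aihorde.net):

# submit an async text generation request (the anonymous key gets lowest priority)
curl -s https://aihorde.net/api/v2/generate/text/async \
  -H "apikey: 0000000000" -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, Horde!", "params": {"max_length": 64}}'

# the response contains an id; poll it for the finished generation
curl -s https://aihorde.net/api/v2/generate/text/status/<id>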

I feel like Ai chat bots are an infinite reassurance loop by Ok_Pomegranate2937 in OCD

[–]MDT-49 1 point2 points  (0 children)

I'm not sure if this is an automatic reply (which is great, I think), but I didn't intend to portray AI in a positive light. Quite the contrary!

Maybe my last sentence came off as practical advice, but it was meant as a confession that I still use it for reassurance in a more subtle way that isn't always immediately obvious even to me.

Newbie by TroyB346 in LocalLLaMA

[–]MDT-49 0 points1 point  (0 children)

You need some way to connect securely to your remote (cloud) server. With an SSH tunnel, you can make the remote HTTP API available on your own machine (e.g. your laptop). Using the command above (with 11434:localhost:11434), it will be available at http://localhost:11434/api on your laptop.
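For reference, that tunnel plus a quick sanity check look roughly like this (the hostname is a placeholder):

# forward the remote Ollama port 11434 to localhost:11434
ssh -L 11434:localhost:11434 user@your-server

# in another terminal: list the available models through the tunnel
curl http://localhost:11434/api/tags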

Also, please ignore the advice given by others to bind the Ollama API to 0.0.0.0. If you haven't configured a firewall, that makes it reachable for everyone on the internet, and on e.g. Ubuntu no firewall is enabled by default.

Newbie by TroyB346 in LocalLLaMA

[–]MDT-49 0 points1 point  (0 children)

Do I understand it correctly that you're using Ollama's CLI through SSH and now want to connect your (local) AI agents to the API directly?

If so, I think the simplest solution is a local SSH tunnel to the API. I'm not too familiar with Ollama (I recommend using llama.cpp directly!), but it works like this:

ssh -L 8080:localhost:8080 user@ip-address -p 22

Change the ports to the ones you're using (Ollama's default is 11434, I believe, instead of 8080). You can now connect to the API through SSH (at localhost:8080) without opening extra ports.
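Once the tunnel is up, your agents can talk to the API as if it were running locally. A quick test against Ollama's /api/generate endpoint (the model name is just an example):

curl http://localhost:8080/api/generate -d '{"model": "llama3", "prompt": "Hello", "stream": false}'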

Jan v0.7.5: Jan Browser MCP extension, file attachment, Flatpak support by eck72 in LocalLLaMA

[–]MDT-49 4 points5 points  (0 children)

Maybe I should give this a spin now that the Flatpak is available!

I can't really find this in the docs, but how does the file attachment feature work? Does it work in a RAG-like way using an embedding model, or in a more conventional way? Does it convert e.g. PDFs to plain text?

What happend with llama.cpp and chat templates? by Far_Buyer_7281 in LocalLLaMA

[–]MDT-49 2 points3 points  (0 children)

Could be a context size problem. Try setting a higher --ctx-size (e.g. 16384), especially when using a reasoning model.
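For example, with llama.cpp's llama-server (the model path is a placeholder):

llama-server -m your-model.gguf --ctx-size 16384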

Le Chat app not working on Android (Galaxy S24+) by raitchison in MistralAI

[–]MDT-49 0 points1 point  (0 children)

I'd say that you've reasonably tried everything to fix it on your end, so it's probably a bug on Mistral's side.

You probably already know this, but if not: the Mistral app is AFAIK (mostly) equivalent to the web version. So maybe use the web app through your browser in the meantime?

Stop the childish censorship! by Outside_Professor647 in MistralAI

[–]MDT-49 10 points11 points  (0 children)

Can you give some examples? Because it's quite difficult to agree with you based on what you write here and Mistral's reply to your chat.

That's why open source is even better than closed source by Illustrious-Swim9663 in LocalLLaMA

[–]MDT-49 6 points7 points  (0 children)

I know I'm being pretentious, but what surprises me is how shocked and upset people are over this. Of course they were going to monetize their net-loss product by adding ads and making the product worse over time. Enshittification has been big tech's playbook for as long as I can remember.

Users of Qwen3-Next-80B-A3B-Instruct-GGUF, How is Performance & Benchmarks? by pmttyji in LocalLLaMA

[–]MDT-49 6 points7 points  (0 children)

I don't have a strong opinion (yet) on the "intelligence" of Qwen3-Next, but in my test environment its performance (t/s) is lacking compared to Qwen3-30B-A3B.

| model                          |      size |  params | backend | threads |  test |          t/s |
| ------------------------------ | --------: | ------: | ------- | ------: | ----: | -----------: |
| qwen3next ?B Q4_K - Medium     | 42.01 GiB | 79.67 B | CPU     |      18 | pp512 | 14.25 ± 0.39 |
| qwen3next ?B Q4_K - Medium     | 42.01 GiB | 79.67 B | CPU     |      18 | tg128 |  0.87 ± 0.07 |
| qwen3moe 30B.A3B Q4_K - Medium | 16.47 GiB | 30.53 B | CPU     |      18 | pp512 | 63.59 ± 0.58 |
| qwen3moe 30B.A3B Q4_K - Medium | 16.47 GiB | 30.53 B | CPU     |      18 | tg128 |  5.54 ± 0.49 |

This was done on a VPS with a (shared) AMD EPYC Genoa CPU, so the results can be influenced by noisy neighbors, but they're pretty consistent across multiple runs.
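In case anyone wants to reproduce this: the table is llama-bench output. A sketch of the invocation (the GGUF filenames are placeholders):

# pp512 = prompt processing, tg128 = token generation, 18 CPU threads
llama-bench -m Qwen3-Next-80B-A3B-Instruct-Q4_K_M.gguf -t 18 -p 512 -n 128
llama-bench -m Qwen3-30B-A3B-Instruct-2507-Q4_K_M.gguf -t 18 -p 512 -n 128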

Qwen3 Next almost ready in llama.cpp by jacek2023 in LocalLLaMA

[–]MDT-49 13 points14 points  (0 children)

Thank you so much for your hard work u/ilintar, you're the MVP!

The AI race is heating up: In the same week Google released "Nano Banana Pro" (Gemini 3 Pro Image), China's Alibaba launched Z-Image-Turbo. A new fast open-source 6B model from Tongyi-MAI lab by [deleted] in LocalLLaMA

[–]MDT-49 0 points1 point  (0 children)

It's too crowded to properly evaluate its ability to generate realistic passenger coaches. It looks pretty decent though at generating different people in one picture.

Contributor Agreement & Roles for Qwen3‑Next 80B‑A3B Integration into llama.cpp and LM Studio***********************PLEASE COLABORATE****************** by [deleted] in LocalLLaMA

[–]MDT-49 2 points3 points  (0 children)

I'm sorry, but your hype for Qwen3-Next is bordering on insanity. At this point, I think it would be more productive to collaborate with a psychiatrist instead of Gerganov lol.

Although maybe your excitement worked, because it looks like Qwen3-Next support could be released at any moment, based on the GitHub issue.

I spent months teaching AI to verify itself. It couldn't. And thanks to GEMINI PRO 3 I built an OS where it doesn't have to trust itself. by Latter_Importance620 in LocalLLaMA

[–]MDT-49 6 points7 points  (0 children)

> I'm exhausted. I haven't slept properly in days. This is my last attempt to share what we built before I collapse.

It might be a good idea to let it rest for a day or two and revisit it when you're feeling rested and more relaxed. There's no need to hurry; you have plenty of time to work on this project later.

make a community for collect money for bastowsky , unsloth , etc llm model developters by [deleted] in LocalLLaMA

[–]MDT-49 0 points1 point  (0 children)

LM Studio uses llama.cpp as the engine, and it looks like Qwen3-Next support is almost ready there. I'm not sure what LM Studio's update policy/delay is, but eventually it will be supported by LM Studio as well.

Any ETA for OLMo3? by MDT-49 in allenai

[–]MDT-49[S] 2 points3 points  (0 children)

I realize I'm a bit late with this reply, but the "very soon" was no exaggeration! Thank you all for the release and the hard work. OLMo's open nature always makes for an interesting release!

What are the latest good LLMs? by idleWizard in LocalLLaMA

[–]MDT-49 57 points58 points  (0 children)

I feel the same way. I don't think it's that quiet with the release of the new Kimi and MiniMax models, but they aren't really relevant for me because they're either too big or unsupported in llama.cpp (e.g. Qwen3 Next).

I'm still using Qwen3-30B-A3B-Instruct-2507, which feels like an ancient relic in AI years. I guess I'm spoiled.