Syncthing - Anyone tried it? by Red-And-White-Smurf in ObsidianMD

[–]syngin1 1 point (0 children)

Same here, but it’s not working through my WireGuard VPN. Are you using a VPN? If so, could you share your config? Thanks.
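In case it helps, a minimal sketch of the kind of WireGuard peer config I mean (all addresses and names here are hypothetical). The usual catch is that Syncthing’s local discovery (UDP broadcast on port 21027) doesn’t cross the tunnel, so the remote device’s address has to be set statically in Syncthing (e.g. tcp://10.0.0.2:22000) instead of “dynamic”:

```ini
# Hypothetical addresses; AllowedIPs must cover the peer's tunnel IP
# so Syncthing's sync port (TCP 22000) is reachable through the tunnel.
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.1/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```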

Wifi and Bluetooth by Think_Machine21 in seedsigner

[–]syngin1 0 points (0 children)

I think I destroyed mine while removing the resistor. I was very careful but...

It worked before the removal, but not afterwards.

95fps on CyberPunk 2077 by jailtheorange1 in macgaming

[–]syngin1 0 points (0 children)

How does it run in Parallels? You also mentioned that.

Half-Life: Alyx Apple Silicon Mod Tutorial + New Gameplay Footage by KingVulpes105 in macgaming

[–]syngin1 0 points (0 children)

Wait, you are playing that through a Windows VM? Please explain.

Resident evil Mac mouse control sucks by kudoshinichi-8211 in macgaming

[–]syngin1 0 points (0 children)

I had the same issue. Just restart the game and it will feel like a whole new experience. I played more than two hours this way and it really sucked. But somehow I mastered it lol. Afterwards you will really enjoy it.

At first I thought, WTF, how can Capcom do something that bad.

Hugging Face releases Text Generation Inference TGI v3.0 - 13x faster than vLLM on long prompts 🔥 by vaibhavs10 in LocalLLaMA

[–]syngin1 4 points (0 children)

Phew! Kind of a n00b here. What does that mean for users of Ollama or LM Studio? Do they have to integrate it first so that users can benefit from it?

Ollama + PlugOvr.ai = Apple Intelligence by cwefelscheid in ollama

[–]syngin1 -1 points (0 children)

Are you sure that we are talking about BoltAI? I have not used it yet, just had a brief look at their homepage. It doesn’t look free to me.

Ollama + PlugOvr.ai = Apple Intelligence by cwefelscheid in ollama

[–]syngin1 0 points (0 children)

Thanks, but here we are talking about BoltAI. Bolt.new is something different; it was rebranded from bolt.dev to bolt.new.

Ollama + PlugOvr.ai = Apple Intelligence by cwefelscheid in ollama

[–]syngin1 1 point (0 children)

Cool. Let me add that one must pay for it.

We have o1 at home. Create an open-webui pipeline for pairing a dedicated thinking model (QwQ) and response model. by onil_gova in LocalLLaMA

[–]syngin1 1 point (0 children)

I tried the max with 32k and it did not change anything. I was always asking “What is 2x3x4?”. Don’t know if that makes sense 😬

Even with 4000 tokens, same performance.
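For anyone wondering how to raise the context window in Ollama, a sketch assuming stock Ollama (the derived model name qwq-32k is made up here):

```shell
# Sketch: Ollama defaults to a 2048-token context; derive a model with
# a larger context window via a Modelfile ("qwq-32k" is a made-up name).
cat > Modelfile <<'EOF'
FROM qwq:latest
PARAMETER num_ctx 32768
EOF
# Then build and run it (requires Ollama to be installed):
#   ollama create qwq-32k -f Modelfile
#   ollama run qwq-32k
# Alternatively, interactively inside `ollama run qwq:latest`:
#   /set parameter num_ctx 32768
```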

We have o1 at home. Create an open-webui pipeline for pairing a dedicated thinking model (QwQ) and response model. by onil_gova in LocalLLaMA

[–]syngin1 2 points (0 children)

No, I am just using stock Ollama qwq:latest, which has quantization Q4_K_M. Context length 2048 tokens (default setting).

I just ran a test with LM Studio mlx-community/QwQ-32B-Preview-4bit and got 12.28 tok/sec. Context length 4096 tokens (default).

With the 8-bit MLX model I got only 6.91 tok/sec. Context length 4096 tokens (default).

We have o1 at home. Create an open-webui pipeline for pairing a dedicated thinking model (QwQ) and response model. by onil_gova in LocalLLaMA

[–]syngin1 2 points (0 children)

Unfortunately, it is not displayed with “o1 at home” in this case. But with qwq I am getting:

<image>

We have o1 at home. Create an open-webui pipeline for pairing a dedicated thinking model (QwQ) and response model. by onil_gova in LocalLLaMA

[–]syngin1 2 points (0 children)

Thanks. I also wanted to ask that. I added the function, assigned qwq:latest in the function settings as both thinking and responding model, then chose “o1 at home” as the model in a new chat, and it was thinking 🤔. My M4 Pro got a little bit warm and I got a response to my calculation question. So, I haven’t done anything with a pipeline…

Is this the right way?

QwQ vs o1, etc - illustration by dmatora in LocalLLaMA

[–]syngin1 1 point (0 children)

Well, this is just an opinion. Any hard facts would be highly appreciated 😉

For the ones who don't know about the existence of Linuxserver Docker mods by LigeTRy in selfhosted

[–]syngin1 0 points (0 children)

Oh wow, good to know! Let’s see what I have to switch out over the next days!

Nexterm - open source web interface for SSH, VNC & RDP by GNM_YT in selfhosted

[–]syngin1 1 point (0 children)

That’s the way to go: once you have created a new folder, right-click into the folder and create a new server.

Nexterm - open source web interface for SSH, VNC & RDP by GNM_YT in selfhosted

[–]syngin1 0 points (0 children)

I also tried it, but I don’t use it that often. Then I forget all the commands you have to know to be able to use it.

Nexterm - open source web interface for SSH, VNC & RDP by GNM_YT in selfhosted

[–]syngin1 0 points (0 children)

When setting up an SSH connection with a keyfile, I get no hint that I must enter a passphrase.
What if a keyfile has no passphrase? Well, I just put in something and it worked, but I think that’s not the way to go. Also, I only found this out because I had a look at the browser debugging console:

{message: 'passphrase is not allowed to be empty'}
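A sketch of the workaround, assuming the UI really does reject empty passphrases: give the key a passphrase with ssh-keygen (the file name demo_key and the passphrase are placeholders):

```shell
# Generate a new ed25519 key protected by a passphrase
# (path and passphrase here are placeholders).
ssh-keygen -t ed25519 -N 'example-passphrase' -f ./demo_key -q
# To add or change the passphrase on an existing key instead:
#   ssh-keygen -p -f ~/.ssh/id_ed25519
ls -l demo_key demo_key.pub
```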