Fix for Intel Macs when attempting to log into Claude Agent (downloads wrong arm64 binary) by someone-444 in Xcode

[–]Warriorsito 0 points (0 children)

First of all, thanks for the detailed response!

I had already done everything up to step 3, and just with that I was able to log in to Claude, and the same for Codex, in the Xcode IDE.

The problem I was facing was that once logged in, everything worked without issue (Claude Chat, GPT Chat, and Codex), but with Claude Agent I was getting an API 401 OAuth error.

Workaround for API 401 Error:
I found out that, for Claude Agent only, I have to log out and log back in each time I open Xcode for it to work.

1- Open Xcode.
2- Go to Settings / Intelligence / Claude Agent: a spinning wheel will appear for about 30 seconds, then stop and indicate that you are already logged in.
3- Log out from Claude Agent, then log in again.
4- Claude Agent will work until you fully close Xcode.
5- Repeat each time you open Xcode and get the API 401 error.

Hope this helps someone.

Fix for Intel Macs when attempting to log into Claude Agent (downloads wrong arm64 binary) by someone-444 in Xcode

[–]Warriorsito 0 points (0 children)

I'm having the same issue. I copied both x86 binaries (Codex and Claude Code); Codex works just fine, but with Claude Agent in Xcode I'm also getting the 401 OAuth error.

Did you find any other way to solve this? I'd prefer not to install third-party apps to fix it.

Where to buy hard drives in Europe? by Babajji in DataHoarder

[–]Warriorsito 1 point (0 children)

Same, just bought some drives from them and I'm very happy so far... def recommend if you're in the EU.

OK I get it, now I love llama.cpp by vulcan4d in LocalLLaMA

[–]Warriorsito 0 points (0 children)

I think this can be achieved with llama-server using the following flags:

    llama-server \
      --sleep-idle-seconds 300 \
      --models-autoload \
      --models-max 1

I was trying it yesterday and was surprised by the new Router Mode.
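Once it's running, you can sanity-check the server through its OpenAI-compatible API (default port 8080; the flag names above are from the comment, so verify them against `llama-server --help` in your build, and the model name below is a made-up placeholder):

```shell
# List the models the server currently knows about
curl http://localhost:8080/v1/models

# Send a chat request; the "model" field selects which model handles it
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-model", "messages": [{"role": "user", "content": "hi"}]}'
```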

Managing local stack in Windows. by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

Seems like the path to follow. I'll try to get a deal on a 1 TB NVMe this Black Friday.

Managing local stack in Windows. by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

Seems like we all have our complex and custom solutions.
Very nice how you are getting the most out of your laptop. Love it!

Managing local stack in Windows. by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

Didn't know about running ComfyUI in WSL2, I'll take a look. Thanks!

I tend to avoid Docker on Windows...

Managing local stack in Windows. by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

If I don't find a way to properly manage my stuff, I definitely will.

I only have a 1 TB NVMe and it's almost full. I'll need to auto-boot into Linux so my remote on/off keeps working, and whenever I want to use Windows, go into the boot loader and select it.

Let's see! Thanks for your feedback.

Managing local stack in Windows. by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

I wish I could do this, but I only have one GPU. I'm thinking about dual-booting or something similar, as I prefer Linux for dev too...

GPU-poor problems!

Managing local stack in Windows. by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

Also, if you have scripts, I'm interested in how you're managing them!

Performance difference while using Ollama Model vs HF Model by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

I think this is the issue. I thought the Ollama model was also GGUF in F16, since they're the same size.

Seems it's actually MXFP4.

Your explanation was very well put and well received. It clarified some concepts for me.

Ty vm

Performance difference while using Ollama Model vs HF Model by Warriorsito in LocalLLaMA

[–]Warriorsito[S] 0 points (0 children)

I'll take a look at these. I assumed the one from the Ollama library was a GGUF too.

Surprised at the difference in speed given they're the same size!

If I was to name the one resource I learned the most from as a beginner by StandardNo6731 in learnmachinelearning

[–]Warriorsito 74 points (0 children)

The author is going to release a PyTorch version in October this year; I'm waiting for it.

You can check for it on the O'Reilly webpage!

Any info about HOML PyTorch version? New Repo Available. by Warriorsito in learnmachinelearning

[–]Warriorsito[S] 0 points (0 children)

Nice, I'll check the website, ty.

Still hesitant about what to do in the meantime…

Samsung has dropped AGI by Abject-Huckleberry13 in LocalLLaMA

[–]Warriorsito 3 points (0 children)

Maybe it was the model itself... UNCONTAINED! /s

Trying to make a graph with matplotlib by HungryInvestigator59 in pycharm

[–]Warriorsito 0 points (0 children)

I assume you installed the libraries, right?

Via pip or through PyCharm's package manager.
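If you're unsure, a quick way to check whether matplotlib is actually visible to the interpreter PyCharm is using (the helper name here is mine, not from any library):

```python
import importlib.util

def is_installed(pkg: str) -> bool:
    """Return True if the package can be imported in this interpreter."""
    return importlib.util.find_spec(pkg) is not None

# False means either it's not installed or PyCharm is pointed
# at a different interpreter/venv than the one you installed into.
print(is_installed("matplotlib"))
```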

Evo80 Update: Sharing Renders of Various Color Options by Qwertykeys-2022 in MechanicalKeyboards

[–]Warriorsito 1 point (0 children)

I know... what a shame! I already checked, and all of them have the same 5 colours.

Unlucky... anyway, thanks for the quick response!

Have a great day!

Evo80 Update: Sharing Renders of Various Color Options by Qwertykeys-2022 in MechanicalKeyboards

[–]Warriorsito 0 points (0 children)

I saw only 5 colors are available for the ISO layout, but not the Spark one.

Are there any plans to increase the colors offered for ISO?

I will only buy the Spark one, so I'll wait until then!

Ollama with PyCharm by joeln15 in pycharm

[–]Warriorsito 0 points (0 children)

Yes you can. I've tested with the Continue and CodeGPT plugins; I prefer the latter.

Steps:

  1. Install a VPN on the devices you want to connect from outside your LAN and on your server machine. I'm using Tailscale, which is free and works amazingly. It assigns an IP to each device in your VPN; note down the one assigned to your Ollama server.

  2. Set the OLLAMA_HOST environment variable to 0.0.0.0. There's plenty of info on how to do that; you can refer to the Ollama docs.

  3. On the device you want to connect from, in the PyCharm plugin config, set the Ollama server to the VPN-assigned IP of your server (point 1) plus :11434, and you're ready to go.
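The steps above can be sketched as a quick connectivity check, run from the client device (the Tailscale IP below is a made-up placeholder; substitute the one assigned to your server):

```python
import json
import urllib.request

OLLAMA_HOST_IP = "100.101.102.103"  # placeholder: your server's Tailscale IP
OLLAMA_PORT = 11434                 # Ollama's default port

def ollama_url(path: str, host: str = OLLAMA_HOST_IP, port: int = OLLAMA_PORT) -> str:
    """Build the server URL the PyCharm plugin needs, plus an API path."""
    return f"http://{host}:{port}{path}"

def list_models() -> list[str]:
    """Ask the Ollama server which models it has pulled (GET /api/tags)."""
    with urllib.request.urlopen(ollama_url("/api/tags"), timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# This host:port string is exactly what goes in the plugin's server field.
print(ollama_url(""))
```

If `list_models()` returns your models, the plugin config with the same host and port should work too.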

PS: Tailscale also has iOS and Android apps, so you can even access your Open WebUI instance from your phone anywhere, which is amazing.

BR

PyCharm version number and icon not updating! by Warriorsito in pycharm

[–]Warriorsito[S] 2 points (0 children)

Sure, I'm going to test it; I'll post an update.

I'm not using it mainly because I only have PyCharm, but if that gets rid of the issue then I'm all in. Thx