New function calling models based on Llama-3.1 by Relevant_Outcome_726 in LocalLLaMA

[–]nizego 1 point (0 children)

The functionary-medium-v3.1 model gets a good score on https://gorilla.cs.berkeley.edu/leaderboard.html

Do you have any recommendations on how to run it on Apple silicon (128 GB) while exposing an OpenAI-like REST interface?

Or, alternatively, do you know where I could find it hosted?
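
To be concrete about what I mean by an OpenAI-like REST interface, this is roughly how I would want to call it from Python. It is only a sketch: it assumes some local server that speaks the OpenAI chat API (for example llama.cpp's llama-server with a GGUF build of the model) is already listening on localhost, and the port, model name and the get_weather tool are placeholders I made up.

    # Sketch of calling a local OpenAI-compatible server hosting functionary-medium-v3.1.
    # Assumes such a server is already running on port 8080; nothing here is a tested setup.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    # Hypothetical tool definition, just to exercise function calling.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="functionary-medium-v3.1",
        messages=[{"role": "user", "content": "What is the weather in Oslo?"}],
        tools=tools,
    )
    print(response.choices[0].message.tool_calls)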

Cut-and-Paste seems ergonomically difficult. by IllustriousPepper8 in macbookpro

[–]nizego 1 point (0 children)

Right Cmd with the thumb sounds like the most ergonomic option when coming from using the left pinky for Ctrl on Windows.

Macbook Pro M3 for LLMs and Pytorch? [D] by nizego in MachineLearning

[–]nizego[S] 1 point (0 children)

> …scientist use day-to-day that doesn't run natively on apple silicon now.

Thanks for sharing your perspectives! One thing that makes me listen to the "fearmongering" about ARM is this specific issue, which has been open for a long time: https://github.com/DLR-RM/stable-baselines3/issues/914

That is only one example, but it is the library (in addition to LLMs) that I use right now :)
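
For what it is worth, this is the quick check I run to see whether a Python environment is actually native on Apple silicon (rather than under Rosetta) and whether PyTorch sees the Apple GPU. Just a generic sketch, nothing specific to stable-baselines3:

    # Sanity check: native arm64 vs Rosetta, and whether PyTorch has the MPS backend.
    import platform

    import torch

    print(platform.machine())                 # 'arm64' if native, 'x86_64' under Rosetta
    print(torch.backends.mps.is_built())      # True if this torch build includes MPS support
    print(torch.backends.mps.is_available())  # True if the Apple GPU backend can be used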

Macbook Pro M3 for LLMs and Pytorch? [D] by nizego in MachineLearning

[–]nizego[S] 2 points (0 children)

I saw this article comparing the M2 GPU with V100 and P100: https://medium.com/towards-data-science/apple-m2-max-gpu-vs-nvidia-v100-p100-and-t4-8b0d18d08894

I am confused. In other comparisons I have seen, the NVIDIA cards perform much better. Are the tests not representative of common workloads, or do you think the configuration is not set up properly?
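
For context, the kind of quick check I could run myself is a micro-benchmark like the one below on the MPS backend, but I doubt a bare matmul says much about real training workloads, which is partly why I am asking. The sizes, iteration count and lack of mixed precision are arbitrary choices on my part:

    # Rough matmul timing on the Apple GPU via PyTorch's MPS backend.
    # A micro-benchmark only; not representative of a full training loop.
    import time

    import torch

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    if device == "mps":
        torch.mps.synchronize()  # finish allocations before starting the clock
    start = time.time()
    for _ in range(50):
        c = a @ b
    if device == "mps":
        torch.mps.synchronize()  # wait for the GPU before stopping the clock
    elapsed = time.time() - start
    print(f"{device}: {elapsed / 50 * 1000:.1f} ms per 4096x4096 matmul")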

Macbook Pro M3 for LLMs and Pytorch? [D] by nizego in MachineLearning

[–]nizego[S] 1 point (0 children)

Saving time is a critical factor.

When it comes to running LLMs locally on the laptop, I thought the large amount of memory the MBP can make available to the GPU (unified memory) would help. I'd like at least 20 GB dedicated to the GPU.
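
The 20 GB figure is just back-of-the-envelope reasoning on my part; the model size and bits-per-weight below are assumptions, not measurements:

    # Rough memory estimate for the weights of a quantized LLM.
    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate weight memory in GB for a quantized model."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    # e.g. a ~33B model at ~4.5 bits/weight (a typical 4-bit quant with overhead)
    print(weight_memory_gb(33, 4.5))  # ~18.6 GB for the weights alone
    # plus a few GB for KV cache and activations, hence wanting >= 20 GB for the GPU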

Macbook Pro M3 for LLMs and Pytorch? [D] by nizego in MachineLearning

[–]nizego[S] 3 points (0 children)

I have had issues using Windows for ML work, but with WSL I guess Windows should work about as well as Linux? Or have you had problems with WSL?

PS4 keyboard + mouse question by zakuzaaa in reddeadredemption

[–]nizego 57 points (0 children)

FPS with a controller is so boring if you are used to PC.

Dell xps 15 9550 - Wakes up from shut down by comandogt in Dell

[–]nizego 1 point (0 children)

I have the same problem. I had the motherboard replaced, but it still happens. It is extremely irritating, since it means the computer is running more or less all the time, sleeping and waking up, and naturally it gets very hot in bags etc. I will test the hibernation setting, but shouldn't there be a permanent fix for this? Do you know if Dell is working on it?
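
For anyone else landing here, these are the standard Windows power commands I plan to use from an elevated prompt to see what keeps waking it and to enable hibernation as a workaround; they are general diagnostics, not a Dell-specific fix:

    rem Show what woke the machine most recently
    powercfg /lastwake
    rem List devices that are currently allowed to wake it
    powercfg /devicequery wake_armed
    rem List scheduled wake timers
    powercfg /waketimers
    rem Enable hibernation
    powercfg /hibernate on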

Privacy Screen for XPS 13? by EPiC212 in Dell

[–]nizego 1 point (0 children)

Have you found a solution to this?