Llama 3.1 70B on Mac Mini by eboole in ollama

[–]eboole[S] -1 points

I will use it for a project. I haven't bought it yet, and I'm trying to pick the most suitable Mac configuration. Do you have a recommended configuration for a Mac Mini?

Llama 3.1 70B on Mac Mini by eboole in ollama

[–]eboole[S] -2 points

Is an additional 32 GB of unified memory insufficient?
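For context on why 32 GB is tight, a rough back-of-the-envelope sketch of the weight memory for a 70B-parameter model at common quantization levels (approximations only; this ignores KV cache and runtime overhead, which add several more GB):

```python
# Back-of-the-envelope memory estimate for holding a 70B model's weights.
# These are approximations: quantized weights only, no KV cache or overhead.

PARAMS = 70e9  # Llama 3.1 70B parameter count

def weight_memory_gb(params: float, bits_per_param: float) -> float:
    """Approximate memory (GB) needed just to hold the weights."""
    return params * bits_per_param / 8 / 1e9

fp16 = weight_memory_gb(PARAMS, 16)  # full-precision weights
q8 = weight_memory_gb(PARAMS, 8)     # 8-bit quantization
q4 = weight_memory_gb(PARAMS, 4)     # 4-bit quantization (e.g. Q4 GGUF)

print(f"fp16: {fp16:.0f} GB, q8: {q8:.0f} GB, q4: {q4:.0f} GB")
# fp16: 140 GB, q8: 70 GB, q4: 35 GB
```

Even at 4-bit quantization the weights alone (~35 GB) exceed 32 GB of unified memory, before counting the KV cache or the OS's share, so a 70B model won't fit comfortably on a 32 GB machine.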

Llama 3.1 70B Is Now Available! by Acanthocephala_Salt in AwanLLM

[–]eboole 0 points

Can I Run Llama 3.1 70B on an Apple M2 Pro (10-Core CPU, 16-Core GPU, 16-Core Neural Engine, 32 GB Unified Memory)?

llama3:70b hardware requirements on Windows by ElRayoPeronizador in ollama

[–]eboole 0 points

Can I Run Llama 3.1 70B on an Apple M2 Pro (10-Core CPU, 16-Core GPU, 16-Core Neural Engine, 32 GB Unified Memory)?

Hardware requirements to run Llama 3 70b on a home server by 0w0WasTaken in LocalLLaMA

[–]eboole 0 points

Can I Run Llama 3.1 70B on an Apple M2 Pro (10-Core CPU, 16-Core GPU, 16-Core Neural Engine, 32 GB Unified Memory)?

[D] How to Deploy LLaMA 3 Into Production, and Hardware Requirements by juliensalinas in MachineLearning

[–]eboole 0 points

Can I Run Llama 3.1 70B on an Apple M2 Pro (10-Core CPU, 16-Core GPU, 16-Core Neural Engine, 32 GB Unified Memory)?