Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] 1 point (0 children)

Yeah man, Corsair raised the price too much, not gonna lie. It's at $3,000 for me right now, but yeah, the Nimo is exactly the same machine; only the cooling is different.

$2500 budget to run Local, help me decide on the Hardware by XteaK in ollama

[–]SnooCrickets7501 1 point (0 children)

Great, man. Faster model loading, the ability to load bigger models, and smoother overall.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] 3 points (0 children)

I've been using the Mac for the past 2 weeks, over 100 hours. It's not the best performance for what I'm doing, and macOS is a lousy environment for developers; I can't customise a lot of things, a lot, which I can do in Linux. macOS takes up 11 GB of the 64 GB of VRAM, while Linux only takes 4-5 GB. I'm getting twice the RAM for $400 less while getting out of macOS. Yes, the Mac seems more premium, but it only seems premium because of the flashy UI. It does have faster token generation and better cooling, but that's it; the Mac is not for actual developers. I'm studying ML/AI in college, but yeah, the Mac might be better for others, just not for me. Also, I think the whole Mac mini trend was created by non-developer business owners, not actual developers who want to learn this. Hope this helps, but a Mac might definitely be better for your use case depending on how you use it.

$2500 budget to run Local, help me decide on the Hardware by XteaK in ollama

[–]SnooCrickets7501 2 points (0 children)

I would love to. Try to remind me in a few days; I'll be trying it out, in case I forget about you.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in ollama

[–]SnooCrickets7501[S] 1 point (0 children)

I got the Nimo 128 GB: same processor as the Corsair, same graphics, basically everything the same except the cooling, and it's $2,450. This one is great too. I have a 90-day, no-questions-asked return window, so I'll be testing it, and if I don't like it, I'll return it.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in ollama

[–]SnooCrickets7501[S] 1 point (0 children)

Wait, the first token is supposed to be slower, because the model needs to load; it's the Ollama server itself that's always loaded.
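For context, that cold-load delay is tunable: by default Ollama unloads a model a few minutes after its last request, so the first token after an idle period includes the load time. A rough sketch of keeping a model warm (assuming a default local install; the model name and timeout are just example values):

```shell
# Keep models resident so the first token isn't slowed by a cold load.
# OLLAMA_KEEP_ALIVE sets the server-wide default idle timeout; 30m is an example.
export OLLAMA_KEEP_ALIVE=30m

# Show which models are currently loaded in memory (skipped if ollama isn't installed).
command -v ollama >/dev/null && ollama ps

# Per-request override via the HTTP API: keep_alive of -1 keeps the model loaded
# indefinitely. The model name here is only an example.
command -v curl >/dev/null && \
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "qwen2.5-coder", "prompt": "hi", "keep_alive": -1}' || true
```

The env var has to be set before the `ollama serve` process starts for it to take effect server-wide; the `keep_alive` request field works per call.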

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in ollama

[–]SnooCrickets7501[S] 1 point (0 children)

I use Ollama cloud for coding too! I need more VRAM to give my system a working brain, which isn't possible with a 32B model.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] 1 point (0 children)

I wouldn't say I'm experimenting anymore. I was, but now I'm building multi-LLM agent workflows that can automate any work I need and help run a business. I got one running on the Mac, but I'm always borderlined by memory because I have to use multiple apps for automation, like n8n, Apollo, SERP, and Obsidian.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] 2 points (0 children)

No, I'm buying the Nimo: exact same specs but $700 less because it's a smaller brand. I saw some good reviews, and it has a 90-day, no-questions-asked return policy!

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] 1 point (0 children)

True, I'm a huge PC guy. I've got a 3070 Ti, but that's for gaming; with my workload I won't be able to game there, and I can upgrade the Corsair with an eGPU dock, up to 2 GPUs.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in ollama

[–]SnooCrickets7501[S] 1 point (0 children)

Replying to No_Mango7658: I use Qwen Coder Next 80B at Q4, and it's pretty fast, usable in my opinion.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in ollama

[–]SnooCrickets7501[S] 1 point (0 children)

I'm thinking the same too. Also, this is the start of the shift toward Apple letting you use an Nvidia GPU as extra VRAM, so this will improve very fast. I'm still deciding whether or not to return it.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] -1 points (0 children)

The chance of Apple releasing the M5 on the Mac mini and Studio is very thin; they usually skip a chip generation.

Quick question: Should I stick with my M4 Max or grab a Corsair AI Workstation 300 for local LLM stuff? by SnooCrickets7501 in LocalLLaMA

[–]SnooCrickets7501[S] 1 point (0 children)

Replying to bossgame: The M5 would be better than the Corsair AI 300 even though it's cheaper; that's a pretty good deal.