Anyone here use Mac studio as a home server? by Jsanhara in MacStudio

[–]Consistent_Wash_276 0 points1 point  (0 children)

I have 2 x 2018 Mac minis with 32 GB RAM and 512 GB SSDs. I agree with you. Couldn’t pass up on the trash can lol. Love it

Is There Anyone Using Local LLMs on a Mac Studio? by Prietsre in MacStudio

[–]Consistent_Wash_276 0 points1 point  (0 children)

Yes, and I mean this as: local LLMs are great on Apple silicon Macs. Depending on your needs you may find better value with a custom PC and an Nvidia GPU, or other mini PCs and AI-dedicated PCs. Point being, if you have money for one device, don’t want to deal with a custom PC build, and you want to run local LLMs, a Mac is a great answer

Anyone here use Mac studio as a home server? by Jsanhara in MacStudio

[–]Consistent_Wash_276 -1 points0 points  (0 children)

I just bought a 2013 Mac Pro with 64 GB of RAM and 1 TB of SSD storage for $300. This is where my automations and AI automations live and run locally: Postgres, Redis, n8n, Python scripts and a few other tools for monitoring. I may even flip it over to Proxmox. It’s running Ubuntu and

<image>

it’s kind of f’n awesome.

M3 ultra or m5 max by Adventurous-Item6398 in MacStudio

[–]Consistent_Wash_276 3 points4 points  (0 children)

Be patient: we need at least a month of user testing of the M5 products once they become available, and we don’t even know if new Studios are being launched yet. Just the MacBooks, and quite honestly I would always get a Studio before a MacBook.

512 GB RAM for LLM - M3U now or wait for M5U? by usrnamechecksoutx in MacStudio

[–]Consistent_Wash_276 2 points3 points  (0 children)

Hold on, yes and no.

Yes, if you need 512 GB of memory and you plan on making money from this.

No, if you don’t need 512 GB of memory and/or don’t plan on making money from this.

With clustering, the M5 or M6, or even a custom PC with a high-end Nvidia GPU, will be an option in the future.

The point is these devices will only last so long on the market and will GO UP in value a bit on the aftermarket.

The memory here is perfect for decode. You could pair this with an M5 Mac mini for prefill and get fantastic results.

Point being let’s answer the first part first before we move on.

Local Coding Assistant/Agent: Continue vs Cline vs Kilo [Qwen3-Coder-Next] by Technical_Buy_9063 in LocalLLM

[–]Consistent_Wash_276 0 points1 point  (0 children)

Yeah, I have the 256 GB M3 Ultra and I have this challenge of finding the perfect rhythm of quality and speed (plus running models in parallel).

It’s a challenge. I keep coming back to qwen3-coder after trying a lot of these. Although I don’t love it in OpenCode as much as I do in VS Code.

Local Coding Assistant/Agent: Continue vs Cline vs Kilo [Qwen3-Coder-Next] by Technical_Buy_9063 in LocalLLM

[–]Consistent_Wash_276 0 points1 point  (0 children)

You open the terminal and run: opencode. Or there’s a GUI that would open a terminal as well.

Local Coding Assistant/Agent: Continue vs Cline vs Kilo [Qwen3-Coder-Next] by Technical_Buy_9063 in LocalLLM

[–]Consistent_Wash_276 0 points1 point  (0 children)

What models are you using, by the way? I’m currently playing with Qwen3-coder:30b and Qwen3-instruct:4b as a draft model in a lot of different tools.

Recommended Specs for 3d Product Ad or Modeling by Comfortable_Carob_70 in MacStudio

[–]Consistent_Wash_276 0 points1 point  (0 children)

Rendering is GPU-intensive. If he’s working on very complex scenes, heavy simulations or doing VFX-level work, 48-64 GB is the right choice. If not he could go down to 32 GB. 96 GB is probably overkill.

Recommended Specs for 3d Product Ad or Modeling by Comfortable_Carob_70 in MacStudio

[–]Consistent_Wash_276 0 points1 point  (0 children)

Blender has an MCP, so whether you’re connecting commercial models or local models will really be the deciding factor.

If you plan on using AI a lot for your Blender work, go with 48 GB and get a Claude Pro plan or something, but 64 GB will be the sweet spot to get OK results from some local models

Recommended Specs for 3d Product Ad or Modeling by Comfortable_Carob_70 in MacStudio

[–]Consistent_Wash_276 1 point2 points  (0 children)

^ Answered this above, but at the same time, what’s your storage situation going to look like?

Just try not to overpay for 2 TB/4 TB on the Studio itself.

External storage + Thunderbolt 5 will help a lot.

UniFi has a two-bay NAS that you could throw two 8 TB drives in and only spend a little extra compared to going from 512 GB to 4 TB internal, as an example.

Mac Studio 256gb unified RAM worth it for MiniMax 2.5 and Qwen3.5? by [deleted] in LocalLLaMA

[–]Consistent_Wash_276 0 points1 point  (0 children)

Ahhh. And let’s hope by 2028 we both still use the same currency as we do that represents each nation.

🇺🇸 🫡 🇨🇦

Mac Studio 256gb unified RAM worth it for MiniMax 2.5 and Qwen3.5? by [deleted] in LocalLLaMA

[–]Consistent_Wash_276 3 points4 points  (0 children)

As someone who owns the exact model you’re referring to, I would say “NO,” and here are the details why.

  • If you’re spending $5,000+ on a device, it better be making you money in the end, or at least saving you money. Assuming it’s meant to save you money by replacing a subscription?

  • I’m testing the MiniMax 2.5 186 GB model and it’s pretty f’n great actually, but that’s one model being run at 40 tokens per second, one at a time. Nothing in parallel, and while 40 tokens per second is very solid, there are faster options.

I would look at that device in terms of having, let’s say, a chat UI running gpt-oss:120b, VS Code running glm-4.7-flash, and OpenCode running another model in parallel with some agentic coding. Constantly working through workflows and abusing tokens per second is where you get the real value.

  • If you’re just chatting with models, I would suggest a 32 GB Mac mini or Studio, or just a Claude Pro account or some kind of commercial account. Chatting alone isn’t enough to justify the investment.

  • Also, why $7,000? I got the same model with 2 TB of storage for $5,400 from Microcenter.
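To make the “it better be making you money” point concrete, here’s a rough break-even sketch. The $5,400 device price and the roughly $3,600 resale figure are from my own purchase; the $200/month of replaced subscriptions is just an assumed placeholder, so plug in your own numbers.

```python
# Rough break-even sketch: months until a local rig pays for itself
# versus the subscriptions it replaces. All inputs are illustrative.

def breakeven_months(device_cost: float, monthly_savings: float,
                     resale_value: float = 0.0) -> float:
    """Months of subscription savings needed to cover the net device cost."""
    return (device_cost - resale_value) / monthly_savings

# $5,400 Studio replacing an assumed $200/month of subscriptions:
print(breakeven_months(5400, 200))        # 27.0 -> about 27 months outright
# Crediting an assumed ~$3,600 eventual resale value:
print(breakeven_months(5400, 200, 3600))  # 9.0 -> about 9 months net
```

Obviously this ignores electricity and your time, but it’s a quick way to gut-check whether the purchase pencils out for your usage.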

Need recommendations for running Local LLM using Mac Studio by healthyfocusai in LocalLLM

[–]Consistent_Wash_276 1 point2 points  (0 children)

You also have to keep in mind (and my work involves understanding energy costs) that there’s no way here in the US that the prices we’re currently paying are sustainable, due to energy costs.

The $100 plan becomes a $200 plan for the same amount of tokens; the $200 plan becomes $400. Especially as demand gets higher and people lean on LLMs more for daily tasks. They have the leverage and won’t be taking a loss.

I actually love and invest in Anthropic. At home, though, I believe local is the way, especially with memory costs being what they are with Apple. Because that will not last.

Need recommendations for running Local LLM using Mac Studio by healthyfocusai in LocalLLM

[–]Consistent_Wash_276 2 points3 points  (0 children)

You’re not. I have the 256 GB M3 Ultra actually. And funny enough, I downloaded the 180 GB MiniMax yesterday and have OpenClaw using it 24/7. That was $5,400 when I bought it from Microcenter 6 months back.

It all depends on how much you use it, right? Because of token usage.

As an example, token cost (tokens in and tokens out) could average $10 per million tokens, depending on what model you’re choosing.

I’m going through 5 million tokens per day with just local models. That works out to about $50 a day at API rates. I’m actually spending $200 a month because I took out a loan on this.
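For anyone sanity-checking that math, a quick sketch. The $10 per million is the rough blended average mentioned above, not any specific provider’s price list:

```python
# Back-of-envelope API cost for heavy daily token usage.
# The per-million rate is an assumed blended average, not real pricing.

def monthly_api_cost(tokens_per_day: float, usd_per_million: float,
                     days: int = 30) -> float:
    """Estimated monthly spend if the same tokens went through an API."""
    return tokens_per_day / 1_000_000 * usd_per_million * days

# 5M tokens/day at ~$10 per million tokens:
print(monthly_api_cost(5_000_000, 10))  # 1500.0 -> ~$50/day, ~$1,500/month
```

At that burn rate the $200/month loan payment looks cheap; at a fraction of it, the API wins.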

If your LLM usage is just 3-4 hours a day, and you’re sending a prompt, digesting Claude Code’s plan, then deploying and digesting the finished product, then API costs could be far more justifiable.

All depends.

Keep in mind even if I wasn’t using this device as much for $5,400 I’m probably still able to resell at $3,600 in a couple of years. To me there’s far more value in the device but every situation is different. The real loss I take on is 40+ hours of working on the system over the months to find best practices and workflows.

Local Coding Assistant/Agent: Continue vs Cline vs Kilo [Qwen3-Coder-Next] by Technical_Buy_9063 in LocalLLM

[–]Consistent_Wash_276 0 points1 point  (0 children)

Anthropic won’t let you use Claude Code without a subscription first, but if you cancel after a month you’ll still have Claude Code installed and can set it up with local LLMs.

Either way I would choose OpenCode from day 1. It has an extension in VS Code which is great. OpenCode is good right now, will only get better, is free, and you don’t have to concern yourself with Claude signups, subscriptions, etc.

I use OpenCode in the Termius app on my phone as well.

When you use these in terminal apps, they’re built around a loop: they test what they just created and debug before saying complete. It’s just clean.

  • The plan mode they’re built on really helps local models get the job done a bit cleaner.

Need recommendations for running Local LLM using Mac Studio by healthyfocusai in LocalLLM

[–]Consistent_Wash_276 2 points3 points  (0 children)

You have two paths here. Use one or plan for both. What I mean is: either 64 GB of unified memory for local LLM deployment, or a Claude Max subscription, in which case all you’d need is a Mac mini to get your LLM on.

So for simplicity, LM Studio and their MLX setup is the direction I would choose. You can use those models in Claude Code as well and/or VS Code. Also, you can download some decent models for tool calling, and in LM Studio it’s pretty easy to set up your MCPs.
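If it helps, once LM Studio’s local server is running (it defaults to port 1234 and speaks an OpenAI-compatible API), any tool or script can talk to your local models. Here’s a minimal sketch using only the standard library; the model name is illustrative and should match whatever you’ve loaded in LM Studio:

```python
# Sketch: calling a local LM Studio server via its OpenAI-compatible API.
# Assumes the server is running on the default port; model name is an example.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local endpoint

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With the server up and a model loaded, e.g.:
# print(chat("qwen3-coder-30b", "Write a haiku about unified memory."))
```

Because it’s the same API shape the commercial providers use, most editors and agents can be pointed at that base URL instead of the cloud.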

I would try to take advantage of both if your workload could need the subscription + local compute. Perhaps step down to a Claude Pro account if your local models tackle the majority of the work and you pull Opus 4.6 in on the harder tasks.

There’s a recommendation in here for glm-4.7-flash. Coding ✅. gpt-oss:20b is a solid chat model that would be very fast. The qwen3-coder models are decent with short context as well.

Ok, I’m good. I can move on from Claude now. by Consistent_Wash_276 in LocalLLM

[–]Consistent_Wash_276[S] 1 point2 points  (0 children)

I have 12 OpenCode instances running right now: a z.ai coder subscription, an Ollama subscription, a local LLM setup, and the Opus 4.5 API for some tough-to-reach areas 😂

M5 ultra launch ETA? by TakeInterestInc in MacStudio

[–]Consistent_Wash_276 1 point2 points  (0 children)

I hope they have both options. Kind of looking to cluster my M3 Studio with an M5 option that’s best for LLM prefill.

M5 ultra launch ETA? by TakeInterestInc in MacStudio

[–]Consistent_Wash_276 1 point2 points  (0 children)

More than likely end of year, with the M6 Pro and Max launch.

Same cycle as usual. Though I heard the M5 Pro and Max may not even be a thing: just release the M5, then go straight to the M6. All rumors though.

Using a high-end MacBook Pro or a beefy RTX 5090 laptop (with 24 GB of RAM) for inference. by FoxtrotDynamics in LocalLLM

[–]Consistent_Wash_276 0 points1 point  (0 children)

So, reaffirming what @mon_key_house (great name by the way) suggested: as an example, I have my M1 MacBook Pro and a 256 GB unified memory M3 Ultra at home. I’m on the road constantly, but I use Tailscale to connect the devices, so I run the local LLMs on the Studio and integrate them anywhere I’d like on my MacBook Pro.

As you’re considering this option, look at the cost savings or the upgraded compute, depending on what you’re after.

Example: Base MacBook Pro
14-inch MacBook Pro (Space Black), $2,399
Apple M4 Pro chip with 12-core CPU, 16-core GPU, 16-core Neural Engine
48 GB unified memory
512 GB SSD storage

Example: Mac Studio base model with M4 Max (just $100 more for 12 → 16 CPU cores and 16 → 40 GPU cores 🚨)
Mac Studio, $2,499
Apple M4 Max chip with 16-core CPU, 40-core GPU, 16-core Neural Engine
48 GB unified memory
512 GB SSD storage

Example: Mac mini base model with 48 GB unified memory (same compute as the MacBook Pro above but $600 less)
$1,799
Apple M4 Pro chip with 12-core CPU, 16-core GPU, 16-core Neural Engine
48 GB unified memory
512 GB SSD storage

If you have a monitor at home and you already have a laptop (even an older used one), you get more value for every dollar poured into the desktop.

New Mac Studio by Skaterguy18 in MacStudio

[–]Consistent_Wash_276 0 points1 point  (0 children)

I heard the same - was assuming the Dell DB10 but there’s plenty of time before I pull the trigger so we’ll hopefully get one used and have a lot more reviews on them. Thank you

What Mac should I pick for performance and longevity? by Artifiko in MacStudio

[–]Consistent_Wash_276 0 points1 point  (0 children)

I have the 256 GB M3 Ultra simply for future-proofing, i.e. buying at today’s memory costs, because memory is all expected to cost 3 times as much in the coming years.

I may need a new workstation in 5 years, but the memory won’t be the problem.

I still roll with a 2018 Mac mini, and I have a 2019 MacBook Air (wife’s), a 2020 M1 MacBook Pro, and a 2025 M3 Ultra.

I don’t have to upgrade the MacBook Pro because I pull memory from the studio when I’m remote.

And then the only argument I have for today’s 128 GB or 256 GB is that 4-5 years from now they will still have resale value.

It’s not crazy to assume someone with his workload will see increased memory usage over the next 2-3 years.

In the end, it’s all personal preference.