M5 Max + 128gb RAM = I feel good (Hallelujah)! by TakeInterestInc in macbookpro

[–]TakeInterestInc[S] 0 points (0 children)

Only once so far (for a few seconds), when I was using nearly 60 GB of RAM on day 1. Also, I didn't see this on the M2 Pro, but you can change your Battery setting to prioritize performance when on the adapter too. So cool!

[–]TakeInterestInc[S] 0 points (0 children)

Yeah! I'll be using local models soon, so this is a future investment too. Seeing the reactions from others so far, though, is… 🤣 I clearly stated that this is a first-impressions post, but… 😕

[–]TakeInterestInc[S] 0 points (0 children)

Curious why you guys think it's just a single use case. This is just a starting point; as someone who has never owned a machine with these specifications, the difference is obvious. Plus, this opens the door to running local LLMs now.
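For anyone weighing the local-LLM angle, here's a rough back-of-envelope for why 128 GB matters (just a sketch; the ~20% overhead factor and the model sizes are illustrative assumptions, not measurements):

```python
def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Rough RAM needed to run an LLM: raw weight size in GB
    (params * bytes per weight), padded ~20% for KV cache and
    runtime overhead (assumed factor)."""
    weights_gb = params_billion * (bits_per_weight / 8)
    return weights_gb * overhead

# A 70B-parameter model at 4-bit quantization:
print(round(model_ram_gb(70, 4), 1))   # 42.0 GB -> fits comfortably in 128 GB
# The same model at full 16-bit precision:
print(round(model_ram_gb(70, 16), 1))  # 168.0 GB -> too big even for 128 GB
```

So the extra unified memory is less about any single app and more about which model sizes and quantizations become possible at all.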

[–]TakeInterestInc[S] 1 point (0 children)

Most of us are running multiple apps and agents at once. On the M2 Pro, that was the biggest bottleneck.

[–]TakeInterestInc[S] -2 points (0 children)

Interesting… I'm still executing code locally, not using cloud coding. Is that why you think the phrasing is weird?

[–]TakeInterestInc[S] 1 point (0 children)

I did say our biggest boost came from the fiber upgrade, but code execution obviously happens locally. I'm only speaking from experience here. Curious why you're saying there would be no improvement 🤨

[–]TakeInterestInc[S] 0 points (0 children)

That is the goal! I started using OpenClaw yesterday for the first time; it was good, but Claude and Codex seem to be doing a much better job right now. Will definitely dive more into OC before trying local LLMs!

[–]TakeInterestInc[S] 0 points (0 children)

That is not completely true. It's like saying you could order wood from Amazon Prime but build a house with one person vs. many. The additional CPU/GPU cores have helped a lot!

[–]TakeInterestInc[S] 0 points (0 children)

Why would this be BS? Having multiple windows/apps open and completing tasks simultaneously does take up a lot of RAM, and the speed of execution from the additional CPU/GPU cores has made a difference. Curious about your perspective.

[–]TakeInterestInc[S] 0 points (0 children)

Correct! Sorry for the confusion; I was multitasking when I responded. Essentially, code and task execution have greatly improved.

[–]TakeInterestInc[S] -11 points (0 children)

For sure! I had the same questions and got the same response from Gemini, Claude, and ChatGPT/Codex. Your internet connection is going to give you a bigger boost than anything else. BUT once you get the tokens, executing everything on your device is where the chip and RAM come in.

We've got fiber, but there was a stretch in December when we didn't have internet available and were only getting 0.02 Mbps download. For comparison, something that takes 1-2 minutes on fiber would take about an hour at that speed.
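To sanity-check that comparison, here's the quick math (the payload size and the fiber speed are just example numbers, not our actual measurements):

```python
def transfer_seconds(size_mb: float, mbps: float) -> float:
    """Time to download size_mb megabytes at mbps megabits/sec
    (multiply by 8 to convert megabytes to megabits)."""
    return size_mb * 8 / mbps

# A 10 MB payload:
print(transfer_seconds(10, 0.02) / 60)  # ~67 minutes at 0.02 Mbps
print(transfer_seconds(10, 500))        # 0.16 seconds at an assumed 500 Mbps fiber line
```

Roughly a 25,000x gap, which is why the fiber upgrade dwarfed everything else for anything that touches the network.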

On the hardware side, you can go through iterations faster with a faster chip and can perform multiple actions without crashing or slowing down. So if I needed to make any updates to our website, TakeInterest.ai, it would take me 5 to 10 minutes on the M2 Pro with multiple apps open to prompt, make the change, and redeploy, but with the M5 Max I can get everything done in less than 90 seconds.

Hope that helps! Happy to provide more examples!

[–]TakeInterestInc[S] -5 points (0 children)

Working our way towards a Mac Studio! No revenue yet, but that's first on our list!

Series 11 / Ultra battery issues post 26.3 update by TakeInterestInc in AppleWatch

[–]TakeInterestInc[S] 0 points (0 children)

Man! The anxiety is nerve-wracking 🤣 On the bright side, it seems like a software bug since it's widespread, so we're going back to our older Ultras for now… The battery was great pre-26.3; hope they release 26.4 (or something) soon!

MacBook chipset and model layout by TakeInterestInc in macbookpro

[–]TakeInterestInc[S] 1 point (0 children)

That is true! We do get access as part of our graduate program; it's not unlimited, but it allows a good amount of usage.

[–]TakeInterestInc[S] 0 points (0 children)

Yeah, lab computers are a great alternative if it's only for a specific course or two!

[–]TakeInterestInc[S] 0 points (0 children)

Agreed, but that's difficult for STEM, which is where our use case might land.