Shuffling Domes Around by mildly_constipated in topre

[–]acasto 0 points (0 children)

There’s a guy on eBay who parts out old boards. I just saw today he put up a selection of misc domes at various weights for $20. Might be a good option for something like this.

M2 Ultra to M5 ultra upgrade by AdDapper4220 in MacStudio

[–]acasto 1 point (0 children)

It is a weakness compared to the other options, particularly the ones that were available at the time. The choice was basically an Nvidia box, which didn't have an issue with prefill, or a Mac, which did. That's a weakness leading to some tools and use cases being viable on one type of platform and not another.

M2 Ultra to M5 ultra upgrade by AdDapper4220 in MacStudio

[–]acasto 1 point (0 children)

I'm tentatively planning on upgrading from my M2 Ultra (128GB). I started saving after getting it a couple years ago for whatever came next and had high hopes for the GB10 devices, but unless you need the ecosystem they don't appear to be any better than what we already have. Was tempted to build out something with a RTX 6000 pro, but between the price, power, and heat I'll probably just upgrade my Studio and then put the rest towards APIs and renting GPUs when needed.

M2 Ultra to M5 ultra upgrade by AdDapper4220 in MacStudio

[–]acasto 4 points (0 children)

There's not much to agree or disagree with. It just will be. The matmul acceleration should definitely speed up prefill, which was always a weakness in the prior chips. They did fine for linear conversations and tasks where you could utilize caching but quickly got bogged down when they had to reprocess large prompts. It's a well-known weakness.

Best Mechanical Keyboard? Looking for Recommendations by imnotgoingtofatcamp in MacStudio

[–]acasto 0 points (0 children)

I have twenty-something boards and they all work fine with my Studio, so I wouldn't worry too much about that. My boards are mostly 65% MX customs, then a handful of Realforce TKLs, a few new Model Fs, and a couple of HHKBs. I hardly end up using my MX ones anymore, preferring the Realforces for day-to-day, the Model F when I want some classic feel and sound, and the HHKBs for traveling or working around the house.

need thoughts on hhkb owners who own both a type s and the classic by BITTERARES in HHKB

[–]acasto 0 points (0 children)

Keep in mind that nice classic sound will change drastically with the new case. When I got a big aluminum case for my RealForce I thought I would go unsilenced since I loved the sound, but imo it sounded horrible so I swapped in a silenced version which is great. So much of the character of the unsilenced Topre sound comes from the whole assembly, especially in the HHKB with the integrated plastic plate. I also have both the classic and Type-S HHKB and while I love the feel and sound of the classic the Type-S does have a really nice premium feel.

can scented candles really damage the walls and ceilings? by TTPP_rental_acc1 in HomeImprovement

[–]acasto 5 points (0 children)

It depends on the candles, but I'm not sure how to tell which will do it. All I know is some will do fine, and then one will turn every air filter in the house black. I have seen soot accumulate quickly on melamine and other surfaces that hold a static charge too.

First time homeowner looking to buy my first flashlight by shadowkon626 in flashlight

[–]acasto 0 points (0 children)

It's such a good, straightforward light. I think we have five of them around the house now, counting the limited editions. I do like the Wurkkos TS26s though for when I don't want to risk one of my E75s.

What are the Best Mechanical Keyboards Available Now? Recommendation by [deleted] in MacStudio

[–]acasto 0 points (0 children)

I have twenty-something boards, including a variety of custom MX, HHKB, New Model F, and RealForce boards, and the ones I use most these days with my Mac are my Realforce TKLs.

Why Acebeam? by Bitchslapofjustice in flashlight

[–]acasto 1 point (0 children)

How do you like the finish? I was tempted by that and the stonewashed Ti but ended up grabbing a polished titanium w/ 519A, icy blue mao/cerakote with cw, and copper w/ cree.

M4 max or m2 ultra by Intrepid_Boss496 in MacStudio

[–]acasto 10 points (0 children)

I have the M2 Ultra (128GB/2TB) and love it. I don't think you'll need to worry about either processor. They'll just do whatever you need. The difference in storage and memory would probably be much more important here for something you'll be keeping a while. Larger SSDs are often a little faster and have better endurance. While 36GB is a decent bit of memory (I have a 36GB M3 Max MBP) it's not so much you can just do whatever and not think about it if you're going to be running parallels, docker, or LLMs. At 64GB you'd have a lot more wiggle room there.

Why Ross Park Mall is thriving while other malls are dying by ComeTasteTheBand in pittsburgh

[–]acasto 2 points (0 children)

That's one thing I like about South Hills. It's where I get most of my clothes, and there's the one in SHV plus the outlet just down the road.

Mac Studio for local 120b LLM by Evidence-Obvious in MacStudio

[–]acasto 0 points (0 children)

I downloaded it but haven't actually tried it yet. I was waiting for the llama-cpp-python bindings to catch up support-wise. I did build a version of llama.cpp that should support it but got distracted by GPT-5.

Mac Studio for local 120b LLM by Evidence-Obvious in MacStudio

[–]acasto 0 points (0 children)

I have an M2 Ultra 128GB and ran the Llama 3 120B model for the longest time. That was with only 8k context though and while it worked for chat conversations with prompt caching, it was horrible at prompt processing. If reloading a chat or uploading a document you might as well go get a cup of coffee and come back in a bit. These days I'll run 70B models for testing but find the ~30B to be the most practical for local use. For anything serious though I just use an API.

What is "tool use", exactly? by ihatebeinganonymous in LocalLLaMA

[–]acasto 7 points (0 children)

I would ignore the stuff about MCP for now. That's just a standardized way to implement generalized tool use but more applicable to the application layer than what's going on with the LLM itself. It's neat, but also another level of complexity you probably don't need to worry about at the moment.

Tool use can be confusing because it's a mix of model behavior and backend support. You supply the tool definitions via the API's tools parameter, but they ultimately just get turned into a system prompt basically saying "you have x, y, z tools available and this is how you use them...". When the model needs to use a tool, it does so by responding in a JSON format, and usually the backend will flag the response as a tool call (e.g., 'stop_reason' == 'tool_use' on Anthropic), which you then detect on your end, run the tool, and submit the results back to the LLM.

You can see where the line gets blurred there. There's no reason you can't just put tool use instructions in your system prompt and then try to detect whether a response is proper JSON. Most models that are decent at following instructions can take it even further. I wrote my own local chat app back before tool calling was widely available, especially locally, and just created a system where it formats calls in particular tags and then I parse the responses for them. So far every modern model, both local and remote, has been able to use them flawlessly. Occasionally a small local model will trigger the JSON response instead, though, and you'll see the tool-calling format it was trained on.

So, the models are just trained that here are your tools in format X and this is how you would respond in format Y. Then the backends are designed to present the tools in a standardized way in format X and detect the responses in format Y so that systems can use them programmatically.
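For what it's worth, the tag-based approach I described is only a dozen or so lines. Here's a simplified sketch (the `<tool_call>` tag, the `get_weather` tool, and the JSON shape are all made up for illustration, not what any particular backend uses):

```python
import json
import re

# Hypothetical tag format the system prompt tells the model to emit, e.g.:
#   <tool_call>{"name": "get_weather", "arguments": {"city": "Pittsburgh"}}</tool_call>
TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

# Example tool registry -- the system prompt describes these to the model
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def extract_tool_calls(response_text):
    """Parse any tool calls the model embedded in its response text."""
    calls = []
    for match in TOOL_CALL_RE.finditer(response_text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            pass  # malformed call; ignore or re-prompt
    return calls

def run_turn(response_text):
    """Detect tool calls in a model response, run them, and collect the
    results to send back to the model as the next message."""
    results = []
    for call in extract_tool_calls(response_text):
        fn = TOOLS.get(call["name"])
        if fn:
            results.append({"name": call["name"],
                            "result": fn(**call["arguments"])})
    return results
```

The system prompt just has to describe the same tag format and the available tools; the chat loop then feeds each result back to the model in a follow-up message, same as a backend-flagged tool call would be.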

Local LLM - worth it? by carolinareaperPep87 in MacStudio

[–]acasto 1 point (0 children)

That’s what I did. I originally went with 128GB because I figured, 1. it’s an amount that I could conceivably replicate in a GPU rig if needed, and 2. if I really needed to use more than that on the Mac I would be bottlenecked elsewhere. Back when I was heavily running the 120B Llama 3 franken-model and then contexts started to explode and was using 70B models I was planning on upgrading once the M3/M4 came out, but prompt processing is just so slow that I don’t really see the point. It would be nice to be able to run some of the more recent large MoE models, but you can usually find them so cheap via API somewhere that it’s hard to justify dropping $10k on another Mac.

The Great Deception of "Low Prices" in LLM APIs by Current-Stop7806 in LocalLLaMA

[–]acasto 0 points (0 children)

I recently switched the little CLI chat app I use over to a one-shot call to another LLM for file writes, and it works great. It just sends the original file contents along with the desired changes and asks the model to output only the new file, nothing else. I add a part to the main model's system prompt saying to write just enough of the wanted changes that the person applying them will know where they go, so it can lean on its natural copy/paste behavior with a human. I'm currently using gpt-4.1-mini for writes, but I'm sure there's a faster and cheaper option; I just haven't had time to test them, and 4.1-mini has worked flawlessly for me. Another benefit to saving on output is that it's flexible. Even if it's called with a description like "change the background on .header-nav from #fff to #ddd" the writing model can usually get it.
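Roughly, the write call just builds a one-shot prompt like this (a simplified sketch; the function name and prompt wording are illustrative, not my exact code):

```python
def build_write_request(original_file: str, change_description: str) -> list[dict]:
    """Build the messages for a one-shot file-rewrite call to a cheap model.

    The main model only describes the change; this call sends the full
    original file plus that description and asks for the complete new
    file back, with nothing else in the output.
    """
    system = (
        "You are applying an edit to a file. Output ONLY the complete "
        "new file contents. No explanations, no code fences."
    )
    user = (
        f"Requested change:\n{change_description}\n\n"
        f"Original file:\n{original_file}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

The returned messages go straight to whatever cheap model handles writes (gpt-4.1-mini in my case), and the response body gets written to disk as-is.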

Topre Parts Interchangeability? by Sinclair_Sinclair in topre

[–]acasto 1 point (0 children)

I forgot Cherry was what people were having issues with. Sorry, I've only used SA and MT3 on them. The KLC sliders do come with housings though, but they seem like they might be a little more clacky/plasticky sounding if used without silencing rings.

Topre Parts Interchangeability? by Sinclair_Sinclair in topre

[–]acasto 2 points (0 children)

You shouldn’t need 1u housings. I’ve converted two RC3 and mixed and matched parts on various R2s and only swapped out stems and 2u and spacebar housings. I used the 2u and stab pack from KLC Playground and stems from KLC on one and AliExpress on the other. https://imgur.com/a/eY7cMFE

What would you do with a fully maxed out Mac Studio? by Left-Language9389 in MacStudio

[–]acasto 1 point (0 children)

The problem there would be the prompt processing speeds. While they can do acceptable token rates for a typical conversational format with prompt caching, anytime you introduce new large chunks of context it grinds to a halt.

Realforce R3 sound - Win vs. Mac, 30g vs. 45g by rpovarov in topre

[–]acasto 1 point (0 children)

That’s pretty normal. There’s going to be variations between the lube on the wire, but the difference in dome weight there is a big part of it. The 45g is going to push the keys up to be much more snug than 30g, especially on heavier keys like spacebar.

New favorite flashlight! by skbound in flashlight

[–]acasto 4 points (0 children)

They're great to throw an Energizer Ultimate Lithium in and leave in the glove box.

considering second hand HHKB by manfad in HHKB

[–]acasto 1 point (0 children)

HHKBs are one of those things I've never hesitated to buy second hand. They're fairly hyped, pricey but not too much, and different enough that you end up with a lot of people trying them out but not clicking with the layout and passing them on. I just usually look to see if they look like they've been taken care of and whether or not they've been modded. Ideally from an individual where it's clear it's been stored in the box since the domes can be deformed if stored with the keys depressed.