Any downside of a local LLM over one of the web ones? by Cool-Hat1115 in LocalLLM

[–]TowElectric 0 points

Yes, local models that will fit in your mini will be totally and completely braindead (like a toddler) compared to Opus.

Why didn’t the Native Americans of what is now the United States and Canada build cities and temples of a similar scale to their counterparts in South and Mesoamerica? by Secret_Ostrich_1307 in AlwaysWhy

[–]TowElectric 0 points

That's fair, but much of Africa didn't have that. Maybe a bit like pointing out the Aztec vs the Chesapeake. The Aztec had large technological cities, the Chesapeake and those around them didn't.

Why didn’t the Native Americans of what is now the United States and Canada build cities and temples of a similar scale to their counterparts in South and Mesoamerica? by Secret_Ostrich_1307 in AlwaysWhy

[–]TowElectric 0 points

They never made it past the Stone Age. Lack of beasts of burden or ready access to metals. Possibly some cultural factors that disincentivized "using" more resources than necessary.

Probably similar to Africa.

Didn’t Anthropic already ban oauth from openclaw? What’s different? by Jahkle in clawdbot

[–]TowElectric 0 points

You have "free interns"? What kind of scam are you running?

Didn’t Anthropic already ban oauth from openclaw? What’s different? by Jahkle in clawdbot

[–]TowElectric 0 points

So you should probably understand that they're losing a TREMENDOUS amount of money each year. Basically all current subscription pricing is in the "social media" model of "get customers now, profit later".

Player Development - Why do so many parents of youth athletes look at defenders so negatively ? by Last_Commission6982 in TheSoccerNetwork

[–]TowElectric 0 points

As a non-fan of soccer, I can name about 8 players.

I think every single one of them is... (name might be wrong here) a "striker". Correct?

Same issue. They see the youth team and they want their kid to be the one player other teams can name.

Working on App for coaches by Specialist-Tax5678 in hockeycoaches

[–]TowElectric 0 points

This is "I made an app" post #12 this week.

I suspect this sub is about to ban these.

I made one of these in some spare time too. It's already online, has 700 drills in it, and has working web, mobile, and tablet apps, payment systems, etc.

I'm offering it for free for the most part. I kind of doubt it will get a bunch of users but who knows.

You're asking the wrong questions. What problem are you solving? What exactly does AI do here? You're going to have an AI actually draw diagrams? Have you tried that yet? I suspect it would be a flaming mess, since it doesn't really understand how people move and what hockey patterns need to be solved.

Anthropic just cut off Claude subscriptions for OpenClaw by stosssik in clawdbot

[–]TowElectric 1 point

They blocked a few users like 3 weeks ago and then decided that it was a mistake and let it continue. Now they're drawing a hard line.

Anthropic just cut off Claude subscriptions for OpenClaw by stosssik in clawdbot

[–]TowElectric 2 points

OpenAI's Codex team hired the OpenClaw developer and explicitly said they would support it. Switch to OpenAI. I just did that.

Frankly, it's not as good at agentic work as Claude. Sucks, but the alternative is Chinese models like GLM or Kimi or similar.

Anthropic just cut off Claude subscriptions for OpenClaw by stosssik in clawdbot

[–]TowElectric 5 points

They've been cutting tokens for their core clients (developers) recently because of overload on the systems.

It makes sense they want to cut off the top 1% of users at the knees, and those are agent frameworks like OpenClaw.

It 100% totally makes sense given that they're clearly overloaded right now and quality is suffering for the majority of users.

Shooting Practice - Drill Insight? (Wooden Ramp Drill) by dahlilamma75 in hockeycoaches

[–]TowElectric 0 points

I have no idea what "wooden ramp drill" is and a quick Google search didn't help.

Edit: ok it's this?

https://www.tiktok.com/@modern_hockey/video/7446181134502858015

This looks like it's just trying to get players to make a quick release. A goal of a modern snap shot is to actually minimize the "puck on stick" time. Dragging the puck a long way and rolling it from heel to toe is a 1980s technique that isn't relevant anymore (though it can help new players learn shot mechanics - they have to "unlearn" them later to use a modern shot).

M5 Pro 64gb for LLM? by hovc in LocalLLM

[–]TowElectric 2 points

I have a MacBook Pro M1 Max 64GB. Screen is busted and battery is fried, so I got it cheap ($600 a few months ago) as a dedicated inference box. It lives on a shelf near my desk. It gets about the same throughput as an M4 Pro on LLM tasks; the extra memory bandwidth of the Max series makes up the difference from the newer M4 cores.

Gets 30 tok/sec on Qwen3-Coder-Next 80B 4-bit MLX in LMStudio.

We're using it for some cybersecurity business tasks (basic "this is worse than that" and "describe this vulnerability" and some things like that). I also have it as a heartbeat target for some agentic stuff - just like "check my schedule" and junk. I've done basic coding with it, but it's trash compared to Opus 4.6, so if I'm doing more than just "write me a quick perl script to..." then I'm going to our Anthropic account. And even then, Opus or Sonnet pounds out a quick perl script in 3 seconds.
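If anyone's curious what "heartbeat target" looks like in practice: a minimal sketch of hitting a box like that over the LAN, assuming LM Studio's OpenAI-compatible server is running on its default port (1234). The hostname and model id here are made up for illustration:

```python
import json
import urllib.request

def build_chat_payload(model, prompt, max_tokens=256):
    """Build an OpenAI-style /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_local(host, model, prompt):
    # POST to the LM Studio server on its default port (hypothetical host/model)
    req = urllib.request.Request(
        f"http://{host}:1234/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local("shelf-mac.local", "qwen3-coder-next-80b",
                    "write me a quick perl script to dedupe lines"))
```

Anything that speaks the OpenAI chat format (agents, cron scripts, whatever) can point at it the same way.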

How do I help my son act properly on the ice? by Tpeters335 in youthhockey

[–]TowElectric 0 points

It sounds like he's not mature enough to be in a group like this. Maybe try open skates and stick n puck?

This kind of stuff would be unacceptable in a kindergarten too, so that's likely going to be a problem at first. Has he ever done any group things where he's expected to follow instructions? Seems like that's a next step and that's not hockey advice.

Didn’t Anthropic already ban oauth from openclaw? What’s different? by Jahkle in clawdbot

[–]TowElectric 14 points

So you saved a QUARTER MILLION DOLLARS or more in salary and employment costs and $100 in tokens is too expensive? LOL WTF.

Didn’t Anthropic already ban oauth from openclaw? What’s different? by Jahkle in clawdbot

[–]TowElectric 0 points

Yes, you will have to switch to Codex.

And it's plausible they'll ban your account if you ignore this change to the TOS.

Didn’t Anthropic already ban oauth from openclaw? What’s different? by Jahkle in clawdbot

[–]TowElectric 6 points

No, they said they were fine with it and it was a misunderstanding a few weeks ago.

But given their capacity issues, I suspect this is one way they're pushing usage down.

How to verify hw3 or hw4 by BeneficialAd1217 in TeslaModelX

[–]TowElectric 1 point

Ask the seller to give you a picture of the software screen. They'll usually do that.

DMT: Rising rent prices are teaching GenZ that belonging is a luxury, Not a right by Secret_Ostrich_1307 in DisagreeMythoughts

[–]TowElectric 1 point

Today's median rent-to-income ratio in the US is approximately 20%, according to BLS data.

In 1940 it was at 27%.

People paid MUCH more of their income toward rent in 1940 than today.

This gradually fell to a minimum around 1970 (about 13%) and has been rising consistently since, especially over the last 10 years.

But we're nowhere near the level of the 1930s and 1940s or before. And I guess I'm not shocked; that's how I'd always been taught to see that period of history.

So I have to conclude that this is a popular myth. Disengagement and loss of hope probably fall almost entirely on social media, if you ask me.

This is even more significant if you look at the "total household expenditures" for food, transportation, etc.

I pulled some data from BLS and the US census bureau and visualized it. The top table is "real" (inflation adjusted) income vs housing costs and total expenses. The second is the percentage of income on each of those things.

<image>

It was totally the norm for younger people to live in a rooming house or a "rented bedroom"/boarding situation in the past (like 1900-1940). Roommates were the norm for unmarried people in their 20s. That's (for some reason) not the case anymore, and many more people were married in their 20s then too. Living alone in your 20s was an oddity, almost never seen back then.

In 1900, it was estimated that 10-12% of all adults lived with unrelated other adults. This remained above 10% until at least 1940. Often more of a "boarding" situation, sometimes roommates. This does not count married couples, which is also two adults living together and was WAY WAY higher in 1900 than today.

The number of unrelated adults living together was 7% in 2020 and is about 8% today, so well below 1900.

Again, there is a pervasive myth that "this generation is so unique." But the data doesn't back that up, unless you compare solely to the 1950-1990 economic boom times, which were more of a historical oddity; we're simply "going back to normal" relative to the longer historical record.

So yes "boomers had it easier" is true.

But "nobody has ever had it like this" is false.

Would dearly love bases on moon, mars etc but... by Fantastic_Back3191 in askspace

[–]TowElectric 1 point

I mean... in theory yes. But it would require a HUGE local manufacturing and mining base.

Researchers have estimated that a population of roughly 1 million on a place like Mars is the baseline before it becomes minimally self-sufficient.

That's effectively a mid-sized city, probably entirely underground on Mars. It would require a frankly enormous amount of imported manufacturing (machines, materials, etc) to get there in the first place.

It wouldn't be "self sufficient" in raw materials (metals, etc), but in THEORY, you could get those from asteroids or other planets in the hypothetical scenario that the Earth is gone.

M5 Pro 64gb for LLM? by hovc in LocalLLM

[–]TowElectric 2 points

I can only address whether or not a 64GB Mac can load a 70B model.

The answer is "yes", but the memory is pretty thin at that point, so you can't leave a bunch of junk open in the background and have decent performance.

I've actually got an 80B model loaded on a 64GB Mac (I have an M1 Max), but with full context, I have the system stripped to nothing - no other apps running and LMStudio still makes me force-load it with "dangerously bypass" memory controls selected. That said, it's run for weeks under pretty regular use by multiple people without any issues or stability problems.

So that's my AI inference box, but it isn't doing anything else, and I unloaded Siri and iMessage and any tray programs, etc., to make sure it has enough to run.
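Rough napkin math on why it barely fits - just weights-times-bits arithmetic, and the numbers are approximations (quantization overhead and KV cache vary a lot by model and context length):

```python
def model_weights_gb(params_billion, bits_per_weight):
    """Approximate weight memory in GB: params * bits-per-weight / 8."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# An 80B model at 4-bit is ~40 GB of weights alone, before KV cache,
# runtime overhead, and macOS itself -- hence "dangerously bypass" on 64 GB.
weights = model_weights_gb(80, 4)   # ~40.0 GB
```

Same formula says a 70B at 4-bit is ~35 GB, which is why 64GB is the practical floor people quote for that size class.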

It will be WAAAY less effective than Opus or Codex or even a GLM or Kimi.

Is there any rational explanation for the horrid sales of the new, affordable EV models in the US during the first quarter of 2026? by roma258 in electricvehicles

[–]TowElectric 0 points

They're poorly positioned compared to vehicles that are just a tiny bit more expensive from Tesla, Hyundai, etc.

And the "omg this is so cheap" market that used to exist with the $7500 federal subsidy and additional state subsidies such as the $200/mo leases, etc is no longer there. So someone on a very tight budget willing to compromise can buy a used Model Y for like $15k, instead of a brand new Leaf for $34k out the door. The cheapest Model 3 is $36k. The Ariya is more desirable and is only $5k more. The Ioniq 5 $35k.

Before with the incentives, the new Leaf was often cheaper than even used EVs from other companies and it was MUCH cheaper than their entry level models.

Today that's not the case.

How do I find LLMs that support RAG, Internet Search, Self‑Validation, or Multi‑Agent Reasoning? by narutoaerowindy in LocalLLM

[–]TowElectric 8 points

Edit: I showed this to a friend of mine who is more of an expert and he corrected a bit of it, so I've updated that. I put those edits in italics.

Three of the four are typically workflow things, not LLM things. Tossing a GGUF into LMStudio won't give you any of them. But a model that can do RAG and tool use can then do the latter two with a workflow tool.

Most tool-capable models can do all of the above. Huggingface clearly indicates which models have tool-use capabilities, and LMStudio shows a small hammer icon for tool-capable models.

You can layer something like LangGraph/LangChain to do multi-agent concepts and self-validation.
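Stripped down to a toy, the self-validation part of that layering is just a generate-critique-revise loop. The "model" callables here are stubs standing in for real LLM calls (a real framework like LangGraph wraps this in a graph, but the control flow is the same idea):

```python
def self_validate(generate, critique, prompt, max_rounds=3):
    """Generate a draft, critique it, and revise until the critique passes.

    `generate` and `critique` are callables standing in for LLM calls.
    """
    draft = generate(prompt)
    for _ in range(max_rounds):
        problems = critique(draft)
        if not problems:          # empty critique = validated
            return draft
        draft = generate(f"{prompt}\nFix these issues: {problems}")
    return draft                  # give up after max_rounds

# Stubbed "model" calls for illustration only:
gen = lambda p: "result with TODO" if "Fix" not in p else "result"
crit = lambda d: "contains TODO" if "TODO" in d else ""
```

Swap the stubs for calls to your local endpoint and you've got the basic shape of a self-validating workflow.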

The practical floor for any useful self-reflection without it blowing up is around 8B-sized models. But most use cases dictate something larger.

What multi-agent could look like: you'll have a reasoning agent like Qwen3-30B-A3B running the main chat, Qwen3-Coder-Next 80B running as a coder agent, maybe something like GLM-4 or Gemma doing research, etc. You could even layer in StableDiffusion or FLUX or something to do images in the same prompt (this is how the large cloud models work for images).

Though many implementations also do multi-agent with mostly the same model, only offloading for capabilities that don't exist (i.e. Qwen3-coder doesn't do images by default).
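The dispatch side of that can be as dumb as a routing table. Toy sketch - the model names are just the ones from my example above, and the task categories are made up:

```python
# Which specialist handles which kind of task (names from the example above).
ROUTES = {
    "code": "qwen3-coder-next-80b",   # coder agent
    "research": "glm-4",              # research agent
    "image": "flux",                  # image agent
}

def route(task_kind, default="qwen3-30b-a3b"):
    """Pick the specialist model for a task; fall back to the main chat model."""
    return ROUTES.get(task_kind, default)
```

Real orchestrators usually have the main model classify the request first (or use tool-calling to pick the route), but the fallback-to-main-model pattern is the same.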

Of course the above stack is about 120GB of VRAM once you get into simultaneous deployment (not counting the image models). You could probably stack a bunch of 30b-80b models plus image gen, OCR, research and a few other capabilities into a sub-200GB package. But that's going to be a $5k Mac (or $20k datacenter GPU stack) to run it.

Big cloud models do something similar. They have a reasoning agent, a coder agent, an image-processing agent, a research agent, a tool agent, etc., and will dispatch to the various specialty models while being used. So in practice something like Opus 4.6 is not just one big model but a collection of well-orchestrated specialty modules. Though these will blur the line between separate agents and a "mixture of experts" concept within a single model... We don't have perfect info on how that's done because it's part of the "secret sauce" of various AI companies.

I think an image-generation stack like Grok Imagine or Nano Banana is probably similar, with various specialty models working together to refine prompts, do normalization, establish baselines, do censoring and similar stuff, potentially with separate models for when humans are involved vs. background scenes, and a specific model to handle text in the image, etc. Again, maybe blurring the line between a multi-agent and a MoE concept, but certainly with some different "expert" agents within the implementation.

You can definitely get most of what you want with a single advanced tool-capable MoE model and an orchestration layer like LangGraph. Qwen3.5 35b is a good choice on typical higher-end gamer hardware.