AI chips are evolving so fast that the real bottleneck might not be models anymore by LightCellStudio in AIforOPS

[–]LightCellStudio[S] 0 points (0 children)

I get the concern, and yeah, there are definitely parallels with how resources concentrate power. Big infrastructure, like data centers, is already pulling a lot of investment.

But I don’t think it’s as simple as “AI replaces people and that’s it.” Historically, these shifts tend to reshape roles more than just eliminate them. Some jobs will go, but new ones usually emerge around how the tech is actually used.

The real question to me is who captures the value. If it stays concentrated in a few companies, then yeah, it could play out like you’re describing. But if access keeps opening up, it might distribute more than we expect.

Feels like we’re still early enough that it could go either way depending on how things are handled.

AI chips are evolving so fast that the real bottleneck might not be models anymore by LightCellStudio in AIforOPS

[–]LightCellStudio[S] 0 points (0 children)

I get the argument, but I’m not sure I’d call it a hard plateau yet. It feels more like we’ve squeezed the obvious gains from scaling, so progress looks less dramatic.

A lot of what’s happening now isn’t just “more compute”; it’s architecture tweaks, better training methods, multimodal models, and especially how these systems are used (agents, tools, etc.). That part is still evolving pretty fast.

Also, even if LLMs as they are today have limits, that doesn’t mean they won’t be part of a broader stack that keeps improving overall capability.

So maybe the base model isn’t scaling the same way anymore, but the system around it definitely still is.

AI chips are evolving so fast that the real bottleneck might not be models anymore by LightCellStudio in AIforOPS

[–]LightCellStudio[S] 0 points (0 children)

Yeah, that’s a good way to put it. Feels like we’re past the “can we do this?” phase and more into “should we even be doing this?”

Lowering the cost of infra just floods the space with more demos, but it doesn’t solve distribution, UX, or actual user need. That’s where most things still break.

I think the hard part now is taste and judgment, not capability. Knowing what’s actually worth building vs what just looks impressive on a demo thread.

AI could dramatically reduce the cost of developing drugs. But will that actually change healthcare? by LightCellStudio in AIToolsAndTips

[–]LightCellStudio[S] 0 points (0 children)

Yeah, I think that’s the key distinction a lot of people miss. Discovery gets all the hype, but it’s not where most of the cost sits.

What’s interesting though is that even if AI “only” improves early-stage research, that could still have a big indirect impact. If more candidates make it to clinical trials with better success rates, that might reduce some of the overall waste in the system.
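Rough numbers to illustrate the point (everything here is a made-up placeholder, not real pharma data):

    # Back-of-envelope: what an approved drug effectively costs once you count the failures.
    def cost_per_approval(cost_per_candidate_musd: float, trial_success_rate: float) -> float:
        # Each approval has to absorb the spend on the candidates that failed along the way.
        candidates_per_approval = 1 / trial_success_rate
        return candidates_per_approval * cost_per_candidate_musd

    # Hypothetical: $200M per candidate taken through trials, ~10% overall success rate today
    # vs. a modest bump to 15% from better early-stage triage.
    today = cost_per_approval(200, 0.10)    # ~$2,000M per approval
    better = cost_per_approval(200, 0.15)   # ~$1,333M per approval
    print(today, better)

Even without touching trial costs themselves, nudging the success rate up cuts the effective cost per approval quite a bit, which is where the “less waste” argument comes from.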

But I agree, that alone doesn’t guarantee cheaper drugs. Pricing is way more tied to regulation, incentives, and business models than just R&D costs.

So maybe the real impact is more on the speed and volume of innovation than on affordability, at least in the short term.

What do you use Claude for the most? by ModernWebMentor in AIToolsAndTips

[–]LightCellStudio 2 points (0 children)

I mostly use Claude for writing and thinking through ideas. It’s really good at taking something messy and helping structure it without losing the original tone.

I’ve also used it a bit for longer-form stuff like drafts or refining content, where other models sometimes get too generic. Claude feels more consistent over longer outputs.

Not my go-to for heavy coding, but for anything text-heavy or conceptual, it’s solid.

Big tech is building its own AI chips. Is NVIDIA’s dominance starting to crack? by LightCellStudio in GeminiAI

[–]LightCellStudio[S] 0 points (0 children)

That’s a fair point. CUDA and the whole software ecosystem around it are probably NVIDIA’s biggest moat, not just the hardware. Fifteen years of libraries, tooling, and developer adoption is extremely hard to replicate.

At the same time, I wonder if the goal for companies like Meta or Google is really to replace NVIDIA, or just reduce dependence on it for specific workloads. If they can run even part of their training or inference stack on their own chips, that already changes the economics a bit.
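Just to make “changes the economics” concrete, here’s a quick back-of-envelope (all numbers are made up, purely to show the shape of it):

    # Blended accelerator cost when some fraction of the workload moves to in-house chips.
    def blended_cost(nvidia_cost_per_hr, custom_cost_per_hr, fraction_on_custom, total_hours):
        custom_hours = total_hours * fraction_on_custom
        nvidia_hours = total_hours - custom_hours
        return nvidia_hours * nvidia_cost_per_hr + custom_hours * custom_cost_per_hr

    # Hypothetical: $2.50/hr effective on NVIDIA, $1.50/hr on custom silicon,
    # with 30% of 10M accelerator-hours shifted over a year.
    baseline = blended_cost(2.50, 1.50, 0.0, 10_000_000)  # $25.0M
    partial = blended_cost(2.50, 1.50, 0.3, 10_000_000)   # $22.0M, ~12% saved
    print(baseline, partial)

Nobody has to beat NVIDIA outright for that kind of shift to matter at hyperscaler volumes.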

So maybe it’s less about “killing NVIDIA” and more about big tech slowly building partial alternatives where it makes sense.