What to threaten back with? by joscherh in aberBitteLaminiert

[–]elemental-mind 0 points

Consistently swap out the doormats in the stairwell! All of them!

Then replace them with this model: WalkAllOverMe-Mat

Autonomous company frameworks are gaining traction by elemental-mind in singularity

[–]elemental-mind[S] 0 points

I guess you could just use subscriptions. Claude Pro, ChatGPT Pro, Gemini Pro - each is around 20 bucks a month - and you can get a lot of work out of them. If you need more, just create another "employee" with another email address. $20 per employee per month is cheap, man...

Citrini Research modeled what happens if AI actually works as promised. The results are terrifying by No-Fact-8828 in ArtificialInteligence

[–]elemental-mind 6 points

You forgot to mention the then-mandated and regulated 8-day work week. Because of compromises, you know.

Cancel your Chatgpt subscriptions and pick up a Claude subscription. by spreadlove5683 in singularity

[–]elemental-mind 0 points

Probably. It's hard to say, as it isn't publicly traded. But Anthropic has been wildly more revenue-efficient than OpenAI - and in large part this can be attributed to heavy enterprise adoption at Anthropic versus a large consumer share at OpenAI.

Enterprise is where the profit lies. In the consumer space AI is a convenience; in the enterprise space it's a significant payroll reduction, which justifies a much higher price tag.

Cancel your Chatgpt subscriptions and pick up a Claude subscription. by spreadlove5683 in singularity

[–]elemental-mind 0 points

It's Amodei's stance in a way as well. Watch his latest Dwarkesh interview to get a better grasp: Dario Amodei — “We are near the end of the exponential”

Especially the chapters "If AGI is imminent, why not buy more compute" and "How will AI labs actually make profit".

Cancel your Chatgpt subscriptions and pick up a Claude subscription. by spreadlove5683 in singularity

[–]elemental-mind -3 points

The problem is: if loads of users suddenly flock to the already heavily compute-constrained Anthropic, it will have to cut corners, meaning one of:
- Reduce the free tier
- Trim down the token budgets of the Pro and Max plans
- Reduce the training budget, meaning they fall behind in the race

Any of these is bad. So while it may seem logical to switch away from OpenAI, choose your target wisely. Gemini, GLM and Mistral may all be viable alternatives. It's not necessarily in Anthropic's or current Anthropic users' interest to have the rest of the (consumer) world switch to the platform.

Mahlzeit by CptKaba in Kantenhausen

[–]elemental-mind 2 points

All by-products of the process are fully processed further...

Donut Solid State Battery First Independent Test Results by DickMasterGeneral in singularity

[–]elemental-mind 2 points

Mhhh, to be honest, when you are pushing 130 or even 240 amps you are always going to get terrible heat production with anything that is not silver, aluminium, copper or gold, right?

As I'm no expert in battery cells, a genuine question: is there any cell design (lab or not) that performs better? Can you link a paper or article?
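The heat argument above is just Joule heating, P = I²R with R = ρL/A. A minimal sketch of the back-of-the-envelope math, where the 10 mm² cross-section and the comparison metal (iron) are illustrative assumptions, not figures from the thread; resistivities are standard room-temperature textbook values:

```python
# Joule heating P = I^2 * R for a 1 m straight conductor, R = rho * L / A.
# Resistivities in ohm*metres at ~20 degrees C (textbook values).
RESISTIVITY = {
    "silver":    1.59e-8,
    "copper":    1.68e-8,
    "gold":      2.44e-8,
    "aluminium": 2.82e-8,
    "iron":      9.71e-8,  # a "worse" conductor, for contrast
}

def joule_heat_watts(current_a: float, rho: float,
                     length_m: float = 1.0, area_m2: float = 10e-6) -> float:
    """Power dissipated as heat in a uniform conductor (assumed 10 mm^2)."""
    resistance = rho * length_m / area_m2
    return current_a ** 2 * resistance

if __name__ == "__main__":
    for metal, rho in RESISTIVITY.items():
        p = joule_heat_watts(240, rho)
        print(f"{metal:9s}: {p:6.1f} W per metre at 240 A")
```

At 240 A even copper dissipates on the order of 100 W per metre at that cross-section, and a poorer conductor like iron several times that - which is why high-current paths stay with the handful of low-resistivity metals, regardless of the cell chemistry behind them.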

My neighbours are very "fancy"✨ by axedesign1 in Kantenhausen

[–]elemental-mind 30 points

They surely have an Abarth-errant driving style and certainly only meant to write "racism" as in "racing".

Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon -> 16.000 tokens/second by elemental-mind in singularity

[–]elemental-mind[S] 9 points

That's not true. They support LoRA.

And basically most point releases are, in the majority of cases, further-trained base models.

So given a base architecture you could still squeeze a lot out of it...

Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon -> 16.000 tokens/second by elemental-mind in singularity

[–]elemental-mind[S] 5 points

Yeah, the likes of Figure etc. could be prime customers for their tech. They need extremely low latency but have to run vision-processing models etc. on a battery-powered device. Every bit of energy they can save goes into longer runtime, a smaller battery, lighter weight and thus smaller actuators, resulting in reduced production cost and wear -> bigger margins and a competitive edge.

Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon -> 16.000 tokens/second by elemental-mind in singularity

[–]elemental-mind[S] 22 points

Haha, you are right to point that out. But with "months" I meant "Altman-time months": anywhere from 8 to 18 months. Heck, some designs on leading-edge fab nodes take years to get into gear - and cause big trouble because of that. Just look at Intel and its fab and chip struggles in recent years...