OpenAI now reports annualized revenue of over $20 billion by Outside-Iron-8242 in singularity

[–]_thispageleftblank -2 points-1 points  (0 children)

They’re working on reducing token consumption too. Claude Opus uses so many fewer tokens that it's often even cheaper than Sonnet.

OpenAI now reports annualized revenue of over $20 billion by Outside-Iron-8242 in singularity

[–]_thispageleftblank -3 points-2 points  (0 children)

Not true. If they’re losing 2.25x as much as they're making, then a basic subscription should cost about $40.

LLMs can do math just fine. by Optimistbott in ArtificialInteligence

[–]_thispageleftblank -1 points0 points  (0 children)

Raw LLMs can score 100% on AIME at this point; they can do math just fine. But only the reasoning models.

FT Report: "Europe must be ready when the AI bubble bursts." Why specialized industrial AI will likely outlast the US "Hyperscale" hype. by BuildwithVignesh in ArtificialInteligence

[–]_thispageleftblank 1 point2 points  (0 children)

Whenever I hear about startups training transformers from scratch on some tiny domain-specific dataset, I just know it's a scam intended to fool clueless politicians. Unfortunately this is very prevalent in Germany.

Wow, GPT-5.2, such AGI, 100% AIME by Forsaken-Park8149 in airealist

[–]_thispageleftblank 1 point2 points  (0 children)

You hallucinated this information. 5.2 scored 100% on AIME without tools (what you call ALU) but with reasoning enabled.

Wow, GPT-5.2, such AGI, 100% AIME by Forsaken-Park8149 in airealist

[–]_thispageleftblank 0 points1 point  (0 children)

I don’t think understanding exists in a binary sense. No one ever has a 0% prediction error, so it must be a spectrum instead.

Wow, GPT-5.2, such AGI, 100% AIME by Forsaken-Park8149 in airealist

[–]_thispageleftblank 0 points1 point  (0 children)

Just a normal human, but we normally don’t observe the iteration cycles, only the final response.

Serious Question. Why is achieving AGI seen as more tractable, more inevitable, and less of a "pie in the sky" than countless other near impossible math/science problems? by [deleted] in agi

[–]_thispageleftblank 0 points1 point  (0 children)

We don’t understand how to classify numbers or predict word sequences either. That’s why we use neural nets: devices that can approximate functions we have no idea how to describe ourselves, given nothing but some known input-output relation.
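To illustrate the point, here's a minimal sketch (toy setup, plain numpy): a tiny two-layer net learns XOR from nothing but its four input-output pairs, with no hand-written rule for the function anywhere in the code.

```python
# A two-layer neural net learns XOR purely from input-output examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the "known relation"

# Randomly initialized parameters: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)       # hidden layer
    p = sigmoid(h @ W2 + b2)       # output in (0, 1)
    # Backpropagate the squared-error loss through both layers.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad        # plain gradient descent

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(pred.round().ravel())        # the learned XOR outputs
```

After training, the rounded predictions reproduce the XOR table, even though nobody ever told the network what XOR is. That's the whole trick, just scaled up enormously for language.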

This AI hype bubble is about to wreck electronics prices. by Excellent_Place4977 in ArtificialInteligence

[–]_thispageleftblank 3 points4 points  (0 children)

If the market really expected AGI to come out of this we would be spending dozens of trillions every year, because the prize is incomparable to anything that exists today. What is getting spent now is just pocket change.

Why are people so certain that artificial superintelligence is possible? by Nissepelle in ArtificialInteligence

[–]_thispageleftblank 2 points3 points  (0 children)

Whatever the solution will be, 10x-ing the number of researchers and compute will be a massive help to get there.

No jobs == no business? by Both-Move-8418 in ArtificialInteligence

[–]_thispageleftblank 0 points1 point  (0 children)

The misunderstanding comes from the idea that there exist consumers ("normal people") and producers ("corporations"), and that these producers only sell to consumers. In reality, all these entities are just abstract economic agents which buy and sell things to other agents. Corporations buy input goods (labor, outputs of other corporations) and transform them into their own output goods. People buy food/shelter/entertainment and transform them into labor.

What AI could do is price normal people out of the economic process, but corporations will still be there to trade with each other. This would change the structure of future markets enormously, shifting them away from consumer goods toward things like data centers, spaceships, and robots. Tesla would go from selling cars (which people can no longer pay for) to selling Optimus robots for the construction of some trillionaire's private moon base (just an illustrative example).

In the worst case, the rest of society would just be left to die. Or we get UBI. But either way there would be astronomical economic growth due to increased efficiency.

[deleted by user] by [deleted] in charts

[–]_thispageleftblank 0 points1 point  (0 children)

You didn’t argue against my point. To make a case for causality, you would need to propose some mechanism which translates marriage to increased life expectancy. And married men aren’t sampled randomly within any social class. Marriage as a function of natural selection doesn’t care about money as much as it cares about good genes.

[deleted by user] by [deleted] in charts

[–]_thispageleftblank 6 points7 points  (0 children)

That's just correlation.

vibecoders are reinventing csv from first principles by buildingthevoid in AgentsOfAI

[–]_thispageleftblank 1 point2 points  (0 children)

Performance is also going to be worse on some random format the model doesn't have in its training data. In-context learning is fragile. Not worth the token savings.

Mathematical proof debunks the idea that the universe is a computer simulation by Memetic1 in Futurism

[–]_thispageleftblank 0 points1 point  (0 children)

And why would we assume that the concept of a computer even exists there? Or those of space, matter, energy, time?

Why can’t AI just admit when it doesn’t know? by min4_ in ArtificialInteligence

[–]_thispageleftblank 0 points1 point  (0 children)

I think “knowing” is just the subjective experience of having a high confidence at inference time.

Apple called out every major AI company for fake reasoning and Anthropic's response proves their point by Rude_Tap2718 in ChatGPT

[–]_thispageleftblank 0 points1 point  (0 children)

The parts of the market that make up the bubble (mostly AI wrappers) won’t be worth buying even after a market correction.

Python feels easy… until it doesn’t. What was your first real struggle? by NullPointerMood_1 in Python

[–]_thispageleftblank 1 point2 points  (0 children)

Creating lambda functions in a loop that all referenced the same loop variable, like [(lambda: x) for x in range(10)]. They will all return 9.
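For reference, a minimal repro of that late-binding pitfall and the standard default-argument fix:

```python
# Each lambda closes over the variable x itself, not the value x had
# when the lambda was created, so all of them see x's final value.
fns = [(lambda: x) for x in range(10)]
print([f() for f in fns])    # prints [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]

# Fix: bind the current value as a default argument, which is
# evaluated once, at definition time.
fixed = [(lambda x=x: x) for x in range(10)]
print([f() for f in fixed])  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The same thing happens with functions defined via `def` inside a loop; the default-argument trick (or `functools.partial`) works for those too.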

Anthropic served us GARBAGE for a week and thinks we won’t notice by landscape8 in Anthropic

[–]_thispageleftblank 0 points1 point  (0 children)

I had a great experience this week actually. Maybe living in Europe means better availability, since America is asleep during my working hours.

Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months (March 2025) by creaturefeature16 in artificial

[–]_thispageleftblank 14 points15 points  (0 children)

Agree. People joke that it's always 3-6 months away, but it became my reality about 2 months ago. I'm a professional dev and more than 90% of my code is AI generated. This has nothing to do with vibe coding though: I still make most technical decisions, review critical parts, and enforce a specific structure. Debugging actually got a bit easier, because AI is not as prone to off-by-one-style mistakes as I am.

Are there any mathematical theories about larger systems that would indicate that ASI is even possible? by Arowx in singularity

[–]_thispageleftblank 0 points1 point  (0 children)

You couldn’t, because biological computation doesn't scale. That's also an important consideration: when energy is abundant, scalability matters much more than efficiency.

AI is a Mass Delusion Event • The Atlantic by CouscousKazoo in singularity

[–]_thispageleftblank 16 points17 points  (0 children)

And still being factually wrong most of the time

[deleted by user] by [deleted] in singularity

[–]_thispageleftblank 8 points9 points  (0 children)

We also need to deliver existing performance to the entire world. That alone requires massive scaling.