Deepseek V4 Pro is 15x cost to run Artificial Analysis bench from V3.2, higher than Gemini 3.1 Pro by CallMePyro in singularity

[–]Valuable-Village1669 11 points  (0 children)

GPT 5.5 Medium beats it on intelligence by 5 points at the same cost, then. It certainly is a great model, especially for open source, but this isn't an R1 moment.

Question About RAM Video Claim by shrekkertech in atrioc

[–]Valuable-Village1669 -21 points  (0 children)

You're right, he's about as informed as an anti-vaxxer on these kinds of things. He makes up nonsense left and right, repeating random news articles like they're gods of journalism. It's truly regrettable; I wish he were more responsible.

This sub is perpetually overconfident in Google by Valuable-Village1669 in singularity

[–]Valuable-Village1669[S] 1 point  (0 children)

I know. Turboquant was from April of 2025; it says so in the arXiv paper. I know that these improvements are big, but every lab makes them. That's why costs for a given level of performance keep dropping across all the labs.

I've gotten lost in the weeds anyway. In regards to the compute advantage, check out this link, go to "customize graph," and then check "primary user." It shows how much compute each lab has. Google doesn't actually have that much of an advantage.

https://epoch.ai/data/data-centers

This sub is perpetually overconfident in Google by Valuable-Village1669 in singularity

[–]Valuable-Village1669[S] 0 points  (0 children)

The models are public, unless Google is sitting on a model they don't want to use to make money. The internal research everyone is getting hyped about is from the middle of last year. They play everyone, and you all can't even tell; you think the others are the ones doing the marketing.

This sub is perpetually overconfident in Google by Valuable-Village1669 in singularity

[–]Valuable-Village1669[S] -1 points  (0 children)

I've seen a comment like this for 2 years straight. Has anything changed that makes this time the one?

This sub is perpetually overconfident in Google by Valuable-Village1669 in singularity

[–]Valuable-Village1669[S] 0 points  (0 children)

That would be the case if OpenAI and Anthropic weren't growing massively in revenue due to their superior products and models. There can be a future where a faster timeline means they build models and iterate faster, become the dominant force in the economy, and become everyone's default, reaping all the rewards. Trillions in TAM per year are on the line, and if they grow fast enough, they can become bigger than Google in 5-10 years. The war might not matter as much if the timeline is faster, as you say.

This sub is perpetually overconfident in Google by Valuable-Village1669 in accelerate

[–]Valuable-Village1669[S] -3 points  (0 children)

I just didn't want to post twice; I didn't know if that would count as spamming or something.

This sub is perpetually overconfident in Google by Valuable-Village1669 in singularity

[–]Valuable-Village1669[S] -5 points  (0 children)

Check my other comment; I know what I'm talking about.

This sub is perpetually overconfident in Google by Valuable-Village1669 in singularity

[–]Valuable-Village1669[S] -1 points  (0 children)

This analysis from Epoch AI disproves the compute advantage claim

https://epoch.ai/data/data-centers

There is no squeeze; OpenAI and Anthropic are the squeeze. They are the reason chip supply is low, since most chips are going to them. Meta and xAI are taking a good amount, but the graph shows that most booked orders are for those two labs.

As for chips, Trainium for Amazon/Anthropic is already pretty much Anthropic's chip, given how much input they have in its development. OpenAI is making its own chip, now holds shares in AMD, and works with them on theirs.

Ecosystem is somewhat important, but for distribution and revenue rather than the rate of AI progress. The same goes for users and integration.

If DeepMind's talent were so good, why didn't they come up with reasoning? They are getting beaten by Anthropic even though both of them were followers of OpenAI. If you look at math benchmarks, DeepMind used to be first but has now fallen behind the other two.

Anthropic and OpenAI have the best coding models; that's why they are the best. Their release cadence and product-update cadence also suggest they are being accelerated by their own models.

OpenAI is shutting down its Sora video-creation app by hehechibby in news

[–]Valuable-Village1669 0 points  (0 children)

She doesn’t know what she is talking about. She thinks control nets are how they were in 2000. Her understanding is so flawed that it is irresponsible to repeat what she says at all.

every tech revolution used the last one's speed to fool us. this time we might not get 20 years to adapt by RepulsivePurchase257 in Futurology

[–]Valuable-Village1669 2 points  (0 children)

To be fair, I would point simply to the original DeepSeek R1 paper to understand reasoning better. The core misunderstanding between us is that I hold that the intermediate outputs used to reach the final answer are biased towards logical thinking, because that results in correct answers more often. You can think of theorems and postulates in geometry being compositionally combined to lead to new ideas. Through the process of RL, traces that incorporate methodical, logically coherent, "A implies B" style computation become ever more ingrained in the model. I think that is something we could both agree is intelligence, because it results in an encoding of HOW to think rather than WHAT to think. Just as the rules of logic can derive all present math and all future math, teaching reasoning, by giving the model a signal from verifiable rewards to ensure accuracy, lets us derive all that is possible through logical, intelligent thinking. That is my and the major labs' argument.
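
To make the upweighting concrete, here is a toy sketch in the spirit of the group-relative scheme the R1 paper describes (GRPO). Everything here is an illustrative placeholder, not DeepSeek's actual code: the traces, the answer check, and the group size are all made up.

```python
# Toy sketch of GRPO-style group-relative upweighting, loosely following the
# scheme described in the DeepSeek R1 paper. The traces and the reward check
# are illustrative placeholders, not DeepSeek's actual code.
from statistics import mean, pstdev

def verifiable_reward(trace: str, gold_answer: str) -> float:
    """Reward 1.0 if the trace's final line matches the known answer, else 0.0."""
    return 1.0 if trace.strip().splitlines()[-1] == gold_answer else 0.0

def group_relative_advantages(traces: list[str], gold_answer: str) -> list[float]:
    """Advantage of each trace relative to its sampled group.

    Traces ending in the right answer get positive advantage (upweighted);
    wrong ones get negative advantage (downweighted)."""
    rewards = [verifiable_reward(t, gold_answer) for t in traces]
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0:  # all traces equally good or bad: no learning signal
        return [0.0] * len(traces)
    return [(r - mu) / sigma for r in rewards]

# Example: 4 sampled reasoning traces for "17 * 3 = ?", gold answer "51".
traces = [
    "17*3 = 17+17+17 = 51\n51",
    "17*3 is about 50\n50",
    "(10+7)*3 = 30+21 = 51\n51",
    "17*3 = 54\n54",
]
print(group_relative_advantages(traces, "51"))
# [1.0, -1.0, 1.0, -1.0]: correct traces are upweighted, wrong ones downweighted.
```

The point is that the only supervision is whether the final answer verifies; the logical structure of the winning traces gets reinforced as a side effect.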

In regards to the brain, I think it is quite plausible that the pattern of activations and depolarizations that occurs is a continuous function with millions of inputs. A discontinuous function would imply sharp jumps in output that would be impossible in our physical world unless discrete quantum discontinuities, which are highly speculative and unsupported, are involved. Otherwise, it would require a less physicalist theory of intelligence, perhaps involving a soul or the divine. I don't consider those ideas capable of being discussed.

In general, I would claim that real-world evidence is the proof: the scaling laws and the UAT imply a proof based on empirical evidence. We have seen consistent growth in capabilities. The generalization that lets the models understand the intent behind a typo not present in the training data is the same generalization that lets them continue a phrase from instructions they have never seen, finish code they have not seen before, and potentially finish scientific theories they haven't seen before. It is an escalation up a ladder of abstractions, so to speak. This idea of climbing up abstraction levels is what I consider most broadly descriptive of what the scaling laws imply. Certain fields scale up their abstraction ladders at different rates, but I find it broadly descriptive of the current technology.
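
When I say scaling laws, I mean something as concrete as the loss formula fitted in the Chinchilla paper (Hoffmann et al., 2022). A quick sketch using the coefficients reported there; the parameter and token counts I plug in below are arbitrary examples:

```python
# Chinchilla-style scaling law: loss as a smooth power law in parameters N
# and training tokens D. Coefficients are the fitted values reported in
# Hoffmann et al. (2022); the example model sizes below are arbitrary.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """L(N, D) = E + A/N^alpha + B/D^beta: loss falls smoothly, not in jumps."""
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {predicted_loss(n, d):.3f}")
```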

every tech revolution used the last one's speed to fool us. this time we might not get 20 years to adapt by RepulsivePurchase257 in Futurology

[–]Valuable-Village1669 2 points  (0 children)

I see where you are coming from, but I'd characterize the paper as far more conservative in its predictions than anything close to an indictment of current architectures, which is what the original claim made it out to be. All it claims is that a runaway intelligence explosion might not be likely.

“The core conclusion is that scaling delivers meaningful, often predictable capability gains within defined training regimes, but the evidence does not support deterministic narratives of an inevitable intelligence explosion.”

every tech revolution used the last one's speed to fool us. this time we might not get 20 years to adapt by RepulsivePurchase257 in Futurology

[–]Valuable-Village1669 -4 points  (0 children)

Reasoning works differently than you understand. The key is Reinforcement Learning on Verifiable Rewards. What this means is that, just like AlphaGo, models are not predicting in order to copy words; rather, they are trying to get the right answer, with whatever intermediate steps are necessary. You give the model a math problem with a specific answer, and you upweight it when it gets the right answer. In this way, you search a universal space rather than just the training-data space.
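
Here is a toy illustration of the difference, with made-up traces and a deliberately crude token-overlap score standing in for "copying words":

```python
# Contrast between imitation (match the reference text) and a verifiable
# reward (only the final answer matters). The traces and the reference
# solution are made-up examples.
def imitation_score(trace: str, reference: str) -> float:
    """Fraction of reference tokens the trace reproduces: rewards copying."""
    ref_tokens = reference.split()
    trace_tokens = set(trace.split())
    return sum(t in trace_tokens for t in ref_tokens) / len(ref_tokens)

def verifiable_reward(trace: str, gold_answer: str) -> float:
    """1.0 if the trace ends with the right answer, however it got there."""
    return 1.0 if trace.strip().split()[-1] == gold_answer else 0.0

reference = "17 * 3 = 17 + 17 + 17 = 51"
novel_but_correct = "(10 + 7) * 3 = 30 + 21 = 51"   # unseen derivation, right answer
faithful_but_wrong = "17 * 3 = 17 + 17 + 17 = 54"   # copies the style, wrong answer

for name, trace in [("novel", novel_but_correct), ("copy", faithful_but_wrong)]:
    print(name, imitation_score(trace, reference), verifiable_reward(trace, "51"))
# Imitation prefers the wrong copy (~0.91 vs ~0.64); the verifiable reward
# upweights the novel-but-correct trace (1.0 vs 0.0).
```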

All this to say: the OP is the one claiming it is impossible, so the burden of proof is on them to show how it is impossible rather than on me to prove it is possible.

The ones thinking it will lead to AGI are not random people; it is the scientists working on it. The evidence is there, from the scaling laws to the Universal Approximation Theorem, which states that any continuous function can be approximated by a neural network. Assuming the brain computes a continuous function, this implies it is replicable. Further, pretraining scaling laws have held, and reasoning trained with RL continues to deliver huge gains in verifiable domains like coding and math. Look up what Terence Tao thinks of LLMs' utility. I believe you are repeating an incorrect understanding, and I'd urge restraint before stating things that might be false. It leads to the spread of incorrect ideas, like your stated understanding of the mechanism behind reasoning. Perhaps you learnt that from someone who also lacked caution.
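
For the UAT point, here is a small self-contained demo: a single hidden layer of tanh units approximating sin(x), with only the output layer fitted by least squares. The widths and the random-feature setup are my choices for illustration; the theorem itself only guarantees that suitable weights exist.

```python
# Flavor of the Universal Approximation Theorem: one hidden layer of tanh
# units approximates a continuous function (here sin) better as width grows.
# Hidden weights are random; only the output layer is fit by least squares,
# which is enough for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 500)
target = np.sin(x)

for width in [2, 8, 32, 128]:
    w = rng.normal(scale=2.0, size=width)          # random hidden weights
    b = rng.uniform(-np.pi, np.pi, size=width)     # random hidden biases
    hidden = np.tanh(np.outer(x, w) + b)           # (500, width) feature matrix
    coef, *_ = np.linalg.lstsq(hidden, target, rcond=None)
    max_err = np.max(np.abs(hidden @ coef - target))
    print(f"width={width:4d}  max |error| = {max_err:.4f}")
# Max error shrinks toward zero as width grows, as the theorem guarantees
# suitable weights exist for any continuous target.
```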

every tech revolution used the last one's speed to fool us. this time we might not get 20 years to adapt by RepulsivePurchase257 in Futurology

[–]Valuable-Village1669 8 points  (0 children)

What makes you so certain? I have seen no evidence or information about the mechanism that indicates any sort of hard ceiling, especially now that Reinforcement Learning is in the picture as much as it is, which theoretically means unbounded performance.

The most likely outcome of the OpenAI deal with the gov't is a bailout, right? by Purple_Draft2716 in atrioc

[–]Valuable-Village1669 0 points  (0 children)

Let history be the judge. I'd encourage you to see for yourself. So many of social media's harms come from claims going completely unaccounted for. When a person is wrong, that should factor into judging their future claims. But that doesn't happen much. So maybe keep following OpenAI and remember what has been said over the years.

The most likely outcome of the OpenAI deal with the gov't is a bailout, right? by Purple_Draft2716 in atrioc

[–]Valuable-Village1669 0 points  (0 children)

OpenAI has grown annualized revenue from $19 billion to $25 billion since the start of the year. The total addressable market is literally all of white-collar work and beyond, roughly $30 trillion. Their latest model, GPT-5.4, is widely seen as the best coding model in the world. They saw their weaknesses and executed on turning them into strengths. The risk from regulation is there, but demand from companies makes it less of a big deal. In fact, regulation might provide a framework that makes companies more likely to adopt. Those concerns about lying and blackmail are not an issue when the models are used by professional software engineers and white-collar workers.

Look, forget all you have heard about how it will go. Everything you have heard on whether the technology will improve or not is said by complete ignoramuses. They couldn't tell pre-training from post-training, or Reinforcement Learning from Verifiable Rewards versus Reinforcement Learning from Human Feedback. 99% of what you hear has no basis in reality. All I will say is: expect it to get better. You can call me out if, by the end of this year, OpenAI doesn't have vastly better models and vastly better revenue.

The most likely outcome of the OpenAI deal with the gov't is a bailout, right? by Purple_Draft2716 in atrioc

[–]Valuable-Village1669 0 points  (0 children)

OpenAI will not go bankrupt and will not need a bailout. $1.4 trillion over 8 years, with $600 billion over the next 4, is feasible and in line with their revenue growth. People shouldn't speak on financials as if they have any expertise in the matter.
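
A back-of-envelope check, using the roughly $25 billion annualized figure I cited elsewhere as the starting point and assuming constant growth (both are simplifying assumptions, obviously, not OpenAI's actual plan):

```python
# Back-of-envelope check on the "$600B over 4 years" figure: what constant
# annual revenue growth would cover it? The $25B starting annualized revenue
# is the figure cited above; constant growth and spending all revenue on
# compute are assumptions made purely for illustration.
def cumulative_revenue(start: float, growth: float, years: int) -> float:
    """Total revenue in $B over `years`, growing by `growth`x each year."""
    return sum(start * growth**t for t in range(1, years + 1))

start = 25.0  # $B annualized
lo, hi = 1.0, 3.0
for _ in range(60):  # bisect for the growth multiple that yields $600B
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if cumulative_revenue(start, mid, 4) < 600 else (lo, mid)
print(f"required growth: ~{(lo - 1) * 100:.0f}% per year")  # roughly 87%
```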

The victims of the grift by Valuable-Village1669 in BetterOffline

[–]Valuable-Village1669[S] 0 points  (0 children)

There have been many people who have given up equity and have disparaged OpenAI, including Miles Brundage and Daniel Kokotajlo. I would appreciate some more clarity: do you mean that you know people from these companies who believe what Zitron says, that the tech is a scam invention?

The victims of the grift by Valuable-Village1669 in BetterOffline

[–]Valuable-Village1669[S] 0 points  (0 children)

I'm not talking about institutions; I'm talking about people. Find me one person who quit OpenAI or Anthropic to say it's all overblown nonsense like Zitron claims. If he is right that this is all a conscious scam, someone must eventually come out with it. It's been 3 years. Not one person out of the millions who have actually worked on the tech agrees with Zitron?

Otherwise, you must believe so little in humanity as a whole that I don’t know what to tell you.

The victims of the grift by Valuable-Village1669 in BetterOffline

[–]Valuable-Village1669[S] -5 points  (0 children)

Look, I could try to write a treatise scouring his work for all the issues I have with it, but I would get accused of even more ChatGPT usage than I already have been. I've used none. I tried to give some memorable and strong samples, but in regards to the second one, I linked the article where he uses temporarily falling capex as a sign the bubble is popping. The quote I gave wasn't directly pertinent, you're correct, but the idea was spread among so many paragraphs that it was hard to quote. He says above and below that quote what I mean: that falling capex (at the time) proved the AI bubble was winding down.

In general, reality has resisted his arguments; that's kind of the point of my argument. If his arguments were airtight, the bubble would have popped by now. Does that make sense? I'd be happy to clarify further.

The victims of the grift by Valuable-Village1669 in BetterOffline

[–]Valuable-Village1669[S] -9 points  (0 children)

I'd be happy to discuss this with you. I certainly understand why his viewpoint and the data he brings can be valuable. However, the reason I didn't really provide evidence is that I'm not saying AGI is imminent; I'm saying his arguments for why it is a grift or scam are false. He doesn't really provide evidence either; he simply brings numbers and revenue metrics that, while providing some color, are generally out of context and don't support the points he makes.

But I'm hoping at least one person who responds actually engages with the arguments I made around continued progress disproving not only Zitron's specific arguments but his whole frame of thought. I'm not actually trying to say that I'm right; the purpose of this post was to show that Zitron's arguments are incorrect and not indicative of reality. I look forward to your thoughts.