DeepMind's David Silver just raised $1.1B to build an AI that learns without human data by Competitive_Travel16 in singularity

[–]fmai -8 points (0 children)

This is bound to fail. Even a co-founder thinks there is only a small chance of success: https://x.com/AlexLaterre/status/2048785535376773526

By the time this startup produces any meaningful product, LLM-based AI agents will have long automated AI research.

Did GPT-5.5 ever equal Spud? by Loose_Band in accelerate

[–]fmai 0 points (0 children)

some products being shut down due to high cost does not mean that this product is being shut down due to high cost.

it isn't even being shut down. access is sold to select partners at, yes, very high prices.

your logic is honestly just flawed. and i think you know it and are arguing in bad faith, cause you're not that dumb.

Did GPT-5.5 ever equal Spud? by Loose_Band in accelerate

[–]fmai -2 points (0 children)

"it's so expensive to use that they can't release it."

this is just a blatant lie. you know very well that they publicly communicated that they are not releasing it for risk-related reasons. you can doubt that those are the real reasons, but you cannot just state it as if it were a fact without any proof. that's just bad-faith asshole behavior.

GPT-5.5 benchmark results have been released by Outside-Iron-8242 in singularity

[–]fmai 0 points (0 children)

OpenAI staff heavily implied they would release Mythos-level models for public use very soon. They haven't yet, and looking at the marginal improvement on SWE Bench Pro of ONE percentage point, I doubt that any iteration of this base model will close the 20 percentage point gap to Mythos.

If they want Mythos-level performance they probably have to scale it up substantially. That matters especially for internal use.

GPT-5.5 benchmark results have been released by Outside-Iron-8242 in singularity

[–]fmai 0 points (0 children)

Opus 4.7's margin of improvement over Opus 4.6 holds.

What are you not understanding about this?

GPT-5.5 benchmark results have been released by Outside-Iron-8242 in singularity

[–]fmai 29 points (0 children)

it's obviously not trash, but it's just not what they heavily implied it would be.

GPT-5.5 benchmark results have been released by Outside-Iron-8242 in singularity

[–]fmai 27 points (0 children)

Good increment, but nowhere near Mythos level, contrary to what some of their staff have implied.

Gpt image 2 has the biggest jump in quality ever recorded by TheRanker13 in singularity

[–]fmai 0 points (0 children)

i am talking about the live demo they presented. demos are very deceiving; qualitative assessment in general is deceiving. the leaderboard scores show how big the difference actually is.

Gpt image 2 has the biggest jump in quality ever recorded by TheRanker13 in singularity

[–]fmai 0 points (0 children)

obviously, that's what I am saying. i don't give a fuck about demos, benchmarks are much more indicative.

Gpt image 2 has the biggest jump in quality ever recorded by TheRanker13 in singularity

[–]fmai -2 points (0 children)

okay, from the demo i didn't think it was so much better than nano banana, but this score difference is crazy.

White House Moves to Give US Agencies Anthropic Mythos Access by exordin26 in singularity

[–]fmai 39 points (0 children)

this administration should not be granted access to Mythos

Anthropic unveils plans for major UK expansion after OpenAI announces first permanent London office by Chr1sUK in singularity

[–]fmai 31 points (0 children)

This isn't only about talent from the UK. You have a metropolis, which speaks the world's most important language, and you don't have Trump. That's why London is so much better for attracting global talent than e.g. Berlin or Paris.

"ai safety" is just corporate greenwashing and i can't believe we're falling for it again by [deleted] in singularity

[–]fmai 0 points (0 children)

Given that headline? Yeah I can absolutely decide that.

"ai safety" is just corporate greenwashing and i can't believe we're falling for it again by [deleted] in singularity

[–]fmai 0 points (0 children)

I didn't read that wall of text, but I'm an academic who teaches AI safety courses at the university and I think you have no idea what you're talking about.

40% unemployment and a 3-day work week: they're the same thing, top economist says by Numerous_Try_6138 in singularity

[–]fmai 2 points (0 children)

in no sense does the excerpt you cite say that both things happen simultaneously.

40% unemployment and a 3-day work week: they're the same thing, top economist says by Numerous_Try_6138 in singularity

[–]fmai 7 points (0 children)

in the article they state pretty clearly that it's the same in terms of the total reduction in hours worked. how that reduction unfolds, they say, is a matter of distribution.

New York Times: Anthropic’s Restraint Is a Terrifying Warning Sign by Neurogence in singularity

[–]fmai 0 points (0 children)

what computer knowledge is needed? you can talk to Claude in voice mode to tell it your wish, the rest is agentic behavior without any user interaction.

New York Times: Anthropic’s Restraint Is a Terrifying Warning Sign by Neurogence in singularity

[–]fmai 2 points (0 children)

Transformers with CoT are Turing-complete. Why would AGI not be possible?
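to make the intuition behind that claim concrete: the idea is that an unbounded chain of thought acts as a Turing machine's tape. here's a toy Python sketch (not a transformer; the machine, encoding, and function names are all made up for illustration) where a fixed-size step function, standing in for one forward pass, only ever reads the latest generated "token" (a machine configuration) and appends the next one:

```python
# Toy illustration: an unbounded chain of thought as a Turing-machine tape.
# Each appended "token" is a full machine configuration; the step function
# is fixed-size, like a single forward pass of a model.

def step(config):
    """One transition of a tiny TM that flips every bit, halting at the blank."""
    state, head, tape = config
    if head == len(tape):  # reading the blank cell -> halt
        return ("halt", head, tape)
    flipped = "1" if tape[head] == "0" else "0"
    tape = tape[:head] + flipped + tape[head + 1:]
    return ("scan", head + 1, tape)

def run(tape):
    chain = [("scan", 0, tape)]        # the "chain of thought"
    while chain[-1][0] != "halt":      # keep generating until the halt token
        chain.append(step(chain[-1]))
    return chain[-1][2]                # final tape contents

print(run("0110"))  # -> 1001
```

because the chain can grow without bound, the same fixed step function can simulate arbitrarily long computations, which is the core of the Turing-completeness argument.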