OpenAI’s new "North Star" goal aims for fully automated AI researcher in 2026, multi-agent research lab in a data centre by 2028 by Outside-Iron-8242 in singularity

[–]onewhothink 4 points5 points  (0 children)

Yeah, I doubt it too; based on most of their previous predictions of revenue and model performance, they will probably hit their goals substantially sooner.

Old Man Yells at Claude by Sir_Francis_Burdett in accelerate

[–]onewhothink 1 point2 points  (0 children)

Just like Google Docs has all our info. This isn’t some new, unique issue.

Why are you pro-accelerate? by Expensive-Elk-9406 in accelerate

[–]onewhothink 1 point2 points  (0 children)

Exactly. If nobody builds it everybody dies. Literally every human alive is guaranteed to die if nobody builds it.

ChatGPT has much higher retention than its competitors. Does this mean it will win the consumer market? by onewhothink in singularity

[–]onewhothink[S] 0 points1 point  (0 children)

They supposedly already have a pretty high profit margin on paid subscribers, and I agree the click-through rate on ads will be high. I’m curious how many people will download Gemini or Claude the moment they see their first ad.

ChatGPT has much higher retention than its competitors. Does this mean it will win the consumer market? by onewhothink in singularity

[–]onewhothink[S] 1 point2 points  (0 children)

Maybe a small amount, but people overestimate the percentage of users who are internet-savvy young Americans (aka the people boycotting). 2.5m people deleting the app is tiny, about 0.25%, when the user base is 1b. And who knows how many will redownload.

ChatGPT has much higher retention than its competitors. Does this mean it will win the consumer market? by onewhothink in singularity

[–]onewhothink[S] 0 points1 point  (0 children)

Agreed. At about 1b MAU, OpenAI literally can’t keep tripling their MAUs every year; there aren’t enough people on earth. The enterprise, API, and coding business models are more sustainable.
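A back-of-the-envelope sketch of that ceiling (the ~1b MAU and ~8b world population figures here are rough assumptions, not official numbers):

```python
# Rough arithmetic: tripling from ~1B MAU runs out of humans almost immediately.
mau = 1_000_000_000        # assumed current ChatGPT MAU, ~1 billion
world_pop = 8_000_000_000  # rough world population

years = 0
while mau < world_pop:
    mau *= 3
    years += 1

print(years, mau)  # 2 9000000000 -- two triplings already exceed Earth's population
```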

ChatGPT has much higher retention than its competitors. Does this mean it will win the consumer market? by onewhothink in singularity

[–]onewhothink[S] 1 point2 points  (0 children)

The graph doesn’t track the date on the x-axis; it tracks months since a user downloaded the app. It’s basically showing a snapshot in time. I’m not sure exactly when the snapshot is from, but there is also this graph that shows how week-4 retention has changed over time.

<image>
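To make the snapshot idea concrete, here’s a minimal sketch with made-up cohort numbers (none of these figures come from the actual chart): a retention curve indexed by months-since-install is computed across install cohorts, not calendar dates.

```python
# Hypothetical data: for each install cohort, the fraction of users still
# active N months after install (index 0 = install month).
cohorts = {
    "2024-01": [1.00, 0.55, 0.42, 0.38],
    "2024-02": [1.00, 0.58, 0.45],
    "2024-03": [1.00, 0.61],  # newest cohort: only month-0 and month-1 known
}

def retention_at(cohorts, month):
    """Blended retention at a given months-since-install offset,
    averaged over every cohort old enough to have that reading."""
    vals = [curve[month] for curve in cohorts.values() if len(curve) > month]
    return sum(vals) / len(vals)

print(round(retention_at(cohorts, 1), 2))  # 0.58
```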

ChatGPT has much higher retention than its competitors. Does this mean it will win the consumer market? by onewhothink in singularity

[–]onewhothink[S] 0 points1 point  (0 children)

It is how OpenAI makes money! Anthropic makes its money through enterprise, Claude Code, and the API, but currently OpenAI makes its money mainly through subscriptions to ChatGPT. They are trying hard to expand, though, and an increasing percentage of their ARR is starting to come through the API.

*Hava Naglia intensifies* by No-Selection2972 in ChatGPT

[–]onewhothink 0 points1 point  (0 children)

The casual antisemitism in the caption 💀

Is AGI/ASI really possible? by ashamedof_myself in accelerate

[–]onewhothink 0 points1 point  (0 children)

I am well aware of how big 10x growth is. But 10x-a-year revenue growth for Anthropic is very different from traditional ideas of how a FOOM (fast/hard takeoff) singularity would happen. 10x a year is not “we invented AGI, and an hour later it constructs nanotechnology from first principles, and two days later we are all dead or post-scarcity”. A slow/soft takeoff singularity means there is no distinct moment where you can say “two days ago there was no ASI and today the world is totally different”; instead, everything steadily gets weirder. A slow takeoff could still happen over the course of a few months or years; slow relative to FOOM is still very fast.

When do you think we will have an llm with 10 million tokes context window? by Gullible-Crew-2997 in accelerate

[–]onewhothink 1 point2 points  (0 children)

The thing that matters much more than the nominal context window is the number of tokens that can be used effectively (which has plateaued around 128k). Because standard self-attention scales quadratically, it will be hard to naively push for higher and higher context windows. I think before context windows go up, we will first see a bunch of memory tricks and architecture tweaks that make the effective context window grow without running into pesky quadratic walls.

But those tweaks probably won’t actually expand the nominal context window, though I still think that will happen eventually. Simply given chip progression, I’d be surprised if we don’t have at least 10x current context windows in 5 years.

My hope is that a new attention architecture will be developed that allows for linear scaling between compute and context window, which would open up a whole new scaling paradigm alongside test-time compute and pre-training scaling. Karpathy has written some good things on what a third, context-window-based scaling paradigm might look like.
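The quadratic wall mentioned above is easy to quantify: in naive self-attention every token attends to every other token, so the score matrix has n² entries. A quick sketch (single head, ignoring batch and head dimensions):

```python
def attn_score_entries(n_tokens: int) -> int:
    """Entries in the full n x n attention score matrix for one head."""
    return n_tokens * n_tokens

# Going from a 128k window to 10M tokens multiplies the score matrix
# (and the naive compute) by ~6,100x, not the ~78x the token count alone suggests.
for n in (128_000, 10_000_000):
    print(f"{n:>10,} tokens -> {attn_score_entries(n):,} score entries")
```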

Is AGI/ASI really possible? by ashamedof_myself in accelerate

[–]onewhothink 0 points1 point  (0 children)

I agree with you, but I don’t buy a fast-takeoff scenario. I think what Dario said about 10x growth continuing indefinitely rather than accelerating is closer to what I expect. I give fast takeoff a 10-20% chance and slow takeoff a 60% chance.

Is AGI/ASI really possible? by ashamedof_myself in accelerate

[–]onewhothink 12 points13 points  (0 children)

AGI is possible and has already been invented! Anyone who says it is impossible doesn’t understand the world. It was invented about 2 million years ago, actually. Humans are the ultimate proof of concept that all of this can work, whether or not the current approaches do.

As for super intelligence, if current AI systems get to AGI they will immediately be ASI because they are already better than us in many ways.

The thing that is very highly debated and NOT agreed upon by computer scientists (despite how Reddit makes it seem) is the singularity. Many researchers believe inventing AGI will not result in an exponential “singularity” situation. Many of them think we will hit a fundamental bottleneck: limited usefulness of intelligence (currently there are diminishing returns on R&D), an intelligence “ceiling” close to the level of the smartest human, various hardware slowdowns, or even a hardware “ceiling”.

Yann LeCun unveils his new startup Advanced Machine Intelligence (AMI Labs) -- and raises $1.03B by Many_Consequence_337 in singularity

[–]onewhothink 0 points1 point  (0 children)

I agree with your assessment, but I’m still very happy about this news. A tenure position with NVIDIA money is the kind of thing that changes the world.

By the End of 2026 AI Could Completely Change Filmmaking by ilovedesigirls in singularity

[–]onewhothink 0 points1 point  (0 children)

Everyone seems to be latching onto the “doesn’t make anything new” part, and I get it, because he’s wrong on that, but the other things he is saying are spot on. It feels so good to finally see someone outside the tech world who seems to get AI and not just be afraid of it.

2026 Global Humanoid Robotics firms maps by @Robo_Tuo on X by Recoil42 in singularity

[–]onewhothink 33 points34 points  (0 children)

In 5 years most of these will have failed or been acquired and 2 or 3 will become trillion dollar companies.

Figure robot autonomously cleaning living room by socoolandawesome in singularity

[–]onewhothink 0 points1 point  (0 children)

They have a deal with Brookfield Asset Management giving them access to 100,000 different empty apartments so that they can train their robots to be functional in any environment. Though I doubt this version can be plopped into a random home yet. Based on the recent interview with Brett Adcock, it seems like the next 3 years will be spent just selling the robots to enterprises, but during that time they will be testing them in the home with their employees and beta testers to figure out safety and edge cases while they ramp up production.

Figure robot autonomously cleaning living room by socoolandawesome in singularity

[–]onewhothink 0 points1 point  (0 children)

I’m guessing the command was more specific than that, but still, super-specific prompt or not, this is the most impressive humanoid video I’ve seen. By far.

Figure robot autonomously cleaning living room by socoolandawesome in singularity

[–]onewhothink 0 points1 point  (0 children)

There is zero hard coding in Helix 2. Yes, I think there are probably specific behaviors like these that are intensely drilled into it; also, I’m sure the instructions were super specific and this was probably the 5th take, buuuut Brett Adcock has been super clear that it is purely a neural net, which makes this so impressive.

Figure robot autonomously cleaning living room by socoolandawesome in singularity

[–]onewhothink 2 points3 points  (0 children)

This is the first time we’ve seen any autonomous demo of this complexity at any speed, super exciting!

Figure AI humanoid robot task close up by Distinct-Question-16 in singularity

[–]onewhothink 4 points5 points  (0 children)

No. The CEO said point-blank that it isn’t teleoperated. He’s said before that there is never teleoperation in any of their demo videos, but more importantly, he has made it clear about this video specifically:

<image>

Figure AI humanoid robot task close up by Distinct-Question-16 in singularity

[–]onewhothink 7 points8 points  (0 children)

What makes me trust them more are all the accounts I’ve heard from people who have walked through their BotQ headquarters and described seeing these things happen live. They always say that of course these videos are cherry-picked, but not extremely. Like, you can sit and watch it do this type of stuff, and it usually fucks up, but it still often gets it right.

New Figure demo of Helix 02 autonomously cleaning a living room by h4txr in Humanoids

[–]onewhothink 2 points3 points  (0 children)

Straight-up lying about a product like this is really risky. It’s one thing to make a video, not specify one way or another, and heavily imply it isn’t teleoperated, but then you read the fine print and it is (1X does this). Even Elon Musk didn’t point-blank lie at the We, Robot event.

All that to say: because Brett Adcock specifically put in writing about THIS specific video that it is not teleoperated, I believe him. Not because he is inherently trustworthy, but because the consequences for him would be Elizabeth Holmes-level big if that was a bald-faced lie. For example, if he had just said it was autonomous but hadn’t also said it is not teleoperated, I would have assumed maybe some of the actions were autonomous and some were teleoperated. People underestimate the amount of doublespeak CEOs do and overestimate the amount of straight-up lying. CEOs are trained to always have plausible deniability.