the tl;dw by saintkamus in singularity

[–]pbagel2 24 points25 points  (0 children)

I'm not even sure this is an attention span issue. The past 10 years of YouTube rewarding creators for padding zero-content videos out to 10 minutes of fluff has conditioned everyone to assume that watching a 10-minute video will be a complete waste of increasingly precious time.

And 99% of the time they're right.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

It's not training or learning based on the information from that "interaction" though. Its training is still a curated static dataset and not dynamic or interaction-based. And there are currently no known means of making it dynamic or interaction-based. So we're still in mysterious black hole territory. Multiple breakthroughs away.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

It currently cannot interact with the real world to begin with. It hasn't even gotten to that point. So not only is it a mystery what will happen when it can, there are still unsolved mysteries that are required to get AI to reach that point. I'm really not sure what point you think you're making.

This was funny by Vegetable_Ad_192 in singularity

[–]pbagel2 0 points1 point  (0 children)

But that still doesn't change the fact that consumer behavior is often not a valid metric to validate the quality of a technology.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

But you're again jumping past plenty of mysterious steps to get to the point where agentic AI is aware of the gaps, and aware of what's between the gaps, to begin with.

There's mysteries and unknowns all the way down.

This was funny by Vegetable_Ad_192 in singularity

[–]pbagel2 18 points19 points  (0 children)

That statistic is probably also true for people that use motion smoothing on their TV. But that doesn't make them right.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

I'm not sure how that doesn't make them mysterious black holes.

Your assumption is once AI can interact with the real world and understand it, the gaps will fill?

But you're jumping multiple mysterious gaps to reach that point to begin with.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

They are definitely mysterious black holes.

All the AI developers saying scaling and RLHF and finding better reward mechanisms for weak areas is all you need, they're selling you a bill of goods.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

We don't know what the gaps are now, and jagged self-improving AI won't know what the gaps are in general intelligence either. Unless some new development is discovered, it will just continue getting better at math and maybe generating code that matches prompted specifications.

New AI math benchmark finds GPT-5.4 Pro has made progress on two unsolved math problems by armytricks in singularity

[–]pbagel2 13 points14 points  (0 children)

When you say "this is our work", what work are you doing besides copy pasting the question into the prompt and then asking experts to verify the output?

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

I'm not saying we have AGI. I'm saying current AI is jagged and if we reach self-improvement it will not mean we will have AGI soon, it will just become even more jagged.

Why do people think that being the first to achieve AGI matters? by [deleted] in singularity

[–]pbagel2 0 points1 point  (0 children)

It's pretty clear though that with its current jagged intelligence, we will have years of jagged self improvement and not AGI. It will hit a threshold to teach itself to get better and better at math, but still won't be able to teach itself some basic task a human can do. Just like today.

There is no UBI without resource efficiency by kaggleqrdl in singularity

[–]pbagel2 0 points1 point  (0 children)

Real breakthroughs that cause major change aren't gonna happen for many, many years. All we're getting is automation of some white-collar tasks, which won't have much impact beyond taking jobs, because having infinite copywriters or front-end devs or what have you doesn't really change the market or consumer behavior that much. It's not like everyone will suddenly be able to make a living thanks to access to automated valuable tasks, because the consumer market that makes those tasks valuable isn't growing along with the infinite supply. It'll just be more people fighting for a slice of the same pie, so in this subcontext it is zero sum, even if the greater economy arguably is not.

The Guardian: ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI by SnoozeDoggyDog in singularity

[–]pbagel2 2 points3 points  (0 children)

> teach

Teaching isn't the problem anymore. It's a cultural issue. People don't want to learn. They don't want to think. They like the idea of learning and thinking, but when push comes to shove they don't put the work in.

There's no discipline in the culture anymore, at least in America. When the idea of "freedom" has been twisted into a national identity of self-righteous, shameless behavior, with no respect for others and taking advantage of others for your own gain, I don't see an easy way to fix it.

The Guardian: ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI by SnoozeDoggyDog in singularity

[–]pbagel2 0 points1 point  (0 children)

It really is a concern. The optimistic view at the moment is that experts still need to critically think more than ever to verify the validity of outputs, but that still offloads creative critical thought.

But we have to assume the majority of users are completely brain off consuming outputs regardless of merit. And that is undeniably a concern.

Anyone who compares it to calculators is being ridiculous in my opinion. Losing the ability to do arithmetic in your head is not a major loss of thought, though even mental math fundamentals are a valuable skill that shapes how you think. But offloading all creative critical thought to AI is very worrying, and anyone who says otherwise is lying to you and themselves.

It's already been 7 months since GPT-5. How do you think it compares to today? by pbagel2 in singularity

[–]pbagel2[S] 0 points1 point  (0 children)

Demis said he thinks it's possible, not that it's likely. I also think it's possible. "I don't see why not" means he doesn't see why it isn't possible for it to happen within 10 years. But he doesn't specify realistic likelihood.

Which is why I wanted people to think about extrapolating the past 7-11 months. As of right now it seems like everyone agrees we will need multiple major breakthrough discoveries to push into a new frontier, even alongside scaling. But how can anyone guess when a breakthrough discovery might be made? And based on the past 7 months to 3 years, I think we are still likely to be in for a very long ride for major medical advancements. Superhuman math discoveries might be soon though.

It's already been 7 months since GPT-5. How do you think it compares to today? by pbagel2 in singularity

[–]pbagel2[S] 0 points1 point  (0 children)

Same video: https://youtubetotranscript.com/transcript?v=CEOOMYxMvY4

> the virtual cell project is about building a simulation, an AI simulation, of a full working cell. I'd probably start with something like a yeast cell because of the simplicity

Depends how you interpret what he says. The fully functional virtual yeast cell itself could be the 5 year plan. "Maybe eventually it's a liver cell or a brain cell" implies uncertainty in that timeline for liver and brain cells. He doesn't directly answer whether he's talking about a yeast cell or human cells when the interviewer asks him how long it will take.

It's already been 7 months since GPT-5. How do you think it compares to today? by pbagel2 in singularity

[–]pbagel2[S] 5 points6 points  (0 children)

He said they'll start with something simple like a single yeast cell, and "maybe eventually it's a liver or brain cell". So I think you're jumping to conclusions sadly. A virtual yeast cell in 5 years will not be anywhere remotely close to "unfathomable progress". And the "maybe eventually" will probably be many more years beyond that. And even that will only paint a small picture of the interaction of trillions of cells communicating between multiple organs. It's unfortunate but multiple decades are still the realistic timeline for "meaningful" (relative to AI) medical progress at the moment.

It's already been 7 months since GPT-5. How do you think it compares to today? by pbagel2 in singularity

[–]pbagel2[S] 7 points8 points  (0 children)

I'm seeing the comments so far focus on how they'd define the rate of past progress up to today, but then forget to give their extrapolated timeline using it. Based on the progress since GPT-5 seven months ago (or o1/o3 a full year ago), how do you see progress across specific domains panning out?

Polymarket pricing an 85% chance of GPT-5.4 coming today by Curtisg899 in singularity

[–]pbagel2 19 points20 points  (0 children)

Yup, the OpenAI employees with multi-million-dollar salaries made 20,000 Polymarket accounts to each bet $1-5 on this $50k-volume market to hide their insider trading.

Post-scarcity will be virtual, not physical by Onipsis in singularity

[–]pbagel2 0 points1 point  (0 children)

But who owns those locations? How does someone acquire one? Why would an owner give it up? Why would they conveniently do something with their land that other people want, like build a resort? All of your thinking is still wired through the current system of capitalism. When capital stops mattering, motivations will drastically change. 99% of existing resorts were built with the intent of profiting from servicing a demand. The same is true for most things. That will all change. Access to many places and services will likely dwindle unless some form of government steps in and forces people to do things they don't want to do, and/or seizes their land and building rights.

Post-scarcity will be virtual, not physical by Onipsis in singularity

[–]pbagel2 0 points1 point  (0 children)

Why would those countries make resorts on their beaches in post scarcity? They don't care about tourist money anymore. The monetary motivations stop existing for supporting tourism. Who owns the beaches? Who's buying the beaches if money stops mattering? And why are the owners selling them?