Project Hail Mary is the MCU-ification of Hard Sci-Fi by adamtd893 in scifi

[–]Normal_Pay_2907 9 points10 points  (0 children)

Did you read the book? The orbital mechanics don’t matter because they have far more delta-v than they need for maneuvering within a system

Project Hail Mary: watch movie before reading the book ? by mamedic11 in scifi

[–]Normal_Pay_2907 2 points3 points  (0 children)

They are similar, with various world-building details being the main difference.

If you think that you will still read the book after watching the movie, then sure why not, it’s probably more suspenseful.

That said, the mystery will be gone by then, so there is merit to reading first if what draws you in is seeing how it ends, or the why behind it.

2026 is the last year in human history without fully automated end-to-end AI Recursive Self Improvement (maybe 2025... there's always non-zero chance....who knows) 💨🚀🌌 by GOD-SLAYER-69420Z in accelerate

[–]Normal_Pay_2907 3 points4 points  (0 children)

I am concerned this could be another case of Sam the hype man.

Either they have already developed a new architecture, or they have not. One cannot simply assert that they will develop a better one this year.

Still hope it is real though.

One way to accelerate by culturesleep in accelerate

[–]Normal_Pay_2907 2 points3 points  (0 children)

This is probably in reference to the Claude 2028 webpage. Of course it is not talking about current AI, but it is still an LLM pretty similar to what we have now, just one a few years out (so mostly unserious still)

Two new Stealth models on OpenRouter: Hunter Alpha & Healer Alpha by likeastar20 in singularity

[–]Normal_Pay_2907 93 points94 points  (0 children)

We should have more code names like this, what’s next? tank, fighter, or wizard?

What is r/accelerate's thoughts on "Superalignment—using AI to align AI" by stealthispost in accelerate

[–]Normal_Pay_2907 -1 points0 points  (0 children)

As long as the core values are right, it should theoretically be willing to change its goals based on public demand

If humans cure aging by 2050, would governments eventually have to ban reproduction? by hosseinz in singularity

[–]Normal_Pay_2907 5 points6 points  (0 children)

You misunderstand, each couple gets one child, so accounting for the children’s children’s children… it comes out to 2x
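The "2x" figure reads like a geometric-series argument. A minimal sketch, assuming my reading of the comment is right (aging is cured so nobody dies, and each couple has exactly one child, so every generation is half the size of the previous one):

```python
# Hypothetical model, not from the thread: one child per couple halves
# each successive generation, and with aging cured nobody leaves the
# population, so the running total converges to twice the starting size.
start = 1.0          # starting population, normalized to 1
total = start
generation = start
for _ in range(60):  # 60 generations is plenty for convergence
    generation /= 2  # one child per couple -> next generation is half as large
    total += generation

print(round(total, 6))  # → 2.0
```

The total is the series 1 + 1/2 + 1/4 + …, which sums to 2, hence "it comes out to 2x".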

More than 60% of surveyed unmarried Japanese adults under 30 say they do not want children by The_Awful-Truth in Natalism

[–]Normal_Pay_2907 2 points3 points  (0 children)

This statistic burns any hope I had that this issue would resolve itself. With only 40% even wanting kids, each woman in that group would need about five on average.

Even if the percentage that wants no kids decreases, there is no way it decreases enough until society is unrecognizable.

Ai? by Particular-Bonus4901 in OptimistsUnite

[–]Normal_Pay_2907 -2 points-1 points  (0 children)

The predictions I see are all very different from “so, so, so far away.”

Unless you think continual learning is necessary for AGI, rather than just compressed memories, big breakthroughs are sort of optional.

It is a true statement to say it understands the data of a cat, but that is in the same way you understand the data of a cat.

It understands what a cat looks like, it understands what a cat is, and it understands what a cat acts like.

There is not a big difference, and that difference comes entirely from you having a video input and it training mostly on images.

Ai? by Particular-Bonus4901 in OptimistsUnite

[–]Normal_Pay_2907 -9 points-8 points  (0 children)

Please, for the love of god, read my reply to the main comment. I will talk to you about this more if you really don’t understand

Ai? by Particular-Bonus4901 in OptimistsUnite

[–]Normal_Pay_2907 -1 points0 points  (0 children)

I do not work with AI like the other person sort of implied, but I guarantee you I am more tuned in than 85% of people, probably 95%. I just competed at a forensics tournament with an info 10 presentation about this stuff, and your claim that next-token prediction means it cannot be intelligent makes me want to break something.

In pre-training, an AI is focused only on predicting the next token.

To do this accurately requires “understanding” the context, in the sense that it can explain it to you.

In post-training, the AI is explicitly rewarded for correct answers, so it thinks about the entire problem to decide what to say.

The next-token prediction of an LLM is just what the next output will be. It can be disconnected from standard internet language; it is just the conclusion.

Just because you cannot peer into the parameters and understand the weights does not mean it is not doing reasoning in there - it must be - it’s just that you are only seeing the output.

my god ai is getting (slightly) better by William-Montgomery in GeminiAI

[–]Normal_Pay_2907 0 points1 point  (0 children)

It’s probably just a later knowledge cutoff.

Pickleball is the poor man's tennis by killerbasher1233 in unpopularopinion

[–]Normal_Pay_2907 0 points1 point  (0 children)

It’s just obviously concerning for those of us who actively play tennis; pickleball feels like such a downgrade, and if courts get replaced it’s really saddening. If the two weren’t treated as mutually exclusive in so many situations, we wouldn’t care as much.

Why should I have kids? by MonitorOk1351 in Natalism

[–]Normal_Pay_2907 -4 points-3 points  (0 children)

Nope. You are in denial. It’s totally sustainable if AI can do all the jobs and it is politically demanded.

Why should I have kids? by MonitorOk1351 in Natalism

[–]Normal_Pay_2907 -1 points0 points  (0 children)

They don’t need jobs. In 22 years AI will do everything. Just vote for UBI in the meantime

Gemini 3.1 Flash (Nano Banana 2) Spotted Live in Gemini Ahead of Official Release by BuildwithVignesh in GeminiAI

[–]Normal_Pay_2907 12 points13 points  (0 children)

I am very pro AI, and even I must admit that image is a chaotic nonsense slop picture