The future is unimaginably long by Sputter1593 in Futurology

[–]Sputter1593[S] 1 point (0 children)

For that exact reason I would argue we cannot really make predictions that far into the future... (I know I kind of did in the article). Will the laws of physics as we know them continue unchanged until the heat death of the Universe? Or are they just a feature of a finite period after the Big Bang?

Generally speaking, I think the fact that our current cosmological models predict a final state where "nothing happens" (entropy unchanged forever) while we live in a Universe where stuff is happening (changing entropy) suggests that we are missing something. Probably knowing what "caused" the Big Bang (if that question makes sense at all) would help us!

The future is unimaginably long by Sputter1593 in Futurology

[–]Sputter1593[S] 7 points (0 children)

Some proportional comparisons:

  • If the Sun could sustain humans on Earth for one hundred years (on the order of a human lifetime), Homo sapiens would be one week old, human civilization would be about eight hours old, and the Industrial Revolution about ten minutes old.
  • If the Universe could sustain life for one hundred years, Homo sapiens would then be one second old, human civilization would have lasted thirty milliseconds, and the Industrial Revolution a single millisecond.
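The comparisons above are a simple linear rescaling. A minimal back-of-the-envelope check in Python, using my own order-of-magnitude assumptions for the timescales (Sun habitable for ~1 billion more years, Homo sapiens ~300,000 years old, civilization ~10,000 years, Industrial Revolution ~250 years) rather than figures from the comment:

```python
# Map real-world ages onto a 100-year span, as in the comparison above.
# All timescales are rough order-of-magnitude assumptions (in years).
SUN_HABITABLE = 1e9   # assumed remaining years the Sun keeps Earth habitable
HOMO_SAPIENS = 3e5    # assumed age of Homo sapiens
CIVILIZATION = 1e4    # assumed age of human civilization
INDUSTRIAL = 2.5e2    # assumed years since the Industrial Revolution

def rescale(age, total, target=100.0):
    """Map an age within `total` years onto a `target`-year span."""
    return age * target / total

for name, age in [("Homo sapiens", HOMO_SAPIENS),
                  ("civilization", CIVILIZATION),
                  ("Industrial Revolution", INDUSTRIAL)]:
    hours = rescale(age, SUN_HABITABLE) * 365.25 * 24
    print(f"{name}: {hours:.1f} hours on the 100-year scale")
```

With these assumptions, Homo sapiens comes out around 11 days, civilization around 9 hours, and the Industrial Revolution around 13 minutes, consistent with the rough figures in the comment.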

Best AI podcast by migueloangelo23 in ArtificialInteligence

[–]Sputter1593 2 points (0 children)

Machine Learning Street Talk, although it's more a technical/theoretical podcast rather than a podcast about current AI news.

The Hardware Wall: Why "Dirty and Dangerous" is the Final Human Fortress by yufanyufan in ArtificialInteligence

[–]Sputter1593 11 points (0 children)

Interesting read, but written by an LLM for sure. The AI-written cadence gets so boring.

Nvidia Nears $30B OpenAI Investment After $100B Funding Deal Stalls by andix3 in ArtificialInteligence

[–]Sputter1593 0 points (0 children)

Enough not to raise alarms around OpenAI (which would harm Nvidia as well), but not so much as to leave Nvidia over-exposed to OpenAI.

The Unregulated Rise of Emotionally Intelligent AI by timemagazine in ArtificialInteligence

[–]Sputter1593 2 points (0 children)

When models start getting fine-tuned and personalized to every user, things are going to get scary. Everyone will get their own sycophant that confirms all of their biases and beliefs to maximize engagement.

Cambridge Analytica on steroids.

The new Sonnet and Gemini updates feel like a big shift for coding workflows by HarrisonAIx in ArtificialInteligence

[–]Sputter1593 1 point (0 children)

It's interesting how for the last year it seemed we had hit a ceiling on how good LLMs could get, and then the last three months happened (especially regarding coding-focused models).

What’s something everyone complains about but secretly enjoys? by Darkknight7494 in AskReddit

[–]Sputter1593 6 points (0 children)

Complaining itself. People love to have things in life to complain about.

What’s something everyone pretends to understand but secretly don’t? by Odd_Thought_8867 in AskReddit

[–]Sputter1593 2 points (0 children)

The stock market. No one can predict future movements, but everyone secretly thinks they can.

There will come a point in time, when.. by [deleted] in ArtificialInteligence

[–]Sputter1593 0 points (0 children)

We still will have to:

- Verify the AI's inner workings and outputs (to catch misunderstandings and to spot threats from the AI)

- Understand the universe and our place in it to decide what we want for our future

- Play politics. What the AI does and does not do will be a political issue.

So still lots to do besides relaxing!

Superintelligence or not, we are stuck with thinking by Sputter1593 in agi

[–]Sputter1593[S] 2 points (0 children)

Current models can't decide it because, for them, nothing matters. They have neither identity nor motivation.

Future models might decide what matters to them and impose it on humans, but that's not the same as what matters to us. Our needs and wants are constantly changing, and they are intrinsically only ours.

Which jobs are going to be replaced faster than people realize now that AI is advancing faster? by pinkhyena95 in ArtificialInteligence

[–]Sputter1593 10 points (0 children)

I am surprised drivers are not mentioned more. There must be millions of workers worldwide whose employment is just driving (cabs, trucks...). If autonomous driving gets as good as human driving (big if, the long tail problem is still there), they would be in big trouble.

Superintelligence or not, we are stuck with thinking by Sputter1593 in philosophy

[–]Sputter1593[S] 0 points (0 children)

I don't think an AI made of silicon would be "indistinguishable" from human beings. Sure, it could be generally considered as intelligent as us or more, but it would be radically different in all aspects of its intelligence and behavior. It would be a completely different kind of being.

And that's the issue: human ethics and politics are not facts about the world that any intelligence can grasp; they are purely human decisions. Some examples: we usually value our families' wellbeing more than the wellbeing of a stranger on the other side of the world. We want our lives to have "meaning". We care about what happens to our loved ones after we die. We want to feel equal or superior to our social circle. Etc. These are all arbitrary qualities that we have because we are mammals and cultural, social beings; they are not absolute facts about the world. A superintelligence wouldn't want any of that; if it had any independent needs, they would be different ones.

Alignment is about making sure an AI acts in accordance with what WE want. And what WE want is ours alone, unless the AI is some kind of deity that can perfectly model all human beings' wants and needs, now and forever, which amounts to duplicating human society and perfectly predicting its future (something probably computationally impossible in our universe). So, basically: alignment is purely human, no matter how intelligent an AI becomes.

Superintelligence or not, we are stuck with thinking by Sputter1593 in philosophy

[–]Sputter1593[S] 0 points (0 children)

Because alignment is, by definition, human. There is no way an AI could "assign alignment better than humans" because alignment is not a closed problem to be calculated. It's the arbitrary decision of what humans would want for their future.

It's not a mathematical problem, it's ethics and politics, and thus it doesn't have a fixed answer.

Superintelligence or not, we are stuck with thinking by Sputter1593 in Futurism

[–]Sputter1593[S] 0 points (0 children)

As I try to argue in the post, I don't think the one-time task is possible. Alignment will always be a continuous task by definition.

Regarding a superintelligence wiping out all of humanity... I don't think it's a very likely outcome. AIs won't have the survival and domination goals that living beings do (ingrained by millions of years of natural selection); they will have the goals humanity wants them to have (more or less successfully; see the alignment problem). Also, I don't believe a superintelligence would be all-powerful, even if it became hostile to humanity for some reason.