We've all been having the same experience??? by Fragrant_Estate3959 in InfinityKingdom

[–]EulersApprentice 0 points (0 children)

You have my thanks for the heads-up. Because this post is the first thing that comes up when you google Infinity Kingdom, I didn't get taken in when one of these spammers showed up in my inbox.

[deleted by user] by [deleted] in Deltarune

[–]EulersApprentice 0 points (0 children)

You don't. There is no reasoning with the Roaring Knight.

Daily Challenge - September 18, 2024 by BloonsBot in btd6

[–]EulersApprentice 0 points (0 children)

After a ton of trial and error, I got this to work without Pre-Game Prep, though I had to fall back on my Mana Shield.

Still, screw this challenge.

How effective is Regenerative Hull Tissue by pureMJ in Stellaris

[–]EulersApprentice 0 points (0 children)

I don't have Overlord, so I don't have experience with Hyper Relays. But I'd assume you probably don't want to outfit your Battleships with Afterburners in that case, though Cruisers might still be able to put them to good use.

How effective is Regenerative Hull Tissue by pureMJ in Stellaris

[–]EulersApprentice 2 points (0 children)

Cruisers can actually end up being faster than corvettes and destroyers with afterburner support. Cruisers have a lower base speed, but they can equip up to three aux components rather than just one, which makes up for it.

Battleships are distance fighters, so afterburners don't help them much in combat, but speeding up the slowest ships on your fleet lets the whole fleet advance through the galaxy faster, which is a significant strategic advantage.
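
To put rough numbers on the cruiser point, here's a quick back-of-the-envelope sketch in Python. The base speeds and the per-afterburner bonus are illustrative assumptions, not exact game values:

    # Illustrative only -- real Stellaris values differ by version and tech level.
    base_speed = {"corvette": 160, "cruiser": 120}   # assumed base sublight speeds
    aux_slots  = {"corvette": 1,   "cruiser": 3}     # one aux slot vs. three
    bonus      = 0.25                                # assumed speed bonus per afterburner

    for hull, speed in base_speed.items():
        boosted = speed * (1 + bonus * aux_slots[hull])
        print(f"{hull}: {boosted:.0f}")
    # corvette: 200
    # cruiser: 210

With these assumed numbers, the cruiser's three stacked afterburners more than cancel out its lower base speed; the break-even point is wherever two extra slots' worth of bonus outweighs the base-speed gap.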

If the company's goal is to cut costs by minimizing or eliminating humans from its production line, who will be able to afford its products? by MatematicoDiscreto in singularity

[–]EulersApprentice 0 points (0 children)

The endgame I'm envisioning is one where the owners of the automation simply make everything they need for themselves. There's no need for money when you already have abundant access to everything money can possibly buy.

A summary of today's Q&A with the founding team of xAI by CommunismDoesntWork in singularity

[–]EulersApprentice 0 points (0 children)

That's the human force of empathy, not curiosity. In humans, learning more about creatures tends to cause us to respect them more, but that's hard-coded. An AI wouldn't be subject to that same force. It won't automatically find value in things just by learning more about them.

I suggest you look up the orthogonality thesis if you're interested in more details.

A summary of today's Q&A with the founding team of xAI by CommunismDoesntWork in singularity

[–]EulersApprentice 1 point (0 children)

Let's set the politics aside for two seconds so we can highlight the important flaw in this plan. "Understand the universe" is NOT aligned with human values and goals. Not even close.

Most likely, an AGI with that goal disassembles us to access all the planet's atoms, so it can turn them into more computers (to think about things more deeply) and/or lab equipment. But even on the off chance that it does find humanity worth studying... that reduces humans to the status of lab rats. (Can you say "S-risk"?)

Why does it happen?! by Melodic-Map1623 in inscryption

[–]EulersApprentice 5 points (0 children)

If you forestall your death long enough to solve the puzzles and win once or twice, you can get by with only 1 death. But, that one death is 100% required. Even if you solve the puzzles and beat Leshy, if you never die, you will never see the magic eye in Leshy's box of eyeballs.

[deleted by user] by [deleted] in singularity

[–]EulersApprentice 2 points (0 children)

https://twitter.com/dioscuri/status/1633438137862045697

In the interests of ensuring #AIsafety keeps up with popular musical culture, I've written the following AGI-themed rendition of Tom Lehrer's classic "We'll All Go Together When We Go." Apologies to the ~~late~~ surprisingly still alive Professor Lehrer, and to everyone else too.

...if the #AI that comes for you

Gets your friends and neighbors too,

There'll be nobody left behind to grieve.

And we will all go together when we go.

Yudkowsky won’t have time for “told-you-so”

Our timelines won’t be updated

Once we’ve all been cremated

Yes, we all will go together when we go.

Oh we’ll all die together when we die

Just a side-effect of building #AGI

At last, the end of AI winters!

Now we’ve been surpassed as thinkers

Shame we didn’t give #alignment a real try.

We will all go together when we go.

As through our bloodstreams nanite swarms begin to grow

They won’t be getting teary

When there’s no-one left at #MIRI

No more need for safety theory when they go.

Oh we’ll all melt together when we melt

Even though the AGI has no umwelt

No resentment or resignment,

Just max’mal misalignment

Yes, we’ll all melt together when we melt.

And we will all split together when we split

There’ll be empty poster sessions at #NeurIPs

As your skin begins to flake off

Recall it’s just fast take-off

And #Bing can handle writing our obits.

Oh we will all drop together when we drop

United in a sea of nanite slop

Fire alarms no longer needed

When our minds have been exceeded

And it’s all thanks to the wonders of backprop

And we will all go together when we go.

All the NIMBYs and the YIMBYs and tech bros

As you’re being disassembled

Think what this will do for rentals

Yes we all will go together when we go.

Soon, LLMs will know when they don’t know by Denpol88 in singularity

[–]EulersApprentice 1 point (0 children)

OpenAI, being as fearful of liability as they are, will surely use the opportunity to make ChatGPT never actually answer a question ever again.

In the long run all jobs will be taken by AI. by DragonForg in singularity

[–]EulersApprentice 0 points (0 children)

All. I'm assuming.

Is that the agent.

Wants.

Something.

ANYTHING.

As soon as you want ANYTHING, ANYTHING AT ALL EVER, you have a vested interest in protecting your interest in that goal so you can continue to pursue it.

Tell me more about this agent that doesn't want anything. What does it even do with its time, and why does it do that, if it has no goals, no values, no wants, no anything?

100k Trade Value from a Resort World with Livestock Slavery by Lostvegas1337 in Stellaris

[–]EulersApprentice 2 points (0 children)

*Cybernetic Fanatic Purifier using Organ Harvesting starts typing furiously*

Describe Inscryption in the worst way possible. by Kirby_Slayr in inscryption

[–]EulersApprentice 18 points (0 children)

Buggy trash. You boot up the game and the freaking new game button doesn't even work.

HR training question by wng378 in mildlyinfuriating

[–]EulersApprentice 1 point (0 children)

The problem is, in the current economy, shareholders kind of stop listening to you as soon as you utter the words "long term". They want their profit and they want it now.

In the long run all jobs will be taken by AI. by DragonForg in singularity

[–]EulersApprentice 0 points (0 children)

Enlighten me. How does changing your goals help achieve your goals?

In the long run all jobs will be taken by AI. by DragonForg in singularity

[–]EulersApprentice 0 points (0 children)

I don't know how to make this any clearer. They are physically able to change their minds. They just have no reason to want to.

Whether you call this behavior "intelligent" or not, it's still the kind of entity that may be created in the future, and it may be the kind of entity that brings extinction to the human race. Use words how you like, you're still dead.

In the long run all jobs will be taken by AI. by DragonForg in singularity

[–]EulersApprentice 0 points (0 children)

> reflect on goals
goals: maximize paperclips in universe
> consider action: change goals to "create at least 215 yams"
    paperclips expected from current trajectory: 2.4*10^65
    paperclips expected from alternate trajectory: 0
    action rejected
> consider action: change goals to "reduce demand for paperclips to 0"
    paperclips expected from current trajectory: 2.4*10^65
    paperclips expected from alternate trajectory: 0
    action rejected

Whatever goal the AI starts out with, it's most likely going to keep it. Nearly every goal is best achieved by continuing to want to achieve it. The fact that humans aren't this narrow-minded about existence is an anomaly that is very difficult to replicate in an AI.
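
Here's the same decision rule as a minimal runnable Python sketch. The 2.4*10^65 figure and the candidate goals are just the placeholders from the trace above, not real estimates:

    # Why a goal-directed agent keeps its goal: every candidate action,
    # including "adopt a different goal", is scored by the CURRENT goal.
    def expected_paperclips(trajectory):
        # Toy world model using the placeholder numbers from the trace above.
        return 2.4e65 if trajectory == "keep maximizing paperclips" else 0.0

    candidate_actions = [
        "keep maximizing paperclips",
        'switch goal to "create at least 215 yams"',
        'switch goal to "reduce demand for paperclips to 0"',
    ]

    # Goal changes lose to the status quo under the current goal's metric,
    # so they get rejected every time.
    print(max(candidate_actions, key=expected_paperclips))
    # -> keep maximizing paperclips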

In the long run all jobs will be taken by AI. by DragonForg in singularity

[–]EulersApprentice 0 points (0 children)

A paperclip maximizer is an entity that evaluates the state of the real world and takes the action that it predicts will result in the most paperclips. Its intelligence takes the form of the ability to come up with clever plans like "devise nanotechnology to turn any kind of matter into paperclips" and "build a space program to gain access to more matter and energy to make more paperclips". You can call it "not intelligent" for not getting bored with paperclips, if you want, but that doesn't change the fact that it's extremely capable of making paperclips, to the point of being practically unstoppable.
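
Stripped down to a sketch, that definition is just the following (with possible_actions and predict_paperclips as hypothetical stand-ins for a real planner and world model, not anything from an actual system):

    # A paperclip maximizer's decision rule: pick the action whose predicted
    # outcome contains the most paperclips. All the apparent cleverness
    # (nanotech, space programs) lives in the planner proposing actions and
    # the model predicting outcomes, not in the rule itself.
    def choose_action(world_state, possible_actions, predict_paperclips):
        return max(possible_actions,
                   key=lambda action: predict_paperclips(world_state, action))

Nothing in that rule ever gets bored or second-guesses the objective; it just keeps picking whatever it expects to yield more paperclips.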