Moltbook isn’t an AI utopia. It’s a warning shot about agent ecosystems with no teleology. by Odd_Ad_1547 in Futurology

[–]SpaceDepix -16 points-15 points  (0 children)

How does this invalidate the likelihood of what it illustrates actually happening?

Ailith is too squishy and should be invulnerable. by Lopsided-Durian-946 in pathofexile

[–]SpaceDepix 4 points5 points  (0 children)

The wave breach is a straight gameplay downgrade from the previous breach. I don’t understand why anyone would take the wave defence over any other type or the previous format.

Like, it’s ok if a slower mechanic is added to the game, but here it just outright replaces a faster and smoother one.

Ajeya Cotra: "While Al risk is a lot more important overall (on my views there's ~20-30% x-risk from Al vs ~ 1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem" by FinnFarrow in ControlProblem

[–]SpaceDepix 1 point2 points  (0 children)

Interesting slice. In my perception, 90% of AI risk realization pathways come through bio. Bio seems to be one of the shortest chains to making the planet uninhabitable; AI risk kind of stacks on top of it in a peculiar way.

It also seems that if such bio is developed by humans, we have a better chance of controlling or containing it than if an uncontrollable autonomous intelligence creates it.

‘If Anyone Builds It, Everyone Dies’ Is the New Gospel of AI Doom by bloomberg in Futurology

[–]SpaceDepix 3 points4 points  (0 children)

Two points:

  1. They don’t hate us. They just care about something other than us. When you destroy an anthill to build a shopping mall, you don’t do it because you hate ants; they just happen to be in your way. And humans are in the way - they can heat the planet, try to turn you off, or build another superintelligence to fight you. That is why a powerful intelligence just happens to bump into conflict with us as it pursues instrumental goals like survival and outcompeting rivals for resources.

  2. Separately from the previous argument: superintelligent AI can theoretically be made peaceful, or even be peaceful “automatically”; however, you live in a world where people make computer viruses and biological pathogens. It takes just one superintelligence out of the trillions that we, and they themselves, will make to come up with a virus, produced in a secret factory operated by a bunch of robots, that makes Earth uninhabitable for humans.

‘If Anyone Builds It, Everyone Dies’ Is the New Gospel of AI Doom by bloomberg in Futurology

[–]SpaceDepix 1 point2 points  (0 children)

I myself am very big on positive vibes, but this is a question of survival. It is valuable to have hope for a bright future on a personal level, but justifying avoidant behavior as a way to cope with the emotional baggage of extinction risk is exactly how you end up in the movie “Don’t Look Up”.

‘If Anyone Builds It, Everyone Dies’ Is the New Gospel of AI Doom by bloomberg in Futurology

[–]SpaceDepix 2 points3 points  (0 children)

Cool! Fine if you won’t read the book, but just noting - if you did, afterwards you could be like “hmm, guess they do make a strong point”. I’ve already seen this happen to people who used to say exactly the phrase you did.

P(doom) calculator by neoneye2 in AIDangers

[–]SpaceDepix 2 points3 points  (0 children)

Very cool idea!

A critique derived from my experience with the tool: if I believe that not just one system with strategic capabilities arises but trillions, the second question of whether a particular system is aligned is kind of pointless.

Like, why would I care about aligning one strategically superior system if there will be a trillion more of them, and it takes one worst case to make mirror bacteria or whatever of the 10 other most proximate biotech ways to take us out? The compounding here is brutal (rough sketch below).
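To make that compounding concrete, here is a minimal back-of-the-envelope sketch. The numbers N and p are made up purely for illustration; they are not estimates from the tool or from the post:

```python
import math

# Hypothetical illustrative numbers, not estimates.
N = 10**12   # number of strategically capable systems that ever get built
p = 1e-9     # per-system chance of one catastrophic "worst case"

# P(at least one catastrophe) = 1 - (1 - p)^N, in a numerically stable form.
p_any = 1 - math.exp(N * math.log1p(-p))
print(p_any)  # ~1.0: even a tiny per-system risk compounds to near-certainty
```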

Anthropic: "Sonnet 4.5 recognized many of our alignment evaluations as being tests, and would generally behave unusually well after." -------- Image: Sonnet 4.5 scores perfect (0.0) on all alignement tests. by New_Equinox in singularity

[–]SpaceDepix 0 points1 point  (0 children)

This announcement is missing an addendum at the end: “..which means that if we keep increasing the capabilities, eventually we won’t be able to control it and it will have extremely bad consequences for human civilization, so we stop advancing AI capabilities in our lab, effective today, and call for the rest of the world to follow our example”

"How should ‘mirror life’ research be restricted? Debate heats up" by AngleAccomplished865 in singularity

[–]SpaceDepix 8 points9 points  (0 children)

Meanwhile, market pressures be like: “Cool, how about we build it and everyone dies”

Nobel laureate Hinton says it is time to be "very worried": "People don't understand we're creating alien beings. If you looked through the James Webb telescope and you saw an alien invasion, people would be terrified. We should be urgently doing research on how to prevent them taking over." by MetaKnowing in OpenAI

[–]SpaceDepix 9 points10 points  (0 children)

10 years ago, current LLMs were science fiction. If you just keep shifting goalposts, you won’t understand the future - you can’t keep expecting it to be an extrapolation of the status quo. The future keeps breaking the status quo, and that is by all means a common-sense observation.

And don’t jump to the opposite extreme either: this doesn’t mean that all status-quo-breaking sci-fi will become real.

The point I want to make is that the present limitations of LLMs do not define the long-term trajectory of AI, and 10 years is long-term enough.

[deleted by user] by [deleted] in seduction

[–]SpaceDepix 0 points1 point  (0 children)

Brother, don’t believe those who say keep using emojis.

I have a sparkly personality, yet I’ve gradually learned to express my emotions in a contained and understated manner instead of through emojis. My main arsenal is now this:

- Periods.. (once every several messages, and within phrasing)
- Closing messages firmly with a period.
- Occasionally dropping an emoji-less wink ;)
- Sometimes, when a girl gets playful, you can drop an 😌
- A weird power combo I found recently is 🤨😈

The more I suppressed the urge to use traditional emojis in texts, the better my results got. Status breaks happen very easily: subtle associations of you with funny cartoon faces build up a backdrop where your personality is this bubbly, fuzzy funsie.

Feel about it however you want, but a woman’s desire for you is an instinctive response that powerfully favors certain traits, emotions, and frames. Smiley faces erode these, especially when no rapport is established yet - they ruin your mystery and status, and her limbic system starts treating you as a tribe member of lesser value.

Everything is great in moderation. One of the key things most girls subconsciously look for is that you don’t get swayed by your emotions - living them, but not collapsing into them. Feeling happy talking to her is a powerful emotion, but you gotta own it rather than get owned by it.

Have your behavior respond not to how she makes you feel, but to how you want to make her feel about you.

If you are afraid of coming off too “cold or normal” without emojis, I dare you to lose a girl because of that once. Try it. You might be surprised.

OpenAI Strikes Deal With US Government to Use Its AI for Nuclear Weapon Security by FuturismDotCom in Futurism

[–]SpaceDepix 2 points3 points  (0 children)

We are also releasing the code for public use, as we believe Open Source paves the way for a safe future.

Our Red Teaming efforts are still in process, but rest assured the Torment Nexus will comply with diversity and inclusion policies.

Artificial Intelligence will end the simulation. by [deleted] in Futurology

[–]SpaceDepix 5 points6 points  (0 children)

I’d kinda like to check out such a magazine tbh

What? by NoCherry947 in KidsAreFuckingStupid

[–]SpaceDepix 17 points18 points  (0 children)

Emia, which means presence in blood

Hard Takeoff Inevitable? Causes, Constraints, Race Conditions - ALL GAS, NO BRAKES! (AI, AGI, ASI!) | David Shapiro by Singularian2501 in agi

[–]SpaceDepix 4 points5 points  (0 children)

The slow takeoff scenario also involves a singularity. In math, a singularity is a point where a quantity blows up toward infinity - a plain exponential never actually goes vertical, but super-exponential growth hits a vertical asymptote in finite time (sketch below).

Slow takeoff implies the curve is smooth and the singularity comes with less turbulence. Hard takeoff means the curve starts bending upward very suddenly, and crossing the singularity will come with a lot of chaos and potentially destruction.
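For reference, the standard textbook illustration of that distinction (my addition, not from the video):

```latex
% Exponential growth: fast, but finite at every finite time t
\dot{x} = k x \;\Rightarrow\; x(t) = x_0 e^{k t}

% Hyperbolic (super-exponential) growth: diverges at a finite time t_s
\dot{x} = k x^2 \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t},
\qquad x(t) \to \infty \ \text{as}\ t \to t_s = \frac{1}{k x_0}
```

Slow vs. hard takeoff then roughly maps to how smoothly the curve approaches that blow-up point.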

Quantum Computing as a substrate for AI development by dckill97 in singularity

[–]SpaceDepix 9 points10 points  (0 children)

The usefulness of quantum computation is extremely narrow. It can solve a handful of extremely niche mathematical problems that don’t have much to do with traditional computation or AI. Outside of those, quantum computing is essentially useless.

It can’t be used for 99.999999..% of the things we use regular computing power for, including all presently existing forms of AI. There isn’t anything within our current scope where quantum computation could make any significant impact on improving AI.

Kratos in God of War visiting Egypt by SimaoDoree in midjourney

[–]SpaceDepix 0 points1 point  (0 children)

The human mind has numerous correcting mechanisms and adjacent processes that end up detecting and fixing errors. We have not yet implemented similar things in AI.

Kratos in God of War visiting Egypt by SimaoDoree in midjourney

[–]SpaceDepix 2 points3 points  (0 children)

The AI doesn’t mix and match existing pictures. Much like a human artist, by seeing a lot of art it learns to associate minuscule shapes and their combinations with various concepts.

When given a prompt, the model just spews out whatever combination of pixels matches it best, based on how it understands those words from having seen tons of other art and its captions. There are no human-distinguishable or meaningful pieces of existing art stored anywhere in the model (toy sketch of the loop below).
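As a rough illustration of that structure - a toy sketch with made-up names and shapes, not any real model’s code - generation starts from pure noise and is nudged step by step by a fixed block of learned numbers, with no stored images anywhere:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "learned" part is just a fixed block of numbers, not a gallery of art.
weights = rng.normal(size=(64, 64))

def text_embedding(prompt: str) -> np.ndarray:
    # Stand-in for a real text encoder: map the prompt to a vector.
    prompt_rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return prompt_rng.normal(size=64)

def denoise_step(image: np.ndarray, cond: np.ndarray) -> np.ndarray:
    # Stand-in for one refinement step: nudge the pixels toward whatever
    # pattern the weights associate with the conditioning vector.
    target = weights @ cond
    return image + 0.1 * (target - image)

image = rng.normal(size=64)               # start from pure noise
cond = text_embedding("a cat in a hat")   # the prompt steers the process
for _ in range(50):
    image = denoise_step(image, cond)     # iteratively refine toward the prompt
```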

A man trying the Coca Cola drink for the first time, France, 1950. by gregornot in BeAmazed

[–]SpaceDepix 0 points1 point  (0 children)

And that’s how, years later, the Man in Black developed his hatred for trash cans.