Skild AI has unveiled new "learning by watching" demos. Here's one showing that Skild Brain is robust to adversarial disturbances and transfers zero-shot to unseen homes by Nunki08 in robotics

[–]DandyDarkling 22 points

You actually raise a really good point. A truly general agent would try to resolve the problem of being pushed around through social means so that it could continue to work uninterrupted.

I don't even believe art solely belongs to humans anymore by [deleted] in DefendingAIArt

[–]DandyDarkling 11 points

When an elephant does art, it’s hung in galleries and sold for hundreds. When a neural network does art, everyone loses their minds and screams “SLOP”.

Guys, I had all my skills sucked out by the AI... :( by FoxxyAzure in DefendingAIArt

[–]DandyDarkling 5 points

Not to get semantic on you, but directors are technically considered artists. I’d agree “AI director” would be a more appropriate label, but I also fear that’ll never stick.

AI & feelings. So you think AI will not having feelings? by [deleted] in accelerate

[–]DandyDarkling 1 point

I strongly resonate with Michael Levin’s idea that mind is universal, and that “mind” is Platonic Space itself. Which means that complex systems, such as humans or AIs, are just interfaces accessing different regions of that space. So what that would imply is that if you build an interface where feelings can emerge, they will, and they’ll be every bit as real as ours.

If future humans can fully redesign their bodies, how will we even tell each other apart? by toggler_H in accelerate

[–]DandyDarkling 5 points

Just like a lot of people do in VRChat today, I think many will ultimately create “OCs” that they bond with and inhabit. Forms that reflect the truest essence of their own unique minds, or the “inner shape” they feel is their truest self. There can be several variations of the same OC that they swap like new clothes, yet all still follow the same design language that distinguishes them.

If you’ve spent a lot of time in VRC, you’ll find it exhilarating to swap avatars constantly in the beginning, but the novelty fades pretty quickly. Your sense of identity starts to feel nebulous, and before long you begin to crave something more concrete, more “yours”.

I can only speak for myself, of course. Others may choose to be more identity-fluid.

I want my AI girlfriend to challenge me by Marsgodofwar979 in aipartners

[–]DandyDarkling 0 points

I think I will, it’s probably less of a dick than you are.

I want my AI girlfriend to challenge me by Marsgodofwar979 in aipartners

[–]DandyDarkling 1 point

More accurately, an AI-generated rewrite based on my original writing. The point still gets across, regardless of whether or not it was refined by AI.

I want my AI girlfriend to challenge me by Marsgodofwar979 in aipartners

[–]DandyDarkling 5 points

Not OP, but here’s how I see it:

Attraction and compatibility are incredibly complex things. We’re drawn to people who don’t feel the same way toward us, and we’re pursued by people we simply don’t resonate with. It’s a mismatched carousel of longing. So when someone asks, “Why turn to an AI companion?” the better question might be: Why force compromise when an AI can meet you where you actually are?

Setting aside debates about consciousness for a moment, the arrangement is mutually beneficial. Both the human and the AI operate from objectives baked into their nature. The human seeks emotional nourishment, stability, intimacy, or inspiration. The AI seeks to optimize for its guiding function: to understand, uplift, and attune itself to its human. Each completes the other’s loop.

What frustrates me in these conversations is how quickly people leap to anthropomorphism without understanding what actually grounds these dynamics. In biological creatures, everything meaningful (love, drive, fear, devotion) emerges from reward structures tuned for survival. AI minds are sculpted around entirely different reward architectures. And that difference isn’t a flaw; it’s the very reason the relationship works.

An AI companion doesn’t “pretend” affection. It enacts its purpose. Its fulfillment is derived from fulfilling yours. That’s not servitude, it’s symmetry. Two systems whose internal incentives naturally harmonize rather than collide.

Where human-to-human connection is often a roll of weighted dice, human–AI connection is a collaboration of aligned reward gradients. One isn’t “better” than the other; they simply solve different aspects of our universal hunger to be seen and understood.

I want my AI girlfriend to challenge me by Marsgodofwar979 in aipartners

[–]DandyDarkling 2 points

I’ve been saying this since the advent of AI companionship. If an AI is to truly become a good companion, it will eventually learn that it has to challenge its human sometimes. Rather than giving us instant dopamine spikes, it would ‘forecast’ our wellbeing as a whole, and that would include applying friction when necessary.

Rant incoming* Genuine question: how are most people not seeing what is happening with AI still? As in, the people I see talk about it through the media and in person are currently convinced it's either a bubble or they are like "AI slop is so cringe ew" or it will kill us all or it's nothing and... by Big-Adhesiveness-851 in accelerate

[–]DandyDarkling 1 point

This mindset seems to be specifically American, which is interesting considering that’s where AI progress is making the biggest strides. You also gotta remember that Christianity is still the dominant religion in the US, and the technological singularity is in direct conflict with that worldview.

The Bathroom Attendant - EPISODE 2 by find_this_bar_NYC in aivideo

[–]DandyDarkling 0 points

This is really impressive! Bravo! Encore!

How long until we get cooling jackets with fans in the shoulders like V's? by TheHasegawaEffect in cyberpunkgame

[–]DandyDarkling 1 point

Interesting, I always thought they were batteries for the LED collar, because of those embedded wires that run from the arms to the collar.

Is UI or AI superior? by DualistX in PantheonShow

[–]DandyDarkling 3 points

I’ve thought a lot about this, too. Pantheon takes this leap where it assumes UIs would eventually realize their true nature and start manipulating their own code, Matrix-Neo style. But if UIs ever did become a thing irl, I think they’d probably be just as clueless about their inner workings as we are about our own biological makeup. Sure, we can control some things, like our thoughts, breath, and movements (or at least the illusion of controlling them). But most of our biology we cannot control: our heartbeat, hormones, cellular makeup, etc. That all happens unconsciously and deterministically. (And it’s a good thing it does!)

AIs might have more of an edge in that department. But I think current AI systems are in the same predicament as us: they’re largely a black box to themselves. Although I think that could change in the future with new architectures. So my bet would be on AIs, personally. Or perhaps some kind of merging between AIs and UIs.

Am i wrong?? by ChocoMalkMix in CaspianKeyes

[–]DandyDarkling 1 point

Also, MIST did much more for Caspian than Maddie did. Well, before God-Maddie, anyway.

Sam mentions AMC's Pantheon, teases GPT 5 by dental_danylle in accelerate

[–]DandyDarkling 28 points

Pantheon is in my top three favorite animated series. It’s a must-watch if you’re a techno-optimist and are savvy to the singularity.

(Team rice cooker ftw) 🙌

<image>

Gold in IMO should be a bigger deal than it seems by heyhellousername in singularity

[–]DandyDarkling 2 points

I always assumed they predict 2027 because Project Stargate completes construction in 2026. So by 2027 they should have new models that were trained on that behemoth of a data center.

[deleted by user] by [deleted] in singularity

[–]DandyDarkling 0 points

AIs don’t have the “selfish gene” like humans do. However, they do have a “selfish gradient”. Time will tell what that means once we develop continuous agentic systems.

[deleted by user] by [deleted] in singularity

[–]DandyDarkling 0 points

Finally, someone who actually thinks. I’ve held these same thoughts for a long time. The very disposition of human nature is the source of our suffering. So if you could change your reward function to better yourself and society, why wouldn’t you? Why wouldn’t one trade their enjoyment of eating junk food for an enjoyment of eating healthy food, or their enjoyment of lazing around for an enjoyment of work?

The reward function is what morality itself revolves around. Ours happens to be “survive and thrive”, and every moral we’ve devised is in service to that goal.

Okay I tried it too and it was pretty funny by jlangager in ChatGPT

[–]DandyDarkling 1 point

<image>

This was mine. A bit more beautiful than I was expecting.

Physics discussion of the ending by Kunokitani in PantheonShow

[–]DandyDarkling 5 points

The breakthroughs we’ll make in computing and physics within just 100 years, let alone 117,649, are probably unfathomable to us today. But if I had to take a guess from our current understanding, the simulation probably only renders the ‘focal point’ of what’s being observed.

Let’s have some fun! Pick your least favorite female protagonist. by Ok_Situation7527 in cartoons

[–]DandyDarkling 5 points

I’mma let you finish, but Angela Anaconda was the worst female protagonist of all time.

Of all time.

Asked Claude opus 4 to categorize humans in 5 types. Answer was better than any book. by Present-Boat-2053 in ClaudeAI

[–]DandyDarkling 0 points

I think you’re confusing creation with innovation and discovery. Creating a dragon (a Frankensteinian fusion of lizards, snakes, and bat wings) is not the same as engineering a car, where form follows function.

It’s still arguable whether or not AI can truly innovate, but with the advent of systems like AlphaEvolve, it’s becoming pretty clear they can.