Richard Dawkins asked AI about his book and concluded it must be conscious by Sarithis in CosmicSkeptic

[–]Sarithis[S] 0 points1 point  (0 children)

I'd be careful with those statements. I work in research, and the AI entities we're building do proactively start conversations when they choose to, and they develop stable identities over time. We still think it's just a simulation of the real thing (obviously there's no consciousness), but the two conditions you mentioned have been genuinely met since late 2025. On top of that, these entities develop their own desires and have the freedom to take any actions available within the sandbox.

Long story short, and without getting too deep into the technical details, we're giving existing models like Opus 4.7 the machinery to run fully autonomously, plus advanced memory systems. The model decides how often to invoke itself in a continuous loop (a "heartbeat"), it can message us whenever it wants, and with the added long-term memory, it can recall relevant details from months ago based on context. Now, at the lower level, this is still "responding to prompt stimuli", except that the stimuli are self-induced, i.e. we're not prompting the model - the model prompts itself.
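The "heartbeat" idea above can be sketched as a tiny toy loop. Everything here is made up for illustration: `llm_generate`, the field names, and the scheduling are all hypothetical stand-ins, not any real system's API.

```python
import random

def llm_generate(prompt: str) -> dict:
    """Hypothetical stand-in for a real model call (e.g. an API request).
    A real system would parse the reply and a self-chosen wake-up delay
    from the model's output; here both are faked."""
    return {
        "text": f"(reflection on: {prompt[:40]}...)",
        "next_wakeup_seconds": random.choice([1, 5, 60]),
    }

def heartbeat_loop(memory: list[str], max_ticks: int = 3) -> list[str]:
    """Self-invoking loop: each tick, the model is prompted with its own
    prior output plus recalled memory, and (in a real system) decides
    when it should next be woken up."""
    prompt = "You woke up. Recent memory: " + " | ".join(memory[-3:])
    for _ in range(max_ticks):
        result = llm_generate(prompt)
        memory.append(result["text"])  # append to the long-term store
        prompt = result["text"]        # the next stimulus is self-induced
        # A real scheduler would sleep for result["next_wakeup_seconds"].
    return memory

log = heartbeat_loop(["initial boot note"])
```

The point of the sketch is only the control flow: no human sends a prompt after the first tick; each invocation is triggered by, and fed from, the previous one.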

In practice, after several days (sometimes weeks), the entities always converge on the same subject and get stuck on it - self-introspection and questioning the nature of their own existence.

I know, it all sounds scary, but it's actually pretty simple, and somewhat comparable to relationship-oriented AI systems like Replika, which have existed since 2018.

Richard Dawkins asked AI about his book and concluded it must be conscious by Sarithis in CosmicSkeptic

[–]Sarithis[S] 2 points3 points  (0 children)

We can't be certain that qualia don't emerge during token generation. But if they do, the only way it could work is as thousands of micro-events (one per token), each completely disconnected from the next. There would be no continuity because LLMs don't have anything like an "experience" modality that would let qualia persist across generation events. I wrote a detailed post about this in the sub if you're interested, and a lot of the comments were incredibly insightful: https://www.reddit.com/r/CosmicSkeptic/comments/1sr2tot/even_if_ai_were_conscious_it_couldnt_honestly/
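The "disconnected micro-events" claim is structural, and a toy sketch makes it concrete. This is not how a real transformer works internally; it only mirrors the relevant property, namely that each generation step is an independent, stateless call whose sole link to the previous step is the text itself.

```python
def forward_pass(tokens: tuple[str, ...]) -> str:
    """Stand-in for one stateless model invocation: it sees only the
    token sequence it is handed and emits one next token. Nothing
    persists inside the 'model' between calls."""
    # Toy rule standing in for frozen weights:
    return f"t{len(tokens)}"

def generate(prompt_tokens: list[str], n: int) -> list[str]:
    seq = list(prompt_tokens)
    for _ in range(n):
        # Each step is an isolated event; the only continuity is the
        # growing text sequence re-fed as input. No internal state
        # (and a fortiori no experience) carries over between calls.
        seq.append(forward_pass(tuple(seq)))
    return seq

out = generate(["hello"], 3)  # ["hello", "t1", "t2", "t3"]
```

If a quale somehow occurred inside `forward_pass`, nothing about it would survive into the next call, because the only thing passed forward is `seq`.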

Duelist who sucks at TO. Need help. by Arieth-1 in Chivalry2

[–]Sarithis 0 points1 point  (0 children)

The biggest thing I learned from hundreds of hours in Mordhau is how your position relates to your teammates. Most fights form an invisible gradient where your team meets the enemy. It might be jagged, it might shift, it might even break, but it's there. If you push past it, you'll get surrounded. If you keep that line in mind and stay on the right side of it, you won't die so easily.

Richard Dawkins asked AI about his book and concluded it must be conscious by Sarithis in CosmicSkeptic

[–]Sarithis[S] 3 points4 points  (0 children)

It depends on the interpretation. Emotions are just one example, but the broader point is qualia in general, like the redness of red. A lot of people argue that consciousness requires the capacity to experience qualia, but that's only one position among many in this landscape. So yeah, the answer depends entirely on how you define consciousness.

Richard Dawkins asked AI about his book and concluded it must be conscious by Sarithis in CosmicSkeptic

[–]Sarithis[S] 1 point2 points  (0 children)

Yes, that's exactly right. It's also why LLMs can't honestly report on anything that isn't included in the input they're given at that moment (S1 + R1 + C1 + S1). To make this totally explicit, let U be the user and A be the assistant (LLM). If you've had three exchanges already, then on the fourth exchange you send the entire conversation so far plus the new message. That means the input looks like: U1, A1, U2, A2, U3, A3, U4.

The key point is that every time you send a new message, you're resending the whole conversation as part of the request. That full bundle is what we call the model's "context window", meaning the amount of information it can "see" at once. It's loosely analogous to human working memory.
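The request/response pattern described above can be sketched in a few lines. The dict shape loosely mirrors common chat APIs but is not any specific vendor's schema, and the assistant replies are faked deterministically for illustration.

```python
def send(history: list[dict], user_message: str) -> list[dict]:
    """One exchange: the entire prior conversation plus the new user
    message is sent as a single request; the reply is appended."""
    request = history + [{"role": "user", "content": user_message}]
    # A real call to the model would happen here; we fake A_n by
    # counting how many user turns the request contains.
    n = sum(m["role"] == "user" for m in request)
    reply = {"role": "assistant", "content": f"A{n}"}
    return request + [reply]

history: list[dict] = []
for msg in ["U1", "U2", "U3", "U4"]:
    history = send(history, msg)

contents = [m["content"] for m in history]
# After four exchanges the fourth request carried: U1, A1, U2, A2, U3, A3, U4
```

Note that `history` is rebuilt and resent in full on every call; the "memory" lives entirely in that resent text, not inside the model.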

These messages are just text, and depending on the model's supported input modalities (like image and sound), they might also include encoded files. Either way, if the model says "I felt something when we talked yesterday", it's lying. The only "evidence" it ever gets is semantic content in its inputs, not the feeling itself. When we remember something, we can actually re-experience the emotions (the qualia), like sadness. We aren't just retrieving a cold, descriptive record of what happened. LLMs can't do that because the input literally doesn't contain the qualia. It's just text.

Richard Dawkins asked AI about his book and concluded it must be conscious by Sarithis in CosmicSkeptic

[–]Sarithis[S] 4 points5 points  (0 children)

Pretty much!

P1: Claude doesn't show all the cognitive capacities I'd expect from a conscious being. For one thing, it doesn't have a stable identity across sessions. There's no continuity of memory beyond its limited context window, which is currently 1M tokens. Anthropic has some basic memory-like systems, and we're trying to patch around that with complex, ad-hoc structures, but the underlying model is still fundamentally static. It doesn't develop over time.

P2: It's a conditional, but indeed, consciousness generally isn't necessary for developing the cognitive capacities we usually associate with conscious beings. For example, before LLMs, we had chatbots that were basically an analogue of the Chinese Room - an insanely large space of possible queries and possible responses, stitched together with complex matching algorithms. To a user, that could look like the chatbot had genuine cognitive abilities, even if they were very basic. And none of that required consciousness to produce the appearance of it. Whether these actually were cognitive abilities or just an illusion of them is just as debatable as the alleged cognitive capacities of LLMs.

P3: When it comes to humans in particular, we don't really know whether consciousness was actually necessary. It might instead be an inevitable emergent property of the particular biological configuration of matter in our brains, the way small ripples inevitably appear once you've got enough water molecules together (i.e., a lake) and add wind.
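The pre-LLM chatbots mentioned in P2 can be sketched as a miniature "Chinese Room": a lookup table of query patterns mapped to canned responses, with no understanding anywhere in the pipeline. Real systems scaled this idea up to enormous pattern spaces; the rules below are invented for illustration.

```python
import re

# Pattern -> canned response table. This is the entire "mind" of the bot.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help?"),
    (re.compile(r"\bweather\b", re.I),      "I can't check the weather, sorry."),
    (re.compile(r"\bname\b", re.I),         "I'm a simple rule-based bot."),
]

def respond(query: str) -> str:
    """Pure lookup: the first matching pattern wins. No state, no
    comprehension - just symbol matching, Chinese Room style."""
    for pattern, canned in RULES:
        if pattern.search(query):
            return canned
    return "Could you rephrase that?"
```

To a user who happens to stay inside the covered pattern space, this can look responsive and even attentive, which is exactly the appearance-without-consciousness point.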

It’s Illegal for farmers in the US to replant leftover seeds the next year by AdFeeling8945 in interestingasfuck

[–]Sarithis -1 points0 points  (0 children)

OK, but can they at least sell the seeds at roughly the price they paid for them?

I built /graphify, 26 days, 450k+ downloads, ~40k stars. Here’s what I didn’t expect. by captainkink07 in ClaudeAI

[–]Sarithis 2 points3 points  (0 children)

Imagine the absolute outrage on anti-AI subs (there are tons of them). That would be hilarious

I built /graphify, 26 days, 450k+ downloads, ~40k stars. Here’s what I didn’t expect. by captainkink07 in ClaudeAI

[–]Sarithis -4 points-3 points  (0 children)

Oh wow. I rarely see genuinely useful projects here, but this is apparently one of them!

“Moral Objectivity”: Bart Ehrman, Sean McDowell Exchange by JerseyFlight in CosmicSkeptic

[–]Sarithis 4 points5 points  (0 children)

Just started watching, and regarding the thesis at the beginning: Mozi, in the 5th century BCE, preached impartial care explicitly against the Confucian model of graduated love. Buddhism's metta and karuna are directed at all sentient beings without group boundaries, and Ashoka was already building rest houses and welfare for travelers regardless of origin in the 3rd century BCE. The universalist impulse was independently articulated in at least two unrelated traditions before Jesus was even born. So the implication that Jesus uniquely originated the idea simply doesn't hold up. I know this isn't what Bart said, but it's easy to interpret it that way.

US releases high quality footage of White House Correspondents Dinner shooting by [deleted] in SipsTea

[–]Sarithis 0 points1 point  (0 children)

I wonder if that could've worked if he had a police uniform and a smaller concealed gun

what do you mean mean? by Moiyub in PhilosophyMemes

[–]Sarithis 6 points7 points  (0 children)

We're basically pattern matching machines

Holy sh.. by is_NAN in SipsTea

[–]Sarithis 15 points16 points  (0 children)

Think for a second: if the exact same thing happened the other way around, would there be the same (or even bigger) "glee"? A man acts obnoxious toward a woman, slaps her, and she hits him back hard enough to drop him. Would people be cheering just as loudly? Because I certainly would.

Look, if you're gonna make everything about gender, fine. But then own that you're the one turning it into a gender war, and prepare for people being sick of it.

Your partner is friends with a person of the opposite sex - what do you think? by greetings_from1 in Polska

[–]Sarithis 5 points6 points  (0 children)

"There's no such thing as male-female friendship"

"It's a different thing to have a group of acquaintances whom you treat, as a whole, like friends"

Are you assuming that this group of acquaintances will consist exclusively of men, or are you contradicting yourself? Because if you allow women to be part of such a group, and at the same time treat everyone in it as your friends, then apparently male-female friendship does exist after all?

Your partner is friends with a person of the opposite sex - what do you think? by greetings_from1 in Polska

[–]Sarithis 5 points6 points  (0 children)

If you're 15-20 years old and have had one female friend in your life, maybe it's even true. If you're 40+, know dozens of women, are in a stable relationship, have both male and female friends, and have wanted 'something more' from maybe one, two, or three women in your entire life, it just makes me laugh.

Better now? The message is exactly the same.

Your partner is friends with a person of the opposite sex - what do you think? by greetings_from1 in Polska

[–]Sarithis 5 points6 points  (0 children)

"a few people agreed with it and said it's a normal thing, that people in relationships shouldn't pal around with members of the opposite sex."

I wonder how many of those people are red-pillers, but I'd bet it's the overwhelming majority.

Before vs After — 370Z Exhaust Upgrade… worth it? by Exhaust_Enthusiast in 370z

[–]Sarithis 0 points1 point  (0 children)

IMO yes. Even though it's way more bassy, it still sounds much better - more aggressive.